ABCB1, CYP2B6, and CYP3A4 Genetic Polymorphisms do not Affect Methadone Maintenance Treatment in HCV-positive Patients

Abstract

The aim of this study was to determine the influence of ABCB1, CYP2B6, and CYP3A4 genetic polymorphisms on methadone metabolism in patients with hepatitis C virus (HCV) undergoing methadone maintenance treatment (MMT). The study included 35 participants undergoing MMT, who were divided into three groups: HCV-positive (N=12), HCV-negative (N=16), and HCV clinical remission (CR) (N=7). The concentrations of methadone and its main metabolite 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP) were determined with gas chromatography-mass spectrometry. The patients were genotyped for ABCB1 rs1045642, CYP2B6 rs3745274, CYP3A4 rs2242480, and CYP3A4 rs2740574 polymorphisms. Differences between single nucleotide polymorphism (SNP) genotypes and methadone-to-EDDP ratio were analysed with one-way ANOVA, which showed no significant differences between the genotypes (p=0.3772 for ABCB1 rs1045642, p=0.6909 for CYP2B6 rs3745274, and p=0.6533 for CYP3A4 rs2242480). None of the four analysed SNP genotypes correlated with the methadone-to-EDDP concentration ratio. The major influence on it in hepatitis C-positive patients turned out to be the stage of liver damage.

Methadone kinetics is also slightly affected by ABCB1 genetic polymorphisms (4). In a study of their effects on methadone maintenance treatment (MMT) in 60 opioid-dependent patients, Coller et al. (5) reported that ABCB1 genetic variability influenced daily methadone requirements. In addition, CYP gene variants may contribute to the risk of fatal methadone toxicity posed by high concentrations of unmetabolised methadone in plasma (6). The risk may be even greater in patients with liver damage caused by hepatitis C virus (HCV) infection, which is common in opioid dependency. Our earlier studies (7,8) have shown that chronic overdose and liver insufficiency (damage) increase the amount of unmetabolised methadone and its toxicity, but we did not consider the influence of the ABCB1, CYP2B6, and CYP3A4 genes on methadone metabolism as one of the numerous factors behind inter-individual variability in that respect. The aim of this study was therefore to address this gap and see whether ABCB1, CYP2B6, and CYP3A4 genetic polymorphisms affect methadone metabolism in patients on MMT.

MATERIALS AND METHODS

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the University Hospital Split, Croatia (No. 530-01/12-01/164). All patients gave written informed consent to participate in the study.

Participants

Our study participants were 35 adult men aged ≥21 years, who were enrolled in the MMT programme run by the Institute for Public Health of the Split-Dalmatia County, Croatia. To arrive at this number, we first assessed for eligibility 74 adult male heroin addicts in the programme according to the following inclusion criteria: male, Caucasian, at least 9 months on MMT with regular attendance, no HIV or HBV infection or co-infection, negative urine tests for heroin or other pharmacological substances that could interfere with methadone metabolism, and no liver cirrhosis or history of significant alcohol abuse. HCV-positive patients were receiving interferon therapy, which does not interfere with methadone metabolism (9). Thirty-nine candidates who did not meet these criteria were excluded.
The remaining 35 participants were divided into groups according to their HCV status: HCV-negative (HCV-) (N=12), HCV-positive (HCV+) (N=16), and those in clinical remission (HCV CR) (N=7). Their liver damage was assessed according to the fibrosis-4 (FIB-4) index as described elsewhere (10). They were receiving different recommended doses of oral methadone based on their clinical presentation, and these doses had not been modified for at least six months before the study (Table 1).

Methadone and EDDP determination

For this study, methadone and EDDP concentrations were tested only in urine during three regular check-ups 15 days apart. We took two urine samples per check-up: one immediately before and the other 90 min after oral methadone administration, which totalled six samples per participant. The samples were stored at 4 °C and analysed within 1-4 days as described in detail in our previous reports (8,11). Methadone and EDDP concentrations were used to calculate their ratio (Table 1).

DNA analysis

Blood samples for DNA analysis were collected at regular check-ups and stored in EDTA tubes. DNA was isolated with a commercial genomic DNA isolation kit (High Pure PCR Template Preparation Kit, Roche Diagnostics GmbH, Mannheim, Germany) according to the manufacturer's instructions. Extracted DNA was quantified using a Qubit 4 fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).

SNP genotyping

Using the TaqMan® SNP genotyping assay (Thermo Fisher Scientific) and an Applied Biosystems 7500 real-time polymerase chain reaction (RT-PCR) system (Applied Biosystems, Foster City, CA, USA), we genotyped for the following single-nucleotide polymorphisms (SNPs): ABCB1 rs1045642 (DME C_7586657_20), CYP2B6 rs3745274 (DME C_7817765_60), CYP3A4 rs2242480 (DME C_26201900_30), and CYP3A4 rs2740574 (DME C_1837671_50). RT-PCR and allelic discrimination analyses were performed according to the manufacturer's instructions in a 25-µL reaction volume. The temperature program for RT-PCR was 60 °C for 1 min and 95 °C for 10 min, followed by 50 cycles of 92 °C for 15 s and 60 °C for 90 s. SNP genotypes were determined using instrument software with the manual allele call option.

Statistical analysis

Statistics and differences between the samples and groups were tested using GraphPad Prism version 8.0.0 for Mac (GraphPad Software, San Diego, CA, USA). Hardy-Weinberg equilibrium, chi-square, and the p value were calculated with an on-line calculator (12).

RESULTS AND DISCUSSION

Methadone is converted to its inactive metabolites EDDP and EMDP by hepatic CYP450 enzymes. Its pharmacokinetics varies between individuals, not only because of differences in sex, age, body weight, and use of other drugs, but also because of different allelic frequencies of genetic polymorphisms (2). Some earlier studies have already shown that allelic variations in the ABCB1 rs1045642, CYP2B6 rs3745274, and CYP3A4 rs2242480 and rs2740574 SNPs may affect the distribution of methadone in patients undergoing MMT (6,13,14). Table 2 shows the genotypes of the four investigated SNP loci in DNA obtained from blood samples. Genotype frequencies were consistent with the Hardy-Weinberg equilibrium for all of the studied SNPs (p values ranged from 0.55 to 0.93). Minor allele frequencies (MAF) (Table 2) were consistent with those presented in a meta-analysis by Dennis et al. (15), in which they ranged between 24.5 and 50 % for ABCB1 rs1045642 and between 24 and 39 % for CYP2B6 rs3745274.
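The Hardy-Weinberg check performed here with an on-line calculator (12) amounts to a chi-square goodness-of-fit test of observed genotype counts against the counts expected from the estimated allele frequency. A minimal sketch in Python follows; the genotype counts are illustrative, not the study data:

```python
import numpy as np
from scipy import stats

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square test of observed genotype counts against Hardy-Weinberg
    expectations; ddof=1 because one allele frequency is estimated."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # major-allele frequency
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    return stats.chisquare([n_AA, n_Aa, n_aa], f_exp=expected, ddof=1)

chi2, p_value = hwe_chi_square(12, 17, 6)    # illustrative counts, n = 35
print(f"chi2={chi2:.3f}, p={p_value:.3f}")   # p > 0.05: consistent with HWE
```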
In contrast to the ABCB1 and CYP2B6 SNPs, no homozygous mutant genotypes were found in either of the analysed CYP3A4 SNPs (Figure 1). We observed a small difference in wild-type and mutant genotypes between the ABCB1 and CYP2B6 genes. Patients with the ABCB1 AA and CYP2B6 TT genotypes had a slower methadone metabolism. In both SNPs of the CYP3A4 gene, the wild-type genotype had a slower methadone metabolism than the heterozygous genotype. Although several studies have suggested an association between genetic variability and methadone metabolism, our results showed no significant difference in methadone-to-EDDP ratio between the SNP genotypes. For ABCB1 rs1045642 it was p=0.3772 (ANOVA F=1.005), for CYP2B6 rs3745274 p=0.6909 (F=0.374), and for CYP3A4 rs2242480 p=0.6533 (F=0.4313). Even within the groups divided by HCV status, this difference was not significant (p>0.05) (Figure 2). In the HCV+ group, the methadone-to-EDDP ratio was similar for all three ABCB1 genotypes and slightly higher in patients with the mutated CYP2B6 genotype.

The liver is the primary target organ of HCV infection and the main organ responsible for drug metabolism. Wu et al. (16) reported that HCV affected methadone metabolism in MMT patients. Our results seem to confirm this finding, as the severity of liver damage (median FIB-4 index) was highest in HCV+ patients, who also showed the slowest methadone metabolism (the highest methadone-to-EDDP ratio; Table 1). Figure 3 shows FIB-4 indices relative to the ABCB1 and CYP2B6 gene polymorphisms for all 35 patients. The median FIB-4 index for both examined genes was higher in patients with the mutated genotype. However, we found no statistically significant difference in FIB-4 index between the SNP genotypes. For ABCB1 rs1045642 it was p=0.4465 (ANOVA F=0.8269) and for CYP2B6 rs3745274 p=0.1551 (F=1.977).

In conclusion, our study found no significant influence of the investigated genetic polymorphisms on methadone metabolism in our patients. Similar findings were reported by Fonseca et al. (17), who found a negligible impact of the CYP3A5, CYP2B6, CYP2C9, CYP2C19, and ABCB1 genetic polymorphisms on S-methadone metabolite plasma concentrations. The only major influence on the methadone-to-EDDP ratio in HCV+ patients we found was the stage of liver damage. HCV- and HCV+ patients were taking similar methadone doses. The FIB-4 index and the methadone-to-EDDP ratio in HCV- patients were lower, regardless of the genotype. However, our conclusion should be interpreted with caution, as this study had a small sample size. Future research should therefore involve more MMT patients.
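The two quantitative steps behind these results, FIB-4 staging and the one-way ANOVA of methadone-to-EDDP ratios across genotypes, can be reproduced with standard tools. Below is a minimal sketch with illustrative values (not study data); since reference (10) is not reproduced here, the standard Sterling FIB-4 formula is an assumption:

```python
import math
from scipy import stats

def fib4_index(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # Standard FIB-4 formula (assumed): age x AST / (platelets x sqrt(ALT))
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

print(round(fib4_index(45, 60, 50, 150), 2))   # example patient, ~2.55

# One-way ANOVA of methadone-to-EDDP ratios across the three genotypes of
# one SNP (groups and values are illustrative)
ratios_wild = [6.2, 7.8, 5.9, 8.4]
ratios_hetero = [7.1, 6.5, 9.0, 7.7]
ratios_mutant = [8.8, 7.4, 6.9]
f_stat, p_value = stats.f_oneway(ratios_wild, ratios_hetero, ratios_mutant)
print(f"F={f_stat:.3f}, p={p_value:.4f}")
```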
Defining the region of interest of the knee for perioperative volumetric assessment with a portable 3D scanner in orthopedic and trauma surgery

Background The aim of this study was to characterize three regions of interest (ROI) around the knee with a portable 3D scanner (Artec Eva). Soft tissue topography assessment with an optimized, precise, and reproducible method may assist surgeons when managing soft tissue swelling in the post-traumatic setting.

Methods 12 healthy volunteers (24 legs, 7 women, 5 men) were included in this study. The cohort had a mean age of 27.1 years (SD±3), a mean weight of 70 kg (SD±13) and a mean height of 171 cm (SD±8.8). All scans were recorded by the same examiner in the same room and with the same scanner (Artec Eva). Three volume regions of interest (ROI) were defined: the distal femur (measured from the superior extent of the patella to 10 cm proximal), the knee joint (measured from the top of the patella to the tibial tuberosity) and the proximal tibia (tibial tuberosity to 10 cm distal).

Results The mean volume of the right leg was 3.901 l (I. distal femur: 1.63 l, II. knee joint: 1.33 l, III. proximal tibia: 1.10 l) and the mean volume of the left leg was 3.910 l (I. distal femur: 1.66 l, II. knee joint: 1.34 l, III. proximal tibia: 1.12 l). The volume difference between the right and left leg was 0.094 l (SD ± 0.083 l). The Wilcoxon signed-rank test showed no significant differences in the volumes between the right and left leg.

Conclusions This study demonstrates that portable 3D scanning could be an accurate and reliable tool for orthopedic and trauma surgeons. Based on the ROIs of this pilot study, further studies are needed to test the significance for clinical applications in patients with an injured knee.

Introduction

Soft tissue swelling and edema are frequently encountered when managing orthopedic pathology [1]. The knee is prone to swelling secondary to tendinitis, arthritis, as well as any type of trauma such as fractures or ligament injuries. Severe swelling secondary to proximal tibial or distal femoral fractures precludes immediate operative intervention with open reduction and internal fixation, and wrong timing of surgery is highly correlated with substantial soft tissue complications [2]. Therefore, a staged soft tissue management algorithm that begins with closed reduction and external fixation, followed by open reduction and internal fixation, is mandatory [2]. Tape and water displacement methods represent valid tools in the assessment of soft tissue swelling, yet they are difficult to standardize and are subject to operator variability [3]. A gold standard for perioperative swelling characterization does not exist, which introduces a substantial subjective component regarding the timing of management for high-energy injuries [2]. A portable three-dimensional (3D) scanner has been developed that measures the volume of a region of interest (ROI) [3,4]. Previous studies have demonstrated that 3D scanning can efficiently achieve objective and reproducible measurements that correlate well with previously established tape measurement and water displacement methods [4]. Its capabilities as they relate to perioperative traumatic soft tissue management have not yet been evaluated. The aim of this study was to employ a portable 3D scanner (Artec Eva) to define three commonly encountered regions of interest (ROI) around the knee.
Population

12 healthy volunteers (24 legs, 7 women, 5 men) were included in this study. The cohort had a mean age of 27.1 years (SD±3), a mean weight of 70 kg (SD±13) and a mean height of 171 cm (SD±8.8). Participants with documented injuries or any other functional disorders of the knee or ankle were excluded from the study. Each subject completed a standardized questionnaire (age, height, weight, gender, supporting vs. free leg), and informed consent was obtained prior to the procedure. The study was performed according to the guidelines provided by the Declaration of Helsinki and was approved by the university ethics committee (STUDY NUMBER 2019-475).

Study protocol and scanning procedure

All scans were recorded by the same examiner and took place in the same room and with the same scanner (Artec Eva). Before scanning, circumferences were indicated with a marker. The tibial tuberosity was established as the reference point, and circumferences were marked 20 cm proximal and 10 cm distal, subdivided into segments of 2.5 cm (12 volumes, V1-V12; Fig 1). Volunteers were seated with their fully extended legs placed on a rest table and were asked to keep a natural foot position (90-degree angle). The scanning procedure was then started, and the examiner moved the scanner around the volunteer (Fig 2) until the knee as well as the surrounding ROI were completely recorded. All volunteers were instructed not to move during the scans. The ideal distance for the best scan was determined by the distance adjustment indicator within the Artec Studio 13 software (Version 13, Artec Group, Luxembourg). Each scan took around 5.8 (SD ± 2) minutes.

Three volume regions of interest (ROI, Fig 3) were defined. To evaluate inter-observer reliability, all ROIs were determined by four different orthopaedic surgeons and compared with each other. The distal femur was defined from the circumference at the top of the patella to 10 cm proximal (Volume 1-Volume 4). The knee joint was defined from the top of the patella to the tibial tuberosity (Volume 5-Volume 8). The proximal tibia starts at the tibial tuberosity and ends 10 cm distal (Volume 9-Volume 12). The mean distance from the tibial tuberosity to the top of the patella in fully extended legs in the 12 cases was 11.4 cm (SD ± 1.8). The suprapatellar bursa is located proximal to the knee joint, between the prefemoral and suprapatellar fat pads. In most (~85%) people, the suprapatellar bursa communicates with the knee joint; in the remaining ~15% it stays separated by an embryonic septum [5,6]. We included the suprapatellar bursa in the distal femur ROI.

Statistical analysis

Statistical analysis was performed using GraphPad Prism (Version 8.1.2, GraphPad Software, San Diego, California, USA). Data were first tested for normality using the D'Agostino-Pearson normality test. Data that were not normally distributed were further tested using the Wilcoxon signed-rank test to compare the volume differences between the left and right leg. P values ≤ 0.05 were considered significant. An a priori power analysis was performed (G*Power Version 3.0.10, Franz Faul, University of Kiel, Germany). This resulted in a sample size of 10 for a power of 80% with a p value of 0.05 determining significance.
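The statistical pipeline just described (normality check, then a paired non-parametric comparison) maps directly onto scipy: scipy.stats.normaltest implements the D'Agostino-Pearson omnibus test and scipy.stats.wilcoxon the paired signed-rank test. A minimal sketch with illustrative volumes (not study data):

```python
import numpy as np
from scipy import stats

# Paired overall leg volumes in litres for 12 volunteers (illustrative)
right_leg = np.array([3.9, 4.1, 3.7, 4.4, 3.6, 4.0, 3.8, 4.2, 3.9, 4.3, 3.7, 4.1])
left_leg  = np.array([3.8, 4.2, 3.7, 4.3, 3.7, 4.0, 3.9, 4.1, 3.9, 4.2, 3.8, 4.0])

# D'Agostino-Pearson omnibus normality test on the paired differences
k2, p_norm = stats.normaltest(right_leg - left_leg)

# Non-parametric paired comparison of right vs. left leg volumes
w_stat, p_value = stats.wilcoxon(right_leg, left_leg)
print(f"normality p={p_norm:.3f}, Wilcoxon p={p_value:.3f}")
```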
Results

The cohort consisted of 7 women and 5 men (mean age 27.1 years, range 23 to 34 years). Leg volumes (Volume 1-Volume 12) ranged from 2.77 l to 6.11 l. Each 3D measurement required around 5.8 ± 2 minutes. The reproducibility of the three ROIs across the four raters was high (intraclass correlation coefficient (ICC) = 0.90, 95% confidence interval: 0.87-0.93).

Comparison left vs right leg

Overall leg volume and the volumes of the ROIs (I. distal femur, II. knee joint, III. proximal tibia) were compared between the right and left leg (Fig 4). The Wilcoxon signed-rank test showed no significant differences in the volumes between the right and left leg (overall leg volume: p = 0.79, I. distal femur: p = 0.34, II. knee joint: p = 0.62, III. proximal tibia: p = 0.38, Fig 3). No significant volume differences between the supporting and free leg were found (p>0.05).

Discussion

Objective assessment of soft tissue and swelling is a substantial challenge for orthopedic and trauma surgeons. Incorrect characterization may affect the timing of surgery and is highly correlated with soft tissue complications. No reliable objective measurement method exists for determining perioperative swelling of the limbs. Although certain methods have been described, including bioelectrical impedance, tape measurement and water displacement methods [3,7-11], they are all difficult to perform on a trauma patient. In particular, water displacement and tape measurement, albeit reliable [4,12,13], cannot be used with open wounds and are both time-consuming and cumbersome. Moreover, the water displacement method provides no information about the shape of the injured extremity [4,7,14]. For all of these reasons, perioperative assessment of soft tissue swelling is still performed in a subjective manner and varies among surgeons.

An ideal method for volume assessment of the limbs of injured patients should be valid, objective, reliable, non-invasive, fast and preferably free of radiation. Especially in trauma surgery, preoperative volume comparison with the contralateral healthy limb is crucial. It is believed that a novel 3D scanner may offer substantial advantages and compete with existing methods [3,4]. Koban et al. tested the validity of the portable Artec Eva 3D scanner for medical purposes and showed a significant correlation with the water displacement method [3,7]. Seminati et al. analyzed the mean percentage error of the Artec Eva scanner in comparison with the water displacement method and identified a mean error of only 1.4% [15]. However, there is a lack of information concerning the volume variability between contralateral limbs in healthy subjects. Therefore, evidence-based assessment of the volume of an injured knee in comparison with the contralateral healthy side is not possible. In contrast to the water displacement method, portable 3D scanners are capable of analyzing a specific region of interest (ROI) and detecting differences from the unharmed contralateral limb in this ROI. This is particularly helpful for staged soft tissue management in severely injured patients (e.g., distal femur fractures, proximal tibia fractures or intraarticular knee injuries). To assess soft tissues for typical injuries in orthopedic and trauma surgery, it was mandatory to define three typical ROIs around the knee (I. distal femur, II. knee joint, III. proximal tibia).
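The inter-rater reliability reported above (ICC = 0.90 across four raters) can be computed from a long-format table of per-rater ROI volumes; the pingouin package provides this directly. A minimal sketch with random illustrative data and hypothetical column names:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# One ROI volume per (subject, rater) pair; values are illustrative
df = pd.DataFrame({
    "subject": [s for s in range(1, 13) for _ in range(4)],
    "rater":   ["R1", "R2", "R3", "R4"] * 12,
    "volume":  np.random.default_rng(0).normal(1.3, 0.05, 48),
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater",
                         ratings="volume")
print(icc[["Type", "ICC", "CI95%"]])  # report the ICC variant matching the design
```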
In accordance with previous studies, landmarks for the ROIs with the highest inter- and intraobserver reliability were chosen [16]. Our results showed no significant differences in the overall volume or in any of the three ROIs between the right and left leg. Accordingly, the selected ROIs seem to be valid for assessing injured limbs and comparing them with the healthy contralateral side. In the present study, scanning time for both knees was approximately 5.8 (SD ± 2) minutes and not significantly longer than for other medical purposes such as the assessment of lymphedema in previous studies [4]. Therefore, our data are in line with previous studies and suggest that soft tissue assessment around the knee with the portable Artec Eva 3D scanner could be faster than conventional tape measurement and water displacement measurement [4].

This study has several limitations. One limitation is the small sample size of 12 healthy participants. The portable Artec 3D scanner has a resolution of 0.1 mm, and previous studies with a similar sample size have shown a mean percentage error of 1.4% when compared with other methods [15]. Another limitation is that only one specific camera system was used for volumetric assessment. The Artec Eva 3D scanner has been previously used for medical purposes and showed a significant correlation with the gold-standard volumetric assessment [3,7,15]. A limitation of this technology is that scanning of a patient in a splint is not possible. Patients with an external fixator can be scanned, as the Artec software offers the ability to subtract the external fixator after the scanning process has been completed. Finally, only the leg volume of healthy volunteers was measured. The aim of this pilot study was to determine ROIs of the knee that can be used for most types of injuries. Our results will therefore be used in future studies to evaluate injured patients and therapeutic strategies over time.

Conclusion

This study demonstrates that portable 3D scanning may be an accurate and reliable tool for orthopedic and trauma surgeons. Improved soft tissue management may improve the outcomes of severely injured knees. Subsequent studies that include injured patients are required to validate the ROIs.
Active Flow Control for Bluff Body Drag Reduction Using Reinforcement Learning with Partial Measurements

Active flow control for drag reduction with reinforcement learning (RL) is performed in the wake of a 2D square bluff body at laminar regimes with vortex shedding. Controllers parameterised by neural networks are trained to drive two blowing and suction jets that manipulate the unsteady flow. RL with full observability (sensors in the wake) successfully discovers a control policy which reduces the drag by suppressing the vortex shedding in the wake. However, a non-negligible performance degradation (~50% less drag reduction) is observed when the controller is trained with partial measurements (sensors on the body). To mitigate this effect, we propose an energy-efficient, dynamic, maximum entropy RL control scheme. First, an energy-efficiency-based reward function is proposed to optimise the energy consumption of the controller while maximising drag reduction. Second, the controller is trained with an augmented state consisting of both current and past measurements and actions, which can be formulated as a nonlinear autoregressive exogenous model, to alleviate the partial observability problem. Third, maximum entropy RL algorithms (Soft Actor Critic and Truncated Quantile Critics), which promote exploration and exploitation in a sample-efficient way, are used and discover near-optimal policies in the challenging case of partial measurements. Stabilisation of the vortex shedding is achieved in the near wake using only surface pressure measurements on the rear of the body, resulting in similar drag reduction as in the case with wake sensors. The proposed approach opens new avenues for dynamic flow control using partial measurements for realistic configurations.

Introduction

Up to 50% of total road vehicle energy consumption is due to aerodynamic drag (Sudin et al. 2014). In order to improve vehicle aerodynamics, flow control approaches have been applied targeting the wake pressure drag, which is the dominant source of drag. Passive flow control has been applied (Choi et al. 2014) through geometry/surface modifications, e.g., boat tails (Lanser et al. 1991) and vortex generators (Lin 2002). However, passive control designs do not adapt to environmental changes (disturbances, operating regimes), leading to sub-optimal performance under variable operating conditions. Active open-loop techniques, where pre-determined signals drive actuators, are typically energy inefficient since they target mean flow modifications. Actuators typically employed are synthetic jets (Glezer & Amitay 2002), movable flaps (Beaudoin et al. 2006; Brackston et al. 2016) and plasma actuators (Corke et al. 2010), among others. Since the flow behind vehicles is unsteady and subject to environmental disturbances and uncertainty, active feedback control is required to achieve optimal performance. However, two major challenges arise in feedback control design, which we aim to tackle in this study: (i) the flow dynamics are governed by the infinite-dimensional, nonlinear and non-local Navier-Stokes equations (Brunton & Noack 2015), and (ii) the dynamics are partially observable in realistic applications due to sensor limitations. This study aims to tackle these challenges, focusing in particular on the potential of model-free control for a partially observable laminar flow characterised by bluff body vortex shedding, as a preliminary step towards more complex flows and applications.
Model-based active flow control

Model-based feedback control design requires a tractable model for the dynamics of the flow, usually obtained by data-driven or operator-driven techniques. Such methods have been applied successfully to control benchmark two-dimensional (2D) bluff body wakes, obtaining improved aerodynamic performance, e.g. vortex shedding suppression and drag reduction. For example, Gerhard et al. (2003) controlled the circular cylinder wake at low Reynolds numbers based on a low-dimensional model obtained from the Galerkin projection of Karhunen-Loève modes on the governing Navier-Stokes equations. Protas (2004) applied Linear Quadratic Gaussian control to stabilise vortex shedding based on a Föppl point vortex model. Illingworth (2016) applied the Eigensystem Realization Algorithm as a system identification technique to obtain a reduced-order model of the flow and used robust control methods to obtain feedback control laws. Jin et al. (2020) employed resolvent analysis to obtain a low-order input-output model from the Navier-Stokes equations, based on which feedback control was applied to suppress vortex shedding.

Model-based flow control has also been applied at high Reynolds numbers to control dominant coherent structures (persisting spatio-temporal symmetry breaking modes) which contribute to drag, including unsteady vortex shedding (Pastoor et al. 2008; Dahan et al. 2012; Dalla Longa et al. 2017; Brackston et al. 2018) and steady spatial symmetry breaking modes (Li et al. 2016; Brackston et al. 2016). For inhomogeneous flows in all three spatial dimensions, low-order models typically fail to capture the intractable and complex turbulent dynamics, leading inevitably to sub-optimal control performance when used in control synthesis.

Model-free active flow control by reinforcement learning

Model-free data-driven control methods bypass the above limitations by using input/output data from the dynamical system (environment) to learn the optimal control law (policy) directly, without exploiting information from a mathematical model of the underlying process (Hou & Xu 2009).

Model-free reinforcement learning (RL) has been successfully used for controlling complex systems for which obtaining accurate and tractable models can be challenging. RL learns a control policy based on observed states and generates control actions which maximise a reward by exploring and exploiting state-action pairs. The system dynamics governing the evolution of the states for a specific action (environment) are assumed to be a Markov Decision Process (MDP). The policy is parameterised by artificial neural networks as a universal function approximator that can be optimised towards an arbitrary control function with any order of complexity. RL with neural networks can also be interpreted as parameterised dynamic programming with the feature of universal function approximation (Bertsekas 2019). Therefore, RL requires only input-output data from complex systems in order to discover control policies using model-free optimisation.

RL can effectively learn to control complex systems in various types of tasks, such as robotics (Kober et al. 2013) and autonomous driving (Kiran et al. 2021). In the context of chaotic dynamics related to fluid mechanics, Bucci et al. (2019) and Zeng & Graham (2021) applied RL to control the chaotic Kuramoto-Sivashinsky system.
In the context of flow control for drag reduction, Rabault et al. (2019) and Rabault & Kuhnle (2019) used RL control for the first time in 2D bluff body simulations at a laminar regime. The RL algorithm discovered a policy that, using pressure sensors in the wake and near the body, drives blowing and suction actuators on the circular cylinder to decrease the mean drag and wake unsteadiness. Tang et al. (2020) trained RL-controlled synthetic jets in the flow past a 2D cylinder at several Reynolds numbers (100, 200, 300, 400) and achieved drag reduction over a range of Reynolds numbers from 60 to 400, showing the generalisation ability of RL active flow control. Paris et al. (2021) applied the "S-PPO-CMA" RL algorithm to control the wake behind a 2D cylinder and optimise the sensor locations in the near wake. Li & Zhang (2022) augmented and guided RL with global linear stability and sensitivity analyses in order to control the confined cylinder wake. They showed that if the sensors cover the wavemaker region, the RL is robust and successfully stabilises the vortex shedding. Paris et al. (2023) proposed an RL methodology to optimise actuator placement in a laminar 2D flow around an airfoil, addressing the trade-off between performance and the number of actuators. Xu & Zhang (2023) used RL to suppress instabilities both in the Kuramoto-Sivashinsky system and in 2D boundary layers, showing the effectiveness and robustness of RL control. Pino et al. (2023) compared RL and genetic programming algorithms to global optimisation techniques for various cases, including the viscous Burgers' equation and vortex shedding behind a 2D cylinder. Chen et al. (2023) applied RL to the flow control of vortex-induced vibration of a 2D square bluff body with various actuator layouts. The vibration and drag of the body were both reduced and mitigated effectively by RL policies.

Recently, RL has been used to control complex fluid systems, such as flows in turbulent regimes, in both simulations and experiments, addressing the potential of RL flow control in realistic applications. Fan et al. (2020) extended RL flow control to a turbulent regime in experiments at a Reynolds number of O(10^5), achieving effective drag reduction by controlling the rotation speed of two cylinders downstream of a bluff body. RL successfully discovered the globally optimal open-loop control strategy that had previously been found from a laborious non-automated, systematic grid search. The experimental results were further verified by high-fidelity numerical simulations. Ren et al. (2021) examined RL-controlled synthetic jets in a weakly turbulent regime, demonstrating effective control at a Reynolds number of 1000. This flow control problem of drag reduction of a 2D cylinder flow using synthetic jets was extended to a Reynolds number of 2000 by Varela et al. (2022). In their work, RL discovered a strategy of separation delay via high-frequency perturbations to achieve drag reduction. Sonoda et al. (2023) and Guastoni et al. (2023) applied RL control in numerical simulations of turbulent channel flow and showed that RL control can outperform opposition control in this complex flow control task.

RL techniques have also been applied to various flow control problems with different geometries, such as flow past a 2D cylinder (Rabault et al. 2019), vortex-induced vibration of a 2D square bluff body (Chen et al. 2023), and a 2D boundary layer (Xu & Zhang 2023).
However, model-free RL control techniques also have several drawbacks compared to model-based control. For example, it is usually challenging to tune the various RL hyperparameters. Also, model-free RL typically requires large amounts of training data through interactions with the environment, which makes RL expensive and infeasible for certain applications. Further information about RL and its applications in fluid mechanics can be found in the reviews of Garnier et al. (2021) and Vignon et al. (2023).

Maximum entropy reinforcement learning

In RL algorithms, two major branches have been developed: "on-policy" learning and "off-policy" learning. RL algorithms can also be classified into value-based, policy-based, and actor-critic methods (Sutton & Barto 2018). The actor-critic architecture combines advantages from both value-based and policy-based methods, so the state-of-the-art algorithms mainly use the actor-critic architecture.

The state-of-the-art on-policy algorithms include Trust Region Policy Optimization (TRPO, Schulman et al. (2015)), Asynchronous Advantage Actor-Critic (A3C, Mnih et al. (2016)), and Proximal Policy Optimization (PPO, Schulman et al. (2017)). On-policy algorithms require fewer computational resources than off-policy algorithms, but they are demanding in terms of available data (interactions with the environment). They use the same policy to obtain experience in the environment and update it with the policy gradient, which introduces highly self-correlated experience that may restrict convergence to a local minimum and limit exploration. As the amount of data needed for training grows with the complexity of the application, on-policy algorithms usually require a long training time for collecting data and converging.

By contrast, off-policy algorithms usually have both behaviour and target policies to facilitate exploration while retaining exploitation. The behaviour policy usually employs stochastic behaviour to interact with an environment and collect experience, which is used to update the target policy. Many off-policy algorithms have emerged in the past decade, such as Deterministic Policy Gradient (DPG, Silver et al. (2018a,b)) and Truncated Quantile Critics (TQC, Kuznetsov et al. (2020)). Due to the behaviour-target framework, off-policy algorithms are able to exploit past information from a replay buffer to further increase sample efficiency. This "experience replay" suits a value-function-based method (Mnih et al. 2015) instead of calculating the policy gradient directly. Therefore, most of the off-policy algorithms implement an actor-critic architecture, e.g. SAC.

One of the challenges of off-policy algorithms is brittleness in terms of convergence. Sutton et al. (2008, 2009) tackled the instability issue of off-policy learning with linear approximations. They used a Bellman-error-based cost function together with stochastic gradient descent (SGD) to ensure the convergence of learning. Maei et al. (2009) further extended this method to nonlinear function approximation using a modified temporal difference algorithm. However, some algorithms still experience brittleness when improper hyperparameters are used. Adapting these algorithms for control in various environments is sometimes challenging, as the learning stability is sensitive to their hyperparameters, e.g. for DDPG (Duan et al. 2016; Henderson et al. 2018).
To increase sample efficiency and learning stability, off-policy algorithms were developed within a maximum entropy framework (Ziebart et al. 2008; Haarnoja et al. 2017), known as "maximum entropy reinforcement learning". Maximum entropy RL solves an optimisation problem by maximising the cumulative reward augmented with an entropy term. In this context, the concept of entropy was first introduced by Shannon (1948) in information theory. The entropy quantifies the uncertainty of a data source, which is extended to the uncertainty of the outputs of stochastic neural networks in the RL framework. During the training phase, maximum entropy RL maximises reward and entropy simultaneously to improve control robustness (Ziebart 2010) and increase exploration via diverse behaviours (Haarnoja et al. 2017). Further details about maximum entropy RL and the two particular algorithms used in the present work (SAC and TQC) are introduced in §2.2.

Partial measurements and POMDP

In most RL flow control applications, RL controllers have been assumed to have full-state information (the term "state" is used in the context of control theory) or a sensor layout without any limitations on the sensor locations. In this study, this is denoted as "full measurement" (FM), when measurements contain full-state information. In practical applications, measurements are typically obtained on the surface of the body (e.g. pressure taps), and only partial-state information is available due to the missing downstream evolution of the system dynamics. This is denoted as "partial measurement" (PM), comparatively. PM can lead to control performance degradation compared to FM because the sensors are restricted from observing enough information from the flowfield. In the control of vortex shedding, full stabilisation can be achieved by placing sensors within the wavemaker region of bluff bodies, which is located approximately at the end of the recirculation region. In this case, full-state information regarding the vortex shedding is available to the sensors. Placing sensors far from the recirculation region, for example on the rear surface of the bluff body (denoted as PM in this work), introduces a convection delay in sensing the vortex shedding and a partial observation of the state of the system.

In the language of RL, control with PM can be described as a Partially Observable Markov Decision Process (POMDP) (Cassandra 1998) instead of an MDP. In POMDP problems, the best stationary policy can be arbitrarily worse than the optimal policy of the underlying MDP (Singh et al. 1994). In order to improve the performance of RL with a POMDP, additional steps are required to reduce the POMDP problem to an MDP problem. This can be done trivially by using an augmented state, known as a "sufficient statistic" (Bertsekas 2012), i.e. augmenting the state vector with past measurements and actions (Bucci et al. 2019; Wang et al. 2023), or by Recurrent Neural Networks (RNN), such as Long Short-Term Memory (LSTM) networks (Verma et al. 2018). Theoretically, LSTM networks and augmented-state approaches can yield comparable performance in partially observable problems (see Cobbe et al. 2020, Supplementary). Practically, the augmented-state methodology provides notable benefits, including reduced training complexity and ease of parameter tuning, provided that the control state dynamics are tractable and short-term correlated.
In the specific case for which flowfield information is available, a POMDP can also be reduced to an MDP by flow reconstruction techniques based on supervised learning. For instance, Bright et al. (2013) estimate the full state based on a library containing reduced-order information from the full flowfield. However, there might be difficulties in constructing such a library, as the entire flowfield might not be available in practical applications.

Contribution of the present work

The present work uses RL to discover control strategies for partially observable fluid flow environments without access to full flow-field/state measurements. Fluid flow systems typically exhibit more complex sampling in a higher-dimensional observation space compared with other physical systems, necessitating a robust exploration strategy and rapid convergence of the optimisation process. To address these challenges, we employ off-policy maximum entropy RL algorithms (SAC and TQC) that efficiently identify nearly optimal policies in the large action space inherent to fluid flow systems, especially for cases with partial measurements and observability. We aim to achieve two objectives related to RL flow control for bluff body drag reduction problems. First, we aim to improve the RL control performance in a PM environment by reducing a POMDP problem to an MDP problem. More details about this method are introduced in §2.4. Second, we present investigations of different reward functions and key hyperparameters to develop an approach that can be adapted to a broader range of flow control applications. We demonstrate the proposed framework and its capability to discover nearly optimal feedback control strategies in the benchmark laminar flow past a square 2D bluff body with fixed separation at the trailing edge, using sensors only on the downstream surface of the body.

The article is structured as follows. In Section §2, the RL framework is presented, which consists of the SAC and TQC optimisation algorithms interacting with the flow simulation environment. A hyperparameter-free reward function is proposed to optimise the energy efficiency of the dynamically controlled system. Exploiting past action-state information converts the POMDP problem in a PM environment to an MDP, enabling the discovery of nearly optimal policies. Results are presented and discussed in Section §3. The convergence study of RL is first introduced. The degradation of RL control performance in PM environments (POMDP) is presented, and the improvement achieved by exploiting a sequence of past action-measurement information is addressed. At the end of that section, we compare the results from TQC with SAC, addressing the advantages of using TQC as an improved version of SAC. In Section §4, we provide conclusions for the current research and discuss future research directions.
Methodology

We demonstrate the RL drag reduction framework on the flow past a 2D square bluff body at laminar regimes characterised by two-dimensional vortex shedding. We study the canonical flow behind a square bluff body because of the fixed separation of the boundary layer at the rear surface, which is relevant to road vehicle aerodynamics. Control is applied by two jet actuators at the rear edge of the body before the fixed separation, and partial- or full-state observations are obtained from pressure sensors on the downstream surface or in the near wake region, respectively. The RL agent handles the optimisation, control and interaction with the flow simulation environment, as shown in figure 1. The instantaneous signals a_t, o_t and r_t denote actions, observations and rewards at time step t. Details of the flow environment are provided in §2.1. The SAC and TQC RL algorithms used in this work are introduced in §2.2. The reward functions based on optimal energy efficiency are presented in §2.3. The method to convert a POMDP to an MDP by designing a dynamic feedback controller for achieving nearly optimal RL control performance is discussed in §2.4.

Flow environment

The environment is a 2D Direct Numerical Simulation (DNS) of the flow past a square bluff body of height B. The velocity profile at the inflow of the computational domain is uniform with freestream velocity U_inf. Length quantities are non-dimensionalised with the bluff body height B and velocity quantities with the freestream velocity U_inf. Consequently, time is non-dimensionalised with B/U_inf. The Reynolds number, defined as Re = U_inf B / ν, is 100. The computational domain is rectangular with boundaries at (−20.5, 26.5) in the streamwise x direction and (−12.5, 12.5) in the transverse y direction. The centre of the square bluff body is at (x, y) = (0, 0). The flow velocity is denoted as u = (u, v), where u is the velocity component in the x direction and v in the y direction.

The DNS flow environment is simulated using FEniCS and the Dolfin library (Logg et al. 2012), based on the implementation of Rabault et al. (2019) and Rabault & Kuhnle (2019). The incompressible unsteady Navier-Stokes equations are solved using a finite element method and the incremental pressure correction scheme (Goda 1979). The DNS time step is dt = 0.004. More simulation details are presented in Appendix A, including the mesh and boundary conditions.

Two blowing and suction jet actuators are placed on the top and bottom surfaces of the bluff body before separation. The velocity profile U_j of each jet (j = 1, 2; 1 for the top jet and 2 for the bottom jet) is set by the mass flow rate Q_j of jet j (equation 2.1), where L = B is the streamwise length of the body. The width of each jet actuator is w = 0.1, and the jets are located near the rear edge of the top and bottom surfaces. A zero net mass flow rate condition of the two jets enforces momentum conservation,

Q_1 + Q_2 = 0. (2.2)

The mass flow rate of the jets is also constrained as |Q_j| ⩽ 0.1 to avoid excessive actuation.
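The two actuation constraints stated above, zero net mass flux (2.2) and the amplitude bound |Q_j| ⩽ 0.1, can be enforced on a single scalar action. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def apply_jet_constraints(q_top: float):
    """Map one scalar action to the constrained jet pair (Q_1, Q_2)."""
    q_top = float(np.clip(q_top, -0.1, 0.1))  # |Q_j| <= 0.1 caps the actuation
    return q_top, -q_top                      # Q_2 = -Q_1 gives zero net mass flux
```

Parameterising the pair by one scalar also halves the action space the agent has to explore.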
In PM environments, N vertically equispaced pressure sensors are placed on the downstream surface of the bluff body, indexed k = 1, 2, ..., N (equation 2.3), with N = 64 unless specified. In FM environments, 64 pressure sensors are placed in the wake region with a refined bias close to the body. The locations of the wake sensors are defined from the sets x_s = [0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0] and y_s = [−1.5, −1.0, −0.5, −0.25, 0.25, 0.5, 1.0, 1.5] as the grid of points (x_i, y_j), with i = 1, 2, ..., 8 and j = 1, 2, ..., 8 (equation 2.4).

The bluff body drag coefficient is defined as

C_D = F_D / (0.5 ρ U_inf² B), (2.5)

and the lift coefficient as

C_L = F_L / (0.5 ρ U_inf² B), (2.6)

where F_D and F_L are the drag and lift forces, defined as the surface integrals of the pressure and viscous forces on the bluff body with respect to the x and y coordinates, respectively.

Maximum entropy reinforcement learning of MDPs

RL can be defined as policy search in a Markov Decision Process (MDP), with a tuple (S, A, P, R), where S is a set of states and A is a set of actions. P(s_{t+1} | s_t, a_t) is a state transition function that contains the probability of moving from the current state s_t with action a_t to the next state s_{t+1}. R(s, a) is a reward function (cost function) to be maximised. The RL agent collects data as states s_t ∈ S from the environment, and a policy π(a_t | s_t) executes actions a_t ∈ A to drive the environment to the next state s_{t+1}.

A state is considered to have the Markov property if the state at time t retains all the necessary information to determine the future dynamics at t+1, without any information from the past (Sutton & Barto 2018). This property can be written as

P(s_{t+1} | s_t, a_t) = P(s_{t+1} | s_t, a_t, s_{t−1}, a_{t−1}, ..., s_0, a_0). (2.7)

In the present flow control application, the control task can be regarded as an MDP if the observations o_t contain full-state information, i.e. o_t = s_t, and satisfy (2.7). SAC and TQC are the two maximum entropy RL algorithms used in the present work. TQC is used by default since it is regarded as an improved version of SAC. Maximum entropy RL generally maximises

J(π) = Σ_t E_{(s_t, a_t) ~ π} [ r_t + α H(π(· | s_t)) ], (2.8)

where r_t is the reward (reward functions given in §2.3) and α is an entropy coefficient (known as the "temperature"), which controls the stochasticity (exploration) of the policy. For α = 0, the standard maximum-reward optimisation of conventional reinforcement learning is recovered. The probability distribution (Gaussian by default) of a stochastic policy is denoted by π(· | s_t). The entropy of π(· | s_t) is by definition (Shannon 1948)

H(π(· | s_t)) = E_{â_t ~ π} [ −log π(â_t | s_t) ], (2.9)

where the term −log π quantifies the uncertainty contained in the probability distribution, and â_t is a distribution variable of the action a_t. Therefore, by calculating the expectation of −log π, the entropy increases when the policy has more uncertainty, i.e. when the variance of π(â_t | s_t) increases. SAC is developed based on Soft Policy Iteration (SPI) (Haarnoja et al. 2018b). SPI uses a soft Q-function to evaluate the value of a policy and optimises the policy based on its value. The soft Q-function is calculated by applying a Bellman backup operator T^π as

T^π Q(s_t, a_t) = r(s_t, a_t) + γ E_{s_{t+1}} [ V(s_{t+1}) ], (2.10)

where γ is a discount factor (here γ = 0.99) and V(s_{t+1}) satisfies

V(s_{t+1}) = E_{a_{t+1} ~ π} [ Q(s_{t+1}, a_{t+1}) − α log π(a_{t+1} | s_{t+1}) ]. (2.11)

The target soft Q-function can be obtained by repeatedly applying equations (6) and (10) of Haarnoja et al. (2018b).
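The two sensor layouts defined at the start of this section can be built directly from the listed coordinate sets. A minimal sketch follows; the rear-surface spacing formula is an assumption (the paper's equation 2.3 is not reproduced in the text, only that the N sensors are vertically equispaced on the rear face, at x = 0.5 for a unit body centred at the origin):

```python
import numpy as np

# FM layout: 8 x 8 grid of 64 wake sensors from the sets in (2.4)
x_s = [0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0]
y_s = [-1.5, -1.0, -0.5, -0.25, 0.25, 0.5, 1.0, 1.5]
wake_sensors = np.array([(x, y) for x in x_s for y in y_s])   # (x_i, y_j) pairs

# PM layout: N equispaced interior points on the rear surface (assumed form)
N = 64
rear_sensors = np.array([(0.5, -0.5 + k / (N + 1)) for k in range(1, N + 1)])
```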
With these gradients, SAC updates the critic and actor networks by gradient steps of the form

θ ← θ − λ_Q ∇_θ J_Q(θ), φ ← φ − λ_π ∇_φ J_π(φ), (2.12)

where λ_Q and λ_π are the learning rates of the Q-function and the policy, respectively. Typically, two Q-functions are trained independently, and the minimum of the two is used in the calculation of the stochastic gradient and the policy gradient. This method is also used in our work to increase the stability and speed of training. SAC also supports automatic adjustment of the temperature α by optimisation (equation 2.14). This adjustment transforms a hyperparameter-tuning challenge into a trivial optimisation problem (Haarnoja et al. 2018b).

TQC (Kuznetsov et al. 2020) can be regarded as an improved version of SAC, as it alleviates the overestimation bias of the Q-function on top of the basic SAC algorithm. TQC adapts the idea of distributional reinforcement learning with quantile regression, i.e. QR-DQN (Dabney et al. 2018), to express the return function R(s, a) := Σ_{t=0}^{∞} γ^t r_t(s_t, a_t) in a distributional representation with Dirac delta functions, in which R(s, a) is parameterised by ψ and R_ψ(s, a) is converted into a summation of M "atoms" z_ψ^m(s, a) (equation 2.15). Here only one approximation of R(s, a) is used for demonstration. Then, only the k smallest atoms of z_ψ^m(s, a) are preserved as a truncation to obtain the truncated atoms (equations 2.16-2.17), and the algorithm minimises the 1-Wasserstein distance between the original distribution R_ψ(s, a) and the target distribution Y(s, a) to obtain a truncated quantile critic. Further details, such as the design of the loss functions and the pseudocode of TQC, can be found in Kuznetsov et al. (2020).

In this work, SAC and TQC are implemented based on Stable-Baselines3 and Stable-Baselines3-Contrib (Raffin et al. 2021). The RL interaction runs on a longer time step t_a = 0.5 compared with the numerical time step dt. This means RL-related data o_t, a_t and r_t are sampled every t_a time interval. With different numerical and RL steps, the control actuation c_{n_s} applied at every numerical step should be distinguished from the action a_t in RL. There are t_a/dt = 125 numerical steps between two RL steps, and the control actuation is applied based on a first-order-hold function,

c_{n_s} = a_{t−1} + (n_s / 125) (a_t − a_{t−1}), (2.18)

where n_s denotes the number of numerical steps after the current action a_t is generated and before the next action a_{t+1} is generated. Equation (2.18) smooths the control actuation with linear interpolation to avoid numerical instability. Unless specified, the neural network configuration is set as 3 layers of 512 neurons for both actor and critic. The entropy coefficient in (2.8) is initialised to 0.01 and automatically tuned based on (2.14) during training. See Table 3 in Appendix B for more details of the RL hyperparameters.

Reward design for optimal energy efficiency

We propose a hyperparameter-free reward function based on net power saving to discover energy-efficient flow control policies, calculated as the difference between the power saved by drag reduction ΔP_D and the power consumed by actuation P_act. The power reward ("PowerR") at the RL control frequency is then

r_t = ΔP_D − P_act. (2.19)

The power saved by drag reduction is given by

ΔP_D = P_D0 − P_Dt, (2.20)

where P_D0 is the time-averaged baseline drag power without control and ⟨F_D0⟩_T is the time-averaged baseline drag over a sufficiently long period. P_Dt denotes the time-averaged drag power calculated from the time-averaged drag ⟨F_Dt⟩_a during one RL step t_a. Specifically, ⟨ ⟩_a quantities are calculated at each RL step using 125 DNS samples. The jet power consumption of actuation P_act is defined following Barros et al. (2016) in terms of the average jet velocity ⟨U_j⟩_a and the area S_j of one jet (equation 2.21).
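A minimal sketch of the "PowerR" reward of (2.19)-(2.20) follows. The actuation-power term is written here as the kinetic-energy flux of the jets, which is an assumption: the paper defines P_act following Barros et al. (2016), whose exact expression is not reproduced in the text.

```python
import numpy as np

def power_reward(drag_samples, jet_velocities, jet_area, p_d0,
                 rho=1.0, u_inf=1.0):
    """Net power saving over one RL step (sketch of 2.19-2.20).

    drag_samples: the 125 DNS drag-force samples within the RL step;
    jet_velocities: the step-averaged jet velocities <U_j>_a of the two jets.
    """
    p_dt = np.mean(drag_samples) * u_inf        # drag power over this RL step
    delta_p_d = p_d0 - p_dt                     # power saved by drag reduction
    # Assumed kinetic-energy-flux model of jet power consumption
    p_act = sum(0.5 * rho * jet_area * abs(u) ** 3 for u in jet_velocities)
    return delta_p_d - p_act
```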
The reward function given by (2.19) directly quantifies the control efficiency of a controller. Thus, it guarantees the learning of a control strategy which simultaneously maximises the drag reduction and minimises the required control actuation. Additionally, this energy-based reward function avoids the effort of hyperparameter tuning.

All the cases in this work use the power-based reward function defined in (2.19) unless otherwise specified. For comparison, a reward function based on the drag and lift coefficients ("ForceR") is also implemented, as suggested by Rabault et al. (2019), with a pre-tuned hyperparameter ϵ = 0.2:

r_t = C_D0 − ⟨C_Dt⟩_a − ϵ |⟨C_Lt⟩_a|, (2.22)

where C_D0 and ⟨C_Dt⟩_a are calculated from a constant baseline drag and the RL-step-averaged drag, respectively. The RL-step-averaged lift |⟨C_Lt⟩_a| is used to penalise the amplitude of actuation on both sides of the body, avoiding excessive lift force (i.e. a lateral deflection of the wake that reduces the drag but increases the side force) and indirectly penalising control actuation and the discovery of unrealistic control strategies. ϵ is a hyperparameter designed to balance the penalty on the drag and lift forces.

The instantaneous versions of these two reward functions are also investigated for practical implementation purposes (both experimental and numerical), because they can significantly reduce the memory used during computation and also support a lower sampling rate. These instantaneous reward functions are computed only from observations at each RL step, whereas the reward functions above take into account the time history between two RL steps. The instantaneous version of the power reward ("PowerInsR") is defined as

r_t = ΔP_D,ins − P_act,ins, (2.23)

where ΔP_D,ins and P_act,ins are the instantaneous counterparts of ΔP_D and P_act (equations 2.24-2.25). Notice that the definitions in (2.23)-(2.25) are similar to (2.19)-(2.21); the only difference is that the averaging operator ⟨ ⟩_a is removed. Similarly, the instantaneous version of the force-based reward function ("ForceInsR") is defined as

r_t = C_D0 − C_Dt − ϵ |C_Lt|. (2.26)

In §3.5, we present results of the study of the different reward functions and compare the RL performance.

POMDP and dynamic feedback controllers

In practical applications, the Markov property (2.7) is often not valid because of noise, broken sensors, partial state information and delays. This means the observations available to the RL agent do not provide full or true state information, i.e. o_t ≠ s_t, whereas in an MDP o_t = s_t. RL can then be generalised as a POMDP defined by a tuple (S, A, P, R, Y, O), where Y is a finite set of observations o_t and O is an observation function that relates observations to the underlying states.
With only PM available in the flow environments (sensors on the downstream surface of the body instead of in the wake), spatial information is missing along the streamwise direction. Takens' embedding theorem (Takens 1981) states that the underlying dynamics of a high-dimensional dynamical system can be reconstructed from low-dimensional measurements and their time history. Therefore, past measurements can be incorporated into a sufficient statistic. Furthermore, convective delays may be introduced in the state observation, since the sensors are not located in the wavemaker region of the flow. According to Altman & Nain (1992), past actions are also required in the state of a delayed problem to reduce it to an undelayed problem. This is because a typical delayed MDP (DMDP) implicitly subverts the Markov property, as the past measurements and actions encapsulate only partial information.

Therefore, combining the ideas of augmenting past measurements and past actions, we form a sufficient statistic (Bertsekas 2012) that reduces the POMDP problem to an MDP, defined as

I_k = (p_0, ..., p_k, a_0, ..., a_{k−1}), (2.27)

which consists of the time history of pressure measurements p_0, ..., p_k and control actions a_0, ..., a_{k−1} at time steps 0, ..., k. This enlarged state at time k contains all the information known to the controller at time k. However, the size of the sufficient statistic in (2.27) grows over time, leading to a non-stationary closed-loop system and introducing a challenge for RL, since the number of inputs to the networks would vary over time. This problem can be solved by reducing (2.27) to a finite-history approximation (White & Scherer 1994). A controller using this finite-history approximation of the sufficient statistic is usually known as a "finite-state" controller, and the error of this approximation converges as the size of the finite history increases (Yu & Bertsekas 2008). The trade-off is that the dimension of the input increases with the history length required. The nonlinear policy, which is parameterised by a neural network controller, has the algebraic description

a_t = π(p_t, p_{t−1}, ..., p_{t−N_fs+1}, a_{t−1}, a_{t−2}, ..., a_{t−N_fs}), (2.28)

where p_t represents the pressure measurements at time step t and N_fs denotes the size of the finite history. The above expression is equivalent to a nonlinear autoregressive exogenous model (NARX).

A "frame stack" technique is used to feed the finite-history sufficient statistic to the RL agent as input to both the actor and critic neural networks. Frame stack constructs the observation o_t from the latest actions and measurements at step t as a "frame" o_t = (a_{t−1}, p_t) and piles the finite history of N_fs frames together into a stack. The number of stacked frames is equivalent to the size of the finite history N_fs.

The neural network controller trained as a NARX model benefits from past information to approximate the next optimised control action, since the policy has been parameterised as a nonlinear transfer function. Thus, a controller parameterised as a NARX model is denoted a "dynamic feedback" controller, because the time history in the NARX model contains dynamic information about the system. Correspondingly, a controller fed with only the latest actions a_{t−1} and current measurements p_t is denoted a "static feedback" controller, because no past information from the system is fed into the controller.
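A minimal sketch of the frame-stack construction of (2.27)-(2.28) follows; the class and method names are hypothetical, not the authors' implementation:

```python
from collections import deque
import numpy as np

class FrameStack:
    """Finite-history sufficient statistic: the last N_fs frames,
    each frame being the pair (a_{t-1}, p_t)."""

    def __init__(self, n_frames: int):
        self.frames = deque(maxlen=n_frames)
        self.n_frames = n_frames

    def reset(self, first_frame: np.ndarray) -> np.ndarray:
        self.frames.clear()
        for _ in range(self.n_frames):           # pad with the initial frame
            self.frames.append(first_frame)
        return self.observation()

    def step(self, prev_action: np.ndarray, pressures: np.ndarray) -> np.ndarray:
        self.frames.append(np.concatenate([prev_action, pressures]))
        return self.observation()

    def observation(self) -> np.ndarray:
        return np.concatenate(self.frames)       # input to actor and critic
```

Stable-Baselines3 also ships a VecFrameStack wrapper that applies the same idea to vectorised environments.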
Figure 2 demonstrates the three cases, with both FM and PM environments, that will be investigated. In the FM environment, sensors are located in the wake, as P_wake given by (2.4). In the PM environment, sensors are placed only on the back surface of the body, as P_surf given by (2.3). The static feedback controller is employed in the FM environment, and both static and dynamic feedback controllers are applied in the PM environment. Results will be shown with N_fs = 27, and in §3.3 a parametric study of the effect of the finite history length is presented.

Results of RL active flow control

In this section, we discuss the convergence of the RL algorithms for the three FM and PM cases (§3.1) and evaluate their drag reduction performance (§3.2). A parametric analysis of the effect of the NARX memory length is presented (§3.3), together with the isolated effect of including past actions as observations during RL training and control (§3.4). Studies of the reward function (§3.5), sensor placement (§3.6) and generalisability to Reynolds number changes (§3.7) are presented, followed by a comparison of the SAC and TQC algorithms (§3.8).

Convergence of learning

We perform RL with the maximum entropy TQC algorithm to discover control policies for the three cases shown in figure 2, which maximise the net-power-saving reward function given by (2.19). During the learning stage, each episode (1 DNS simulation) corresponds to 200 non-dimensional time units. To accelerate learning, 65 environments run in parallel.

Figure 3 shows the learning curves of the three cases. Table 1 shows the number of episodes needed for convergence and the relevant parameters for each case. It can be observed from the episode-reward curve that the RL agent is updated after every 65 episodes, i.e. 1 iteration, where the episode reward is defined as

R_ep = Σ_{k=1}^{N_k} r_k,

where k denotes the k-th RL step in one episode and N_k is the total number of samples in one episode. The root mean square (RMS) value of the drag coefficient at the asymptotic regime of control, C_D^RMS, is also shown to demonstrate convergence, defined as

C_D^RMS = ⟨ rms(𝒟(C_D)) ⟩_env,

where the operator 𝒟 detrends the signal with a 9th-order polynomial and removes the transient part, and ⟨ ⟩_env denotes the average value over the parallel environments in a single iteration.
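The two convergence diagnostics above are each a few lines of NumPy; the sketch below assumes a transient cutoff at t = 80 and the 9th-order detrending polynomial quoted in the text, with the cutoff value being an assumption of this sketch.

```python
import numpy as np

def episode_reward(rewards):
    # R_ep: sum of the per-step rewards r_k over one episode.
    return float(np.sum(rewards))

def detrended_rms_cd(cd, t, t_transient=80.0, order=9):
    """RMS of C_D after removing the transient part and a polynomial trend."""
    keep = t > t_transient                     # discard the transient stage
    cd_k, t_k = cd[keep], t[keep]
    trend = np.polyval(np.polyfit(t_k, cd_k, order), t_k)
    return float(np.sqrt(np.mean((cd_k - trend) ** 2)))
```

Averaging `detrended_rms_cd` over the 65 parallel environments of one iteration gives the ⟨ ⟩_env value plotted in the learning curves.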
Figure 3: Episode rewards (solid lines) and RMS of drag coefficient (dashed lines) against episode number during the maximum entropy reinforcement learning phase with TQC.

In figure 3, it can be noticed that in the FM environment RL converges after approximately 325 episodes (5 iterations) to a nearly optimal policy using a static feedback controller. As will be shown in §3.2, this policy is globally optimal since the vortex shedding is fully attenuated and the jets converge to zero mass flow actuation, thus recovering the unstable base flow and the minimum-drag state. However, with the same static feedback controller in a PM environment (POMDP), the RL agent fails to discover the nearly optimal solution, requiring around 1235 episodes for convergence but only obtaining a relatively low episode reward. Introducing a dynamic feedback controller in the PM environment, the RL agent converges to a near-optimal solution in 735 episodes. The dynamic feedback controller trained by RL achieves a higher episode reward (34.35) than the static feedback controller in the PM case (21.87), which is close to the FM case (37.72). The learning curves illustrate that using a finite horizon of past actions-measurements (N_fs = 27) to train a dynamic feedback controller in the PM case improves learning in terms of speed of convergence and accumulated reward, achieving nearly optimal performance with only wall pressure measurements.

Drag reduction with dynamic RL controllers

The trained controllers for the cases shown in figure 2 are evaluated to obtain the results shown in figure 4. Evaluation tests are performed for 120 non-dimensional time units to show both the transient and asymptotic dynamics of the closed-loop system. Control is applied at t = 0 with the same initial condition for each case, i.e. steady vortex shedding with average drag coefficient ⟨C_D0⟩ ≈ 1.45 (baseline without control). Consistent with the learning curves, the difference in control performance between the three cases can be observed both from the drag coefficient C_D and the actuation Q_1. The drag reduction is quantified by a ratio η using the asymptotic time-averaged drag coefficient with control C_Da = ⟨C_D⟩_{t∈[80,120]}, the drag coefficient C_Db of the base flow (details presented in Appendix D), and the baseline time-averaged drag coefficient without control ⟨C_D0⟩, as

η = (⟨C_D0⟩ − C_Da) / (⟨C_D0⟩ − C_Db).

• FM-Static: With a static feedback controller trained in a full-measurement environment, a drag reduction of η = 101.96% is obtained with respect to the base flow (steady unstable fixed point; maximum drag reduction). This indicates that an RL controller informed with full-state information can entirely stabilise the vortex shedding and cancel the unsteady part of the pressure drag.

• PM-Static: A static/memoryless controller in a partial-measurement environment leads to performance degradation and a drag reduction of η = 56.00% in the asymptotic control stage, i.e. after t = 80, compared to the performance of "FM-Static". This performance loss can also be observed from the control actuation curve, as Q_1 oscillates with a relatively large fluctuation in "PM-Static" while it stays about zero in the "FM-Static" case. The discrepancy between FM and PM environments using a static feedback controller reveals the challenge of designing a controller in a POMDP environment. The RL agent cannot fully identify the dominant dynamics with only partial measurements on the downstream surface of the bluff body, resulting in sub-optimal control behaviour.
• PM-Dynamic: With a dynamic feedback controller (the NARX model presented in §2.4) in a partial-measurement environment, the vortex shedding is stabilised and the dynamic feedback controller achieves η = 97.00% of the maximum drag reduction after time t = 60. Although there are minor fluctuations in the actuation Q_1, the energy spent in the synthetic jets is significantly lower compared to the "PM-Static" case. Thus, a dynamic feedback controller in PM environments can achieve nearly optimal drag reduction, even if the RL agent only collects information from pressure sensors on the downstream surface of the body. The improvement in control indicates that the POMDP due to the PM condition of the sensors can be reduced to an approximate MDP by training a dynamic feedback controller with a finite horizon of past actions-measurements. Furthermore, high-frequency action oscillations, which can be amplified by static feedback controllers, are attenuated in the case of dynamic feedback control. These encouraging and unexpected results support the effectiveness and robustness of model-free RL control in practical flow control applications, in which sensors can only be placed on a solid surface/wall.

In figure 5, snapshots of the velocity magnitude |u| = √(u² + v²) are presented for the "Baseline" case without control and the "PM-Static", "PM-Dynamic" and "FM-Static" control cases. Snapshots are captured at t = 100 in the asymptotic regime of control. A vortex-shedding structure of different strengths can be observed in the wake of all three controlled cases. In "PM-Static", the recirculation area is lengthened compared to the baseline flow, corresponding to base pressure recovery and pressure drag reduction. A longer recirculation area can be noticed in "PM-Dynamic" due to the enhanced attenuation of vortex shedding and pressure drag reduction. The dynamic feedback controller in the PM case renders a 326.22% increase of the recirculation area with respect to the baseline flow, while only a 116.78% increase is achieved by the static feedback controller. The "FM-Static" case has the longest recirculation area, and the vortex shedding is almost fully stabilised, which is consistent with the drag reduction shown in figure 4.
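The drag-reduction ratio η used throughout these comparisons reduces to a one-line computation once the asymptotic window is fixed; the [80, 120] window follows the text, while the base-flow drag C_Db is an input here because Appendix D is not reproduced.

```python
import numpy as np

def drag_reduction_ratio(cd, t, cd0, cd_base, window=(80.0, 120.0)):
    """η = (⟨C_D0⟩ − C_Da) / (⟨C_D0⟩ − C_Db), in percent.

    cd, t:   controlled drag-coefficient time series
    cd0:     baseline (uncontrolled) time-averaged drag coefficient
    cd_base: drag coefficient of the steady base flow (maximum achievable saving)
    """
    mask = (t >= window[0]) & (t <= window[1])  # asymptotic regime of control
    cd_a = float(np.mean(cd[mask]))
    return 100.0 * (cd0 - cd_a) / (cd0 - cd_base)
```

Note that values slightly above 100% (as in FM-Static) simply mean the asymptotic controlled drag dipped marginally below the base-flow value.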
Figure 6 presents first- and second-order base pressure statistics for the baseline case without control and the PM cases with control. In figure 6(a), the time-averaged value of the base pressure, p̄, demonstrates the base pressure recovery after control is applied. Due to flow separation and recirculation, the time-averaged base pressure is higher at the middle of the downstream surface, which is retained with control. The base pressure increase is directly linked to pressure drag reduction, which quantifies the control performance of both static and dynamic feedback controllers. A pressure increase of up to 49.56% at the centre of the downstream surface is obtained in the "PM-Dynamic" case, while only 21.15% can be achieved by the static feedback controller. In figure 6(b), the base pressure RMS is shown. For the baseline flow, strong vortex-induced fluctuations of the base pressure can be noticed around the top and bottom of the downstream surface of the bluff body. In the "PM-Static" case, the RL controller partially suppresses the vortex shedding, leading to a sub-optimal reduction of the pressure fluctuation. The sensors close to the top and bottom corners are also affected by the synthetic jets, which changes the RMS trend for the two top and bottom measurements. In the "PM-Dynamic" case, the pressure fluctuations are nearly zero for all the measurements on the downstream surface, highlighting the success of vortex shedding suppression by a dynamic RL controller in a PM environment.

The differences between static and dynamic controllers in PM environments are further elucidated in figure 7 by examining the time series of the pressure differences Δp_t from the surface sensors (control input) and the control actions a_{t−1} (output). The pressure differences are calculated from sensor pairs at y = ±y_sensor, where y_sensor is defined in Eq. (2.3). For N = 64, there are 32 time series of Δp_t for each case. During the initial stages of control (t ∈ [0, 11]), the control actions are similar for the two PM cases; they deviate for t > 11, resulting in discernible control performance in the asymptotic regime. At the initial stages, the controllers operate in nearly anti-phase to Δp_t, in order to eliminate the antisymmetric pressure component due to vortex shedding. The inability of the static controller to have a frequency-dependent amplitude (and phase) also manifests through the amplification of high-frequency noise. For t > 11, the static feedback controller continues to operate in nearly anti-phase to the pressure difference, resulting in partial stabilisation of the unsteadiness. However, the dynamic feedback controller adjusts its phase and amplitude significantly, which attenuates the antisymmetric fluctuation of the base pressure and drives Δp_t to near zero.
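Extracting the antisymmetric pressure component from symmetric sensor pairs is a simple array operation; the sketch below assumes the sensors are ordered monotonically in y from the bottom to the top of the base, which is an assumption about the data layout rather than a documented convention.

```python
import numpy as np

def sensor_pair_differences(p):
    """Δp_t time series from symmetric sensor pairs.

    p: array of shape (n_steps, N) with sensors ordered from y = -y_max to +y_max,
       so that column i pairs with column N-1-i at y = ∓y_sensor.
    """
    n = p.shape[1]
    top = p[:, n // 2:]                # sensors at y > 0
    bottom = p[:, : n // 2][:, ::-1]   # mirrored sensors at y < 0
    return top - bottom                # N/2 time series of Δp_t

dp = sensor_pair_differences(np.random.randn(1000, 64))
assert dp.shape == (1000, 32)          # matches the 32 Δp_t series for N = 64
```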
Figure 8 shows instantaneous vorticity contours for the PM-Dynamic and PM-Static cases, revealing both the similarities and the discrepancies between the two cases. At t = 2, flow is expelled from the bottom jet in both cases, generating a clockwise vortex, termed V1. This V1 vortex, shown in black, works against the primary counter-clockwise vortex labelled P1, depicted in red, emerging from the bottom surface. At t = 5.5, a secondary vortex, V2, forms from the jets to oppose the primary vortex shedding from the top surface (labelled P2). At t = 13, the suppression of the two primary vortices near the bluff body is evident in both cases, indicated by their less tilted shapes compared to the previous time instances. At t = 13, PM-Dynamic has adjusted the phase of the control signal, which corresponds to a marginal action at this time instance in figure 7. Consequently, no additional counteracting vortex is formed in PM-Dynamic. However, in the PM-Static scenario, the jets generate a third vortex, labelled V3, which emerges from the top surface. This corresponds to a peak in the action of the PM-Static controller at this time. The inability of the PM-Static controller to adapt the amplitude/phase of its input/output behaviour results in suboptimal performance.

Horizon of the finite-history sufficient statistic

A parametric study on the horizon of the finite history in the NARX model (equation (2.28)), i.e. the number of stacked frames N_fs, is presented in this section. Since the NARX model uses a finite horizon of the past actions-measurements in (2.27), the horizon of the finite history affects the convergence of the approximation (Yu & Bertsekas 2008). This approximation in turn affects the optimisation during RL, because it determines whether the RL agent can observe sufficient information to converge to an optimal policy.

Since vortex shedding is the dominant instability to be controlled, the choice of N_fs should intuitively be linked to the timescale of the vortex shedding period. The "frames" of observations are obtained every RL step (0.5 time units), while the vortex shedding period is t_vs ≈ 6.85 time units. Thus, N_fs is rounded to integer values for different numbers of vortex shedding periods, as shown in table 2.

Figure 9: Average drag coefficient ⟨C_D⟩ and average episode reward ⟨R_ep⟩ in PM cases against history length (number of stacked frames) N_fs. ⟨C_D⟩ is obtained from the asymptotic regime of control. ⟨R_ep⟩ is calculated from 2 episodes after convergence of RL.

Table 2: Correspondence between the number of vortex shedding (VS) periods and the frame stack (history) length in samples N_fs. The RL control step size is t_a = 0.5, and N_fs is rounded to an integer.

The results for the time-averaged drag coefficients ⟨C_D⟩ after control and the average episode rewards ⟨R_ep⟩ in the final stage of training are presented in figure 9. As N_fs increases from 0 to 27, the performance of RL control improves, resulting in a lower ⟨C_D⟩ and a higher ⟨R_ep⟩. N_fs = 2 is specially examined because the latent dimension of the vortex shedding limit cycle is 2. However, the control performance with N_fs = 2 is only marginally improved compared to that with N_fs = 0, i.e. a static feedback controller.
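The correspondence between shedding periods and history lengths in table 2 can be reproduced with a two-line helper, using only the two timescales quoted in the text.

```python
# Map a desired number of vortex-shedding periods to a frame-stack length,
# using the values quoted above: t_vs ≈ 6.85 time units, RL step t_a = 0.5.
T_VS, T_A = 6.85, 0.5

def history_length(n_periods):
    return round(n_periods * T_VS / T_A)

# E.g. two shedding periods -> N_fs = 27, the best-performing length reported.
print([history_length(n) for n in (0.5, 1, 2, 3, 4, 5)])  # [7, 14, 27, 41, 55, 68]
```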
This result indicates that a horizon consistent with the dimension of the vortex shedding limit cycle is not long enough for the finite horizon of past actions-measurements. The optimal history length to achieve stabilisation of the vortex shedding in PM environments is 27 samples, equivalent to 13.5 convective time units or ∼2 vortex shedding periods. With N_fs = 41 and N_fs = 55, the drag reduction and episode rewards drop slightly compared to N_fs = 27, and the decline in performance becomes non-negligible as N_fs increases further to 68. This decline shows that excessive inputs to the neural networks (see table 1) may impede training, because more parameters need to be tuned or larger neural networks need to be trained.

Observation sequence with past actions

Past actions (the exogenous terms in NARX) facilitate reducing a POMDP to an MDP problem, as discussed in §2.4. In the near-optimal control of a PM environment using a dynamic feedback controller with inputs o_t, o_{t−1}, ..., o_{t−N_fs}, a sequence of observations o_t = {p_t, a_{t−1}} at step t is constructed to include pressure measurements and actions. In the FM environment, due to the one-step action delay introduced by the first-order-hold interpolation given by (2.18), the inclusion of the past action along with the current pressure measurement, i.e. o_t = {p_t, a_{t−1}}, is required even when the sensors are placed in the wake and cover the wavemaker region.

Figure 10 presents the control performance for the same environments with and without past actions included in the observations. In the FM case, there is no apparent difference between RL control with o_t = {p_t, a_{t−1}} or o_t = {p_t}, which indicates that the contribution of the past action to the performance is negligible. This is the case when the RL sampling frequency is sufficiently faster than the timescale of the vortex shedding dynamics. In PM cases, if the exogenous action terms are not included in the observations and only the finite history of pressure measurements is used, RL control fails to converge to a near-optimal policy, with only η = 67.45% drag reduction. With past actions included, the drag reduction in the same environment increases up to η = 97.00%.

The above results show that, in PM environments, a sufficient statistic cannot be constructed from the finite history of measurements alone. The missing state information needs to be reconstructed from both state-related measurements and control actions.

Reward study

In §3.2, the power-based reward function given by (2.19) was implemented, and stabilising controllers could be learned by RL, as shown. In this section, RL control results with the other forms of reward function (introduced in §2.3) are provided and discussed.

The control performance of RL control with the different reward functions is evaluated based on the drag coefficient C_D shown in figure 11. Static feedback controllers are trained in FM environments, and dynamic feedback controllers are trained in PM environments. In FM cases, the control performance is not sensitive to the choice of reward function (power- or force-based). In PM cases, discrepancies between the RL-step time-averaged and instantaneous rewards can be observed in the asymptotic regime of control. The controllers with both rewards (power- or force-based) achieve nearly optimal control performance, but there is some unsteadiness in the cases using instantaneous rewards, due to the slow statistical convergence of the rewards and their limited correlation with the partial observations.
All four types of reward functions studied in this work achieve nearly optimal drag reduction of around 100%. However, the energy-based reward ("PowerR") offers an intuitive reward design, attributable to its physical meaning and the dimensionally consistent addition of the constituent terms of the reward function. Further enhancing its practicality, since the power of the actuator can be measured directly, it avoids the need for hyperparameter tuning, unlike the force-based reward. Additionally, the results show similar performance with both the time-averaged (between RL steps) and instantaneous rewards, avoiding the need for faster sampling for the calculation of the rewards. This choice of reward function can be extended to various RL flow control problems and can be beneficial for experimental studies.

Figure 10: Curves of drag coefficients after control is applied in both FM and PM environments. Results from FM cases are presented as references, while a performance difference can be observed in the PM cases with and without past actions included.

Sensor configuration study with partial measurements

In the PM environment, the configuration of the sensors (their number and location on the downstream surface) may also affect the information contained in the observations and thus the control performance. Control results for the drag coefficient C_D with different sensor configurations in PM-Dynamic cases are presented in figure 12. In the configuration with N = 2, two sensors are placed at y = ±0.25, and for N = 1, only one sensor is placed at y = 0.25. The other configurations are consistent with equation (2.3).

The C_D curves in figure 12 show that, as the number of sensors is reduced from 64 to 2, RL control achieves the same level of performance, with minor discrepancies due to randomness in the different learning cases. However, if RL control uses observations from only one sensor at y = 0.25, performance degradation can be observed in the asymptotic stage, with on average 19.79% less drag reduction. The sub-figure presents the relationship between the number of sensors and the asymptotic drag coefficient ⟨C_D⟩. These results indicate a limit on the sensor configuration for the use of the NARX-modelled controller to stabilise the vortex shedding.

To understand the cause of the performance degradation in the N = 1 case, the pressure measurements from two sensors in both the baseline and PM-Dynamic cases are presented in figure 13. In the baseline case, two sensors are placed at the same locations as in the N = 2 case (y = ±0.25), used only for observation. It can be observed that the pressure measurements from the two sensors are anti-symmetric, since they are placed symmetrically on the downstream surface. In the PM-Dynamic case, the NARX controller is used and control is applied at t = 0. In this closed-loop system, the anti-symmetric relationship between the two sensors (at symmetric positions) is broken by the control actuation, and no correlation is evident. This can be seen during the transient dynamics, e.g. in t ∈ [0, 10]. Therefore, when the number of sensors is reduced to N = 1 by removing one sensor from the N = 2 case, the dynamic feedback from the removed sensor cannot be fully reflected by the remaining sensor in the closed-loop system. This loss of information affects the fidelity of the control response to the dynamics on the side of the removed sensor, causing suboptimal drag reduction in the N = 1 scenario.
It should be noted that the configuration of 64 sensors is not necessary for control, as N = 2 or N = 16 also achieves nearly optimal performance. The number of sensors N = 64 in PM-Static environments is used for comparison with the FM-Static configuration (Eq. 2.4), which eliminates the effect of different input dimensions between the two static cases. Also, 64 sensors sufficiently cover the downstream surface of the bluff body to avoid missing spatial information. The optimal configuration of sensors could be tuned with optimisation techniques such as that of Paris et al. (2021), but the results in figure 12 indicate that RL adapts with nearly optimal performance to non-optimised sensor placement in the present environment.

Performance of RL controllers at unseen Re

The RL controller is tested at different Reynolds numbers in order to examine its generalisability to environment changes. The controllers have been trained at Re = 100 with both FM and PM conditions, and tested at Re = 80, 90, 100, 110, 120, 150. As shown in figure 14, in both the "PM-Dynamic" and "FM-Static" cases, the RL controllers are able to reduce drag by η = 64.68% in the worst case when Re is close to the training point at Re = 100, i.e. the test cases with Re = 80, 90, 100, 110, 120. However, when applying the controllers trained at Re = 100 to an environment at Re = 150, the drag reduction drops to η = 41.98% and η = 74.04% in the PM-Dynamic and FM-Static cases, respectively. Performing CL at Re = 150, the drag reduction is improved to η = 78.07% in PM-Dynamic after 1105 training episodes and to η = 88.13% in FM-Static after 390 episodes, with the same RL parameters as in the training at Re = 100. Overall, the results of these tests indicate that the RL-trained controllers can achieve significant drag reduction in the vicinity of the training point (i.e. a ±20% Re change). If the test point is far from the training point, a CL procedure can be implemented to achieve nearly optimal control.

TQC vs SAC

Control results with TQC and SAC are presented in figure 15 in terms of C_D. Overall, TQC shows a more robust control performance. In the FM case, SAC demonstrates a slightly more stable transient behaviour, attributable to the fact that the quantile regression process in TQC introduces additional complexity into the optimisation. Both controllers achieve an identical level of drag reduction in the FM case.

However, in the PM cases, it is observed that TQC outperforms SAC in drag reduction with both static and dynamic feedback controllers. For static feedback control, TQC achieved an average drag reduction of η = 56.00%, compared to the η = 46.31% reduction achieved by SAC. The performance under dynamic feedback control is more compelling: TQC nearly fully reduced the drag, achieving η = 97.00% drag reduction and reverting the flow to a near-base-flow scenario, while SAC achieved an average drag reduction of η = 96.52%. The fundamental mechanism for updating Q-functions in RL involves selecting the maximum expected Q-value among possible future actions. This process, however, can potentially lead to overestimation of certain Q-functions (Hasselt 2010). In a POMDP, this overestimation bias may be exacerbated by the inherent uncertainty arising from partial-state information. Therefore, a Q-learning-based algorithm, when applied to POMDPs, may be more prone to choosing these overestimated values, thereby affecting the overall learning and decision-making process.
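The truncation step that gives TQC its conservatism is easy to state in isolation. The sketch below forms the truncated-mean estimate from an ensemble of quantile atoms; the ensemble size, number of atoms and number of dropped quantiles are illustrative values, not the hyperparameters of table 3.

```python
import numpy as np

def truncated_quantile_estimate(z, drop_per_net=2):
    """Conservative value estimate in the style of TQC.

    z: array of shape (n_nets, n_atoms) with quantile estimates of the return
       from each critic network.
    """
    pooled = np.sort(z.reshape(-1))          # pool atoms from all critics
    keep = pooled.size - drop_per_net * z.shape[0]
    return pooled[:keep].mean()              # drop the largest quantiles

z = np.random.randn(5, 25)                   # 5 critics x 25 atoms (illustrative)
print(truncated_quantile_estimate(z) <= z.mean())  # truncation lowers the estimate: True
```

Dropping the top quantiles before averaging biases the target downwards, which is exactly the counterweight to the maximisation-induced overestimation described above.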
As mentioned in §2.2, the core benefit of TQC under these conditions can be attributed to its handling of the overestimation bias of the return. By constructing a more accurate representation of the possible returns, TQC provides a more accurate Q-function approximation than SAC. This modulation of the probability distribution of the Q-function helps TQC manage the uncertainties inherent in environments with only partial-state information. In this case, TQC adapts more robustly to changes and uncertainties, leading to better performance in both static and dynamic feedback control tasks.

Figure 14: Asymptotic drag coefficient ⟨C_D⟩ for "Baseline", "Baseflow", and tests of RL-trained controllers in both FM and PM environments at different Re. The controllers were trained at Re = 100 (dashed line) and tested at Re = 80, 90, 100, 110, 120, 150. The controllers were trained again at Re = 150 (dash-dotted line) and tested at Re = 150 (square and diamond markers). All curves are fitted using a third-order spline.

Conclusions

In this study, maximum entropy RL with TQC has been applied to an active flow control problem with partial measurements to learn a feedback controller for bluff body drag reduction. Neural network controllers have been trained by the RL algorithm to discover a drag reduction control strategy behind a 2D square bluff body at Re = 100. By comparing the control performance in FM environments to PM environments, we showed a non-negligible degradation of RL control performance if the controller is not trained with full-state information. To solve this issue, we proposed a method to train a dynamic neural network controller with a finite-history approximation of a sufficient statistic, formulating the dynamic controller as a NARX model. The dynamic controller was able to improve the drag reduction performance in PM environments and achieve near-optimal performance (drag reduction ratio η = 97% with respect to the base-flow drag) compared to a static controller (η = 56%). We found that the optimal horizon of the finite history in the NARX model is approximately two vortex shedding periods when the sensors are located only on the base of the body. The importance of including the exogenous action terms in the observations of RL was demonstrated by the 29.55% drop in drag reduction when only past measurements are used in the PM environment. Also, we proposed a net-power-consumption design for the reward function based on the drag power savings and the power of the actuator. This power-based reward function offers an intuitive understanding of the closed-loop performance, and electromechanical losses can be added directly once a specific actuator is chosen. Moreover, its inherent hyperparameter-free nature contributes to a straightforward reward-function design process in the context of flow control problems. Results from SAC are compared with TQC, and we showed the improvement brought by TQC, which attenuates overestimation in the neural networks.
It was shown that model-free RL was able to discover a nearly optimal control strategy without any prior knowledge of the system dynamics using partial, realistic measurements, exploiting only input-output data from the simulation environment. Therefore, this study of RL-based active flow control in 2D laminar flow simulations can be seen as a step towards controlling the complex dynamics of 3D turbulent flows in practice, by replacing the simulation environment with an experimental setup. Also, the frame stack method employed here to convert the POMDP to an MDP can be replaced by recurrent neural networks and attention-based architectures, which may further improve control performance in scenarios with complex dynamics.

Funding. We acknowledge support from the UKRI AI for Net Zero grant EP/Y005619/1.

Figure 1: Reinforcement learning framework. The RL agent, the flow environment and the interaction between them are demonstrated. The partial measurement (PM) case is shown, where sensors are located on the downstream surface of the square bluff body. 64 sensors are placed by default, and the red dots only show a demonstration with a reduced number of sensors. Two jets located upstream of the rear separation points are trained to control the unsteady wake dynamics (vortex shedding).

Figure 2: Demonstration of a full-measurement (FM) environment with a static feedback controller ("FM-Static"); a partial-measurement (PM) environment with a static feedback controller ("PM-Static"); and a PM environment with a dynamic feedback controller formulated as a NARX model ("PM-Dynamic"). The dashed curve represents the bottom blowing/suction jet, and the red dots demonstrate schematically the location of the sensors.

Figure 4: Top figure: Drag coefficient C_D without control ("Baseline") and with active flow control by RL in both FM and PM cases. In PM cases, control results with a dynamic and a static feedback controller are presented. The dot-dashed line represents the base flow C_Db. Bottom figure: The mass flow rate Q_1 of one of the blowing and suction jets.

Figure 6: Mean (a) and RMS (b) base pressure for controlled and uncontrolled cases from the 64 wall sensors on the downstream surface of the bluff body base.

Figure 7: Time series of pressure differences Δp_t (blue) and action a_{t−1} (red) for "PM-Static" (top) and "PM-Dynamic" (bottom) cases. Control is applied at t = 0. The arrows point from low to high values of |y_sensor| among the Δp_t curves. The vertical dashed lines mark the time instances of the vorticity snapshots in figure 8.

Figure 11: Tests of RL-trained controllers with various reward functions. Drag coefficient C_D curves are presented for each case. Dotted lines denote the cases with FM environments, while solid lines denote PM environments. The dash-dotted line represents C_D in the base flow, which has no vortex shedding. Control starts at t = 0 with the same initial conditions for every case.

Figure 12: Curves of drag coefficients after control is applied at t = 0 in PM-Dynamic cases. Sensor configurations with different sensor numbers N = 1, 2, 16, 32, 48, 64 are tested. The dot-dashed line presents C_D from the base flow. A sub-figure of the asymptotic drag coefficient ⟨C_D⟩ (time-averaged value after t = 80) against probe number N is presented.
Figure 13: Pressure measurements in t ∈ [0, 40] (early transient stage in the controlled case) from 2 surface sensors. "Baseline" without control (top); "PM-Dynamic" with a NARX controller (bottom). All curves are detrended by a fifth-order polynomial to reveal the relationship between the measurements from the two sensors.

Figure 15: Comparison of control performance in terms of C_D between SAC and TQC. Control starts at t = 0. Solid curves show the cases using TQC and "Baseline", while dotted curves show SAC. The dash-dotted curve corresponds to the base-flow C_D.

Figure 16: Computational mesh of the simulation domain; x ∈ (−20.5, 26.5) and y ∈ (−12.5, 12.5). A zoomed-in view around the bluff body is presented in the black rectangle at the right. The boundaries of the simulation domain, the bluff body surface and the jet area are denoted.

The initial condition of this evaluation is different compared to figure 4, indicating the adaptability of the controller to different initial conditions. The other parameters in this run are consistent with the results in figure 4. The control performance and behaviour in this test are consistent with the results shown in figure 4, both in the transient stage and in the asymptotic stage. The drag coefficient C_D starts from the condition of steady vortex shedding and drops to the value of the stabilised flow in around 120 time units, with minor fluctuations. After the training time (200 time units), the controller is still able to prevent the triggering of vortex shedding and preserve the drag coefficient near the base-flow values (minimum drag without vortex shedding). The behaviour of the controller is further presented in the subfigures of Q_1. The controller creates negligible random mass flow after stabilising the vortex shedding, due to the maximum entropy used in training.

Figure 17: A long evaluation over 400 non-dimensional time units of the RL-trained dynamic controller in a PM environment. Control starts at t = 0. Solid curves show the controlled C_D using TQC and the "Baseline" without control. The dash-dotted curve corresponds to the base-flow C_D. The mass flow rate Q_1 is presented for t ∈ [0, 200] and t ∈ [200, 400], respectively.

Convergence can be referred to as Soft Policy Evaluation (Lemma 1) in Haarnoja et al. (2018b). With the soft Q-function rendering values for the policy, the policy optimisation is given as Soft Policy Improvement (Lemma 2 in Haarnoja et al. (2018b)). In SAC, a stochastic soft Q-function Q_θ(s_t, a_t) and a policy π_ϕ(a_t | s_t) are parameterised by artificial neural networks θ (critic) and ϕ (actor), respectively. During training, Q_θ(s_t, a_t) and π_ϕ(a_t | s_t) are optimised with the stochastic gradients ∇_θ J_Q(θ) and ∇_ϕ J_π(ϕ), designed corresponding to Soft Policy Evaluation and Soft Policy Improvement, respectively.

Table 1: Number of episodes N_c required for RL convergence in the different environments. The episode reward R_ep,c at the convergence point, the configuration of the NN and the dimension of the inputs are presented for each case. N_fs is the finite-horizon length of past actions-measurements.

Table 3: Hyperparameters used by default in TQC. For SAC, "top quantiles to drop per net" is not used, and the other parameters remain the same. For the entropy target, −dim(A) denotes the dimension of the action space A.
Comprehensive Analysis of mRNA Expression Profiles in Head and Neck Cancer by Using Robust Rank Aggregation and Weighted Gene Coexpression Network Analysis

Background. Head and neck squamous cell cancer (HNSCC) is the sixth most common cancer in the world; its pathogenic mechanism remains to be further clarified. Methods. Robust rank aggregation (RRA) analysis was utilized to identify the metasignature dysregulated genes, which were then used for potential drug prediction. Weighted gene coexpression network analysis (WGCNA) was performed on all metasignature genes to find hub genes. DNA methylation analysis, GSEA, functional annotation, and immunocyte infiltration analysis were then performed on the hub genes to investigate their potential role in HNSCC. Results. A total of 862 metasignature genes were identified, and 6 potential drugs were selected based on these genes. Based on the result of WGCNA, six hub genes (ITM2A, GALNTL1, FAM107A, MFAP4, PGM5, and OGN) were selected (GS > 0.1, MM > 0.75, GS p value < 0.05, and MM p value < 0.05). All six genes were downregulated in tumor tissue (FDR < 0.01) and were related to the clinical stage and prognosis of HNSCC to varying degrees. Methylation analysis showed that the dysregulation of ITM2A, GALNTL1, FAM107A, and MFAP4 may be caused by hypermethylation. Moreover, the expression level of all 6 hub genes was positively associated with immune cell infiltration, and the result of GSEA showed that all hub genes may be involved in the process of immunoregulation. Conclusion. All identified hub genes could be potential biomarkers for HNSCC and provide new insight into the diagnosis and treatment of head and neck tumors.

Introduction

Head and neck squamous cell carcinoma (HNSC) is the sixth most common cancer in the world [1]. Worldwide, more than 300000 patients die of HNSC every year [2]. Although many treatments for HNSC, such as surgery, chemotherapy, and radiotherapy, have achieved some success, the 5-year survival rate is still only 40-50% [3]. The chances of survival for patients with HNSCC depend largely on the initial stage of the cancer. Therefore, early detection and accurate diagnosis are crucial for patients with HNSCC to receive timely treatment. In the past twenty years, with the application of microarray and next-generation sequencing technologies, a great number of novel diagnostic and therapeutic biomarkers have been identified in HNSCC [4]. However, small sample sizes in individual studies, different platform technologies, and different screening criteria strongly affect the research results. To solve this problem and obtain stable biomarkers, researchers proposed a novel rank aggregation method, robust rank aggregation (RRA) [5], which has been implemented as an R package (RobustRankAggreg) [5], to identify the overlapping genes among ranked gene lists [6], thus making the results more reliable. WGCNA [7] is an effective method to find clusters of highly correlated genes and identify the hub genes of each cluster. This method has been widely applied in various biological contexts. In our study, a total of 24 independent gene datasets were included in the RRA analysis to identify robust DEGs. We used these DEGs to predict potential small molecule drugs. A coexpression network was then established by WGCNA to identify hub genes among these robust DEGs. The role of all hub genes in HNSCC was then validated with other independent databases.
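Since RRA is central to the screening step, a simplified sketch of its scoring statistic may be helpful: for one gene, the normalized ranks across studies are compared with the order statistics of a uniform null. This is only an illustration of the idea behind the method of Kolde et al. [5], without the exact p-value correction of the full algorithm, and the rank data are made up.

```python
import numpy as np
from scipy.stats import beta

def rra_score(norm_ranks):
    """Simplified RRA rho score for a single gene.

    norm_ranks: the gene's rank in each study divided by that study's list length.
    Under the null, the j-th smallest of k uniform ranks is Beta(j, k - j + 1).
    """
    r = np.sort(np.asarray(norm_ranks))
    k = r.size
    p_order = [beta.cdf(r[j], j + 1, k - j) for j in range(k)]
    return min(p_order)  # small score = ranked consistently high across studies

print(rra_score([0.01, 0.03, 0.02, 0.05]))   # consistently top-ranked gene
print(rra_score([0.40, 0.75, 0.10, 0.62]))   # inconsistent gene, larger score
```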
Furthermore, we also utilized multiple online tools, such as DiseaseMeth [8], MEXPRESS [9], and MethSurv [10], to evaluate the methylation level of the hub genes. TIMER was used to assess the association between immune infiltration and the hub genes. GSEA [11] was applied to explore the biological functions of these hub genes. To the best of our knowledge, this is the first study to utilize RRA and WGCNA simultaneously for screening biomarkers of HNSCC.

Functional Enrichment Analysis. We selected the top 300 DEGs to perform GO and KEGG enrichment analyses. In the KEGG pathway analysis, we found that these DEGs were enriched in multiple cancer-related pathways, such as focal adhesion, the PI3K-Akt signaling pathway, pathways in cancer, small-cell lung cancer, transcriptional misregulation in cancer, and chemical carcinogenesis (Figure 3(a)). Furthermore, among all the KEGG and GO terms, we found that these metasignature genes were mostly involved in pathways associated with the construction of the ECM, such as ECM-receptor interaction, extracellular matrix organization, collagen catabolic process, collagen binding, collagen trimer, extracellular region, and extracellular exosome (Figure 3).

Based on the CMap analysis of the metasignature genes, six potential small molecule drugs were identified (Table 1). Their 2D structures are visualized in Supplementary Fig 1. These potential drugs can to some extent reverse the robust dysregulated genes in HNSCC, thus providing suggestions for the development of targeted drugs.

Identification of Hub Genes in HNSCC Patients. To identify the hub genes, we performed WGCNA on GSE65858, which included 270 samples from HNSCC patients with complete clinical data. Six different gene modules were identified (Figure 4) according to the result of cluster analysis on the expression data of the metasignature DEGs. The correlation coefficients between each module and each clinical trait were calculated, and only the blue module and the gray module were significantly associated with the T grade of HNSCC (Figure 4(e)). Because genes in the gray module are not significantly coexpressed with each other, we only chose the blue module as the key module. A total of 102 genes were included in the blue module, and the enrichment analysis of these genes showed that the most significant GO and KEGG terms were related to cell metabolism, chemokine activity, and transmembrane transport (Supplementary Fig 2). According to the values of GS and MM (GS > 0.1, MM > 0.75, GS p value < 0.05, and MM p value < 0.05), 6 genes (ITM2A, GALNTL1, OGN, FAM107A, MFAP4, and PGM5), which were also significantly correlated with each other (Figure 4(f)), were selected from the blue module.

We then compared the expression level of these genes between normal tissue and tumor tissue in the TCGA database. Considering that the result of WGCNA revealed a negative association between the hub genes and tumor T grade, we also used the TCGA database to validate the role of the hub genes in the TN grade of HNSCC. In Figure 5(a), it is clear that the expression of all 6 hub genes differed remarkably between normal and tumor tissues, and the ROC curves indicate a high diagnostic value for all hub genes (Supplementary Fig 4A). In Figure 5(b), we can see that five hub genes (ITM2A, GALNTL1, FAM107A, MFAP4, and PGM5) were upregulated in the T1-T2 stage and downregulated in the T3-T4 stage, which is consistent with the result of WGCNA. However, there was no correlation between the hub genes and tumor N stage (Supplementary Fig 3).
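As a sketch, the hub-gene filter described above reduces to a few boolean conditions on the WGCNA output. The column names and example values below are hypothetical; GS denotes the gene-trait correlation (here with T grade) and MM the module membership, and taking absolute values is an assumption of this sketch.

```python
import pandas as pd

def select_hub_genes(df):
    """Apply the hub-gene criteria: GS > 0.1, MM > 0.75, both p values < 0.05."""
    keep = (
        (df["GS"].abs() > 0.1) & (df["GS_p"] < 0.05) &
        (df["MM"].abs() > 0.75) & (df["MM_p"] < 0.05)
    )
    return df[keep]

genes = pd.DataFrame(
    {"GS": [0.18, 0.05], "GS_p": [0.01, 0.30],
     "MM": [0.81, 0.60], "MM_p": [0.001, 0.20]},
    index=["ITM2A", "someGene"],
)
print(select_hub_genes(genes).index.tolist())  # ['ITM2A']
```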
The result above indicated that these hub genes may affect the growth rather than the metastasis of the tumor. The hub genes also distinguished tumor tissue from dysplasia tissue (Supplementary Fig 4B). We also explored the prognostic role of all these hub genes by using the GEPIA website [12]. The KM curves showed that lower expression of four hub genes (ITM2A HR = 0.72, GALNTL1 HR = 0.74, FAM107A HR = 0.72, and MFAP4 HR = 0.76) was significantly associated with poor overall survival (Figure 6(a)).

DNA Methylation and Expression of Hub Genes. Because methylation can significantly affect the expression of many genes, we first used DiseaseMeth 2.0 to explore the mean methylation level of the hub genes. Because OGN was not included in DiseaseMeth, we only explored the other 5 genes. We found that the mean methylation level of ITM2A, GALNTL1, FAM107A, and MFAP4 was significantly higher in tumor tissue, while the methylation level of PGM5 was higher in normal tissue (Supplementary Fig 5). This indicates that the low expression of PGM5 in HNSCC may not be caused by methylation. We next explored the relationship between the four hub genes and their methylation sites. From Supplementary Fig 6, we can see that various methylation sites on each gene were negatively correlated with the expression level of the corresponding gene, indicating that the downregulation of the four hub genes (ITM2A, GALNTL1, FAM107A, and MFAP4) may be caused by hypermethylation. To find the key methylation sites of the hub genes, we also used MethSurv to explore the prognostic role of these methylation sites (r < 0 and adjusted p value < 0.05). A total of 15 methylation sites were found to be important prognostic factors for HNSCC (Figure 7).

Immune Infiltration and Hub Genes. The tumor microenvironment comprises multiple kinds of cells, such as epithelial cells, fibroblasts, and immune cells. A great number of studies have revealed the significant role of immune cells in various cancers. Therefore, we used TIMER to investigate the association between the hub genes and different kinds of cells. Interestingly, we found that all hub genes were negatively correlated with tumor purity. Conversely, all 6 hub genes were positively related to the infiltration of immune cells (Figure 8).

GSEA Revealed Pathways Dysregulated by Hub Genes. To further explore the expression pathways of all 6 hub genes, GSEA was performed for each gene. Supplementary Fig 7 presents the top 10 enriched pathways for each hub gene (ranked by enrichment score). According to the result of GSEA, we found that multiple immune-related pathways were significantly enriched in the higher expression groups of the hub genes, such as allograft rejection, primary immunodeficiency, intestinal immune network for IgA production, T cell receptor signaling pathway, B cell receptor signaling pathway, autoimmune thyroid disease, graft-versus-host disease, human T cell leukemia virus 1 infection, leukocyte transendothelial migration, Th1 and Th2 cell differentiation, Th17 cell differentiation, and asthma.

Discussion

To identify the robust dysregulated genes in HNSCC, we included a total of 24 independent datasets for RRA analysis. A total of 466 upregulated genes and 396 downregulated genes were identified. The top 5 upregulated genes mostly came from the matrix metalloproteinase (MMP) family, whose members have been proved to play a vital role in the progression, invasion, and metastasis of HNSCC [13]. The most downregulated gene is TMPRSS11B, a member of the type II transmembrane serine protease family.
It has been reported to be downregulated in multiple epithelial cancers [14]. To further understand the biological function of these metasignature genes, we performed GO and KEGG analyses on the top 300 metasignature DEGs. Multiple cancer-related pathways, such as transcriptional misregulation, the PI3K-Akt signaling pathway, pathways in cancer, and ECM-receptor interaction, were significantly enriched, confirming the important role of these DEGs in HNSCC. Furthermore, many enriched terms were associated with the construction of the ECM, indicating the importance of the microenvironment in the development of HNSCC. According to the results of the enrichment analyses, we confirmed that these metasignature DEGs are significantly related to the occurrence and development of HNSCC.

After identifying the robust DEGs in HNSCC, we used the expression pattern of these genes to predict potential small molecule drugs. The CMap database was used, and six small molecule drugs were selected because they can reverse the expression pattern of the metasignature DEGs. Among these drugs, four have been studied in HNSCC previously. For instance, thiostrepton has been reported to affect proliferation, apoptosis, and radiosensitivity in head and neck cancer [15,16]. Levamisole has also been used in HNSCC before, but its effect is still controversial [17]. Cyproterone and cortisone are both hormone medicines; however, there is no strong evidence that hormone therapy is effective for head and neck tumors. Repaglinide is a hypoglycemic agent while zimeldine is an antidepressant, and neither has been studied as a drug for HNSCC. Considering that the mortality rate of head and neck tumors has not improved significantly in the past ten years and that traditional treatment methods like surgery and radiotherapy may not be enough for HNSCC, it is meaningful to further explore the potential of chemical molecules in the targeted therapy of HNSCC.

To identify the hub genes among all 862 metasignature DEGs, WGCNA was utilized to construct a coexpression network. Finally, we identified 6 hub genes (ITM2A, GALNTL1, OGN, FAM107A, MFAP4, and PGM5) according to our selection criteria. We used other independent databases to validate the expression pattern and clinical relevance of these hub genes. The results showed that all hub genes were downregulated in tumor tissue and were negatively correlated with tumor T stage. Furthermore, compared with tumor tissue, these 6 hub genes were also downregulated in dysplasia tissue. The ROC curves indicated that these genes may help us better identify HNSCC. Besides, four genes (ITM2A, GALNTL1, FAM107A, and MFAP4) also performed well in the prognosis prediction of HNSCC. Interestingly, all 6 hub genes have seldom been explored in HNSCC previously. ITM2A, a member of the BRICHOS family, has been reported to be downregulated in both breast and ovarian cancers, where it may affect the proliferation and autophagy processes of tumors [18,19]. However, its role in HNSCC has not been fully studied. Similarly, the roles of the other 5 hub genes in cancer have also been reported previously to varying degrees. For example, PGM5 was identified as a diagnostic and prognostic biomarker in liver and colorectal cancers [20,21]. Higher expression of the antisense chain of PGM5 was shown to inhibit the proliferation and metastasis of tumors [22]. Higher expression of OGN was also reported to inhibit the process of EMT through the EGFR/Akt pathway [23].
However, the role of these genes in the development of HNSCC remains unclear. As we all know, hypermethylation is an important cause of the downregulation of gene expression. A recent study showed that hypermethylation may lead to the low expression of FAM107A in laryngeal cancer [24], which is consistent with our results. Through methylation analysis, we also found that the low expression of another three hub genes (ITM2A, GALNTL1, and MFAP4) may be significantly associated with hypermethylation at multiple methylation sites. Because DNA methylation is a reversible process, therapies targeting the unique methylation sites of the tumor are promising. To further screen out methylation sites with research potential, we also performed survival analysis and found that hypermethylation of 15 methylation sites in FAM107A, GALNTL1, and MFAP4 was significantly associated with poor overall survival. All the selected hub genes and their methylation status may help us better judge the state of HNSCC (indolent or invasive), so as to develop a more appropriate treatment strategy.

A great number of previous studies have revealed that the infiltration of immune cells in the tumor microenvironment can largely affect the development of cancer cells [25,26]. Therefore, we used TIMER to explore the relationship between the hub genes and immune cell infiltration. Interestingly, all six hub genes were positively correlated with the infiltration of B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils, and dendritic cells, indicating that our hub genes may to some extent play a role in immunological regulation. The results of GSEA further support this hypothesis, as a great number of immune-related pathways were significantly enriched in the higher expression groups of the hub genes. A recent study supports our result: Hu et al. pointed out that higher expression of OGN can promote the infiltration of CD8+ T cells, thus inhibiting the formation of new blood vessels in colorectal cancer [27]. Some studies have also described the role of some of the hub genes (ITM2A, MFAP4) in immunoregulation [28,29]. However, the role of these genes in tumor immune regulation is still not fully illustrated, and more experimental studies are needed.

Conclusion

In conclusion, by utilizing the RRA method, we identified a series of robust DEGs in HNSCC. Based on WGCNA, 6 hub genes (ITM2A, GALNTL1, OGN, FAM107A, MFAP4, and PGM5) in the blue module were selected. All hub genes were significantly downregulated in the tumor tissue of HNSCC. The expression pattern of four hub genes (ITM2A, GALNTL1, FAM107A, and MFAP4) may be caused by hypermethylation. All six hub genes may play a role in immunological regulation in the microenvironment of HNSCC, which needs more experiments to verify.

Materials and Methods

5.1. Selection of Included Datasets. The mRNA expression profile-related datasets were searched in the GEO database by using the keywords head and neck cancer, larynx, laryngeal, tongue, mouth, oral, oropharynx, tonsil, hypopharynx, and hard palate. Two people independently screened the datasets based on the following inclusion criteria: (1) Included datasets must provide the gene expression profile of HNSCC and a corresponding normal tissue control. (2) Each group of one dataset should contain at least 5 samples. (3) The platform of each study should contain more than 8000 genes.
Finally, a total of 25 studies were included in our research, of which 24 independent studies were used for RRA analysis; one dataset (GSE30784), with gene expression data of dysplasia tissue, was used for further validation and exploration. Detailed information on the included GEO datasets is shown in Table 2.

Identification of Robust DEGs. The R package "GEOquery" was used to directly obtain the series matrix files, sample phenotype data, and corresponding platform information from the GEO database. We used the "limma" R package to normalize the data and obtain the DEGs of each study (p value < 0.05). The up- or downregulated genes were ranked from large to small according to the absolute value of logFC. The "RobustRankAggreg" package in R was created for the comparison of ranked gene lists and the identification of metasignature genes. RRA helps identify more robust genes across different studies; the details of the RRA method have been described in previous articles [30]. In the end, the p values of the output were subjected to Bonferroni correction, and mRNAs with adjusted p value < 0.05 were considered significantly dysregulated. Furthermore, the "OmicCircos" R package was utilized to visualize the expression patterns of the top 100 metasignature DEGs in each included study (dysregulated genes ranked according to adjusted p value).

Enrichment Analysis. We used DAVID Bioinformatics Resources 6.8 (DAVID; http://david.abcc.ncifcrf.gov/) to annotate the top 300 metasignature genes. GO and KEGG enrichment analyses were performed by using the prediction tool on the website. Bubble charts were used to visualize the top 20 terms of the enrichment results.

Identification of Potential Drugs for HNSCC. The Connectivity Map (CMap) [31] database (http://www.broadinstitute.org) can help us to predict potential drugs that can reverse the expression of specific genes. In this study, we input the top 300 metasignature genes (165 upregulated and 135 downregulated genes) into the online tool of CMap for gene set enrichment analysis. Each small molecule is assigned an enrichment score between -1 and 1; the lower the enrichment score, the better the drug's ability to reverse the state of HNSCC cells. In our study, drugs with p value < 0.01 and enrichment score < −0.7 were considered potential small molecules. We also used PubChem (http://www.pubchem.ncbi.nlm.gov) to visualize the 2D structures of the selected small molecules.

Key Module and Hub Genes Identified by WGCNA. A total of 862 metasignature genes were included in WGCNA with expression data from GSE65858. We constructed a gene coexpression network for all metasignature DEGs; the "WGCNA" R package was applied to explore the relationship between each coexpression module and the clinical phenotypes. A correlation matrix was constructed, which was subsequently transformed into a TOM matrix based on the soft threshold (β = 7, R² = 0.9). All metasignature genes were distributed into different gene modules according to the values of the TOM matrix. Here, we set the minimal module size to 15 and the cut height to 0.5. The module with a significant correlation with clinical characteristics was selected. GO and KEGG enrichment analyses were performed on the clinical-related module. We selected the hub genes according to the values of GS and MM (GS > 0.1, MM > 0.75, GS p value < 0.05, and MM p value < 0.05).
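For readers unfamiliar with the WGCNA internals, the sketch below computes the soft-thresholded adjacency and the topological overlap matrix (TOM) on a toy expression matrix. It only loosely mirrors the R package's unsigned defaults; β = 7 is the soft threshold quoted above, and the toy data dimensions are illustrative.

```python
import numpy as np

def tom_matrix(expr, beta=7):
    """Unsigned adjacency and topological overlap matrix, as used in WGCNA.

    expr: samples x genes expression matrix.
    """
    a = np.abs(np.corrcoef(expr, rowvar=False)) ** beta  # soft-thresholded adjacency
    np.fill_diagonal(a, 0.0)
    k = a.sum(axis=1)                                    # connectivity of each gene
    l = a @ a                                            # shared-neighbour term
    tom = (l + a) / (np.minimum.outer(k, k) + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return tom

expr = np.random.randn(270, 50)     # 270 samples (as in GSE65858), 50 toy genes
d = 1.0 - tom_matrix(expr)          # dissimilarity fed to hierarchical clustering
```

Hierarchical clustering of this dissimilarity, followed by tree cutting with the module size and cut height given above, is what produces the colour-coded gene modules.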
5.6. Verify the Clinical Relevance of Hub Genes. We first used the TCGA database to validate the diagnostic role of the hub genes and the relationship between the hub genes and clinical characteristics. We also used an independent dataset (GSE30784) to explore the hub genes' expression levels between dysplasia tissue and tumor tissue. The Student t-test or one-way analysis of variance (ANOVA) was used as appropriate to test the comparisons. Furthermore, we also plotted ROC curves to assess the hub genes' diagnostic value; the area under the ROC curve (AUC) was calculated by the "pROC" R package. Survival analysis was also performed on all hub genes by using GEPIA (a visualization website based on the TCGA database: http://gepia.cancer-pku.cn/). The median was taken as the cutoff between high and low expression of the hub genes.

Table 2 (excerpt): Detailed information of the included GEO datasets.
Year  Accession  Platform  Genes  Country  Tumor type  Tumor samples  Control samples
2008  GSE10121   GPL6353   33484  Germany  OSCC        35             6
2017  GSE103412  GPL23978  39321  Denmark  OSCC        23             9
2008  GSE13399   GPL7540   36197  USA      HNSCC       8              8
2008  GSE13601   GPL8300   12625  USA      OSCC        31             26
2019  GSE138206  GPL570    9442   China    OSCC        6              6
2020  GSE143224  GPL5175   19076  Brazil   LSCC        14             11
2010  GSE23036   GPL571    22277  Spain    HNSCC       63             5
2010  GSE23558   GPL6480   41000  India    OSCC        27             5

Methylation Analysis. In order to further explore the reasons for the dysregulation of the hub genes, we performed methylation analysis on all hub genes based on DiseaseMeth 2.0 [8] (http://bioinfo.hrbmu.edu.cn/diseasemeth/), a website focused on collecting methylation data from various tumor tissues. We compared the mean methylation level between HNSCC and corresponding normal tissue. Furthermore, we also used MEXPRESS [9] (http://mexpress.be) to explore the association between the expression level of the hub genes and the methylation level of the corresponding methylation sites. Methylation sites that were negatively correlated with gene expression were defined as candidate sites. To further screen potential key methylation sites, we also conducted survival analyses on these candidate sites by using MethSurv [10] (https://biit.cs.ut.ee/methsurv/).

Immune Cell Infiltration and Hub Genes. To explore the association between immune cell infiltration and the expression level of the hub genes, we used TIMER [32] (https://cistrome.shinyapps.io/timer/), an online tool based on the TCGA database, to evaluate the infiltration scores for six kinds of important immune cells (B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages, and dendritic cells). The Pearson correlation coefficients between the hub genes and the infiltration scores were then calculated.

5.9. Gene Set Enrichment Analysis. According to the mean expression value of the 6 hub genes, all HNSCC samples in the TCGA database were divided into high and low expression groups. GSEA was performed and visualized by using the "clusterProfiler" R package. The KEGG gene set was directly downloaded from MSigDB (http://software.broadinstitute.org/gsea/msigdb/index.jsp).

Supplementary Fig 4: diagnostic role of the hub genes. A: diagnostic role of the hub genes between normal tissue and tumor tissue. B: diagnostic role of the hub genes between tumor tissue and dysplasia tissue. Supplementary Fig 5: mean methylation level of the hub genes between normal and tumor tissues. Supplementary Fig 6:
2020-12-17T09:07:52.213Z
2020-12-07T00:00:00.000
{ "year": 2020, "sha1": "e80bdc13820fcab6df09a3a7064ec3e2e2bf8d32", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2020/4908427.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e52fbf872ba62c9ed115528d94517c5130c7c697", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244520473
pes2o/s2orc
v3-fos-license
Numerical simulation of time partial fractional diffusion model by Laplace transform Abstract: In the present work, the authors developed a scheme for the time Fractional Partial Diffusion Differential Equation (FPDDE). The considered class of FPDDE describes the flow of fluid from a region of higher density to a region of lower density; macroscopically, it is associated with the gradient of concentration. FPDDEs are used in different branches of science for the modeling and better description of those processes that involve the flow of substances. The authors introduced the novel concept of fractional derivatives in terms of both the time and space independent variables in the proposed FPDDE. We provide the approximate solution for the underlying generalized nonlinear time FPDDE in the sense of the Caputo differential operator via the Laplace transform combined with the Adomian decomposition method, known as the Laplace Adomian Decomposition Method (LADM). Furthermore, we establish the general scheme for the considered model in the form of an infinite series by the aforementioned technique. The consequent results obtained by the proposed technique ensure that LADM is an effective and accurate technique for handling nonlinear partial differential equations as compared to the other available numerical techniques. At the end of this paper, the obtained numerical solution is visualized graphically by Matlab to describe the dynamics of the desired solution. Introduction Those dynamical and biological phenomena that involve a rate of change are usually modeled by Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs). Differential Equations (DEs) have the ability to describe all types of dynamic phenomena and are used to describe exponential growth and decay over time. DEs have a wide range of applications in various fields, such as physics, engineering, and biology. Furthermore, some useful applications of DEs to model engineering and physical phenomena can be found in some recent articles; see [1][2][3][4][5]. ODEs often model one-dimensional dynamical systems, such as momentum, the flow of electricity, the motion of an object, and the motion of a pendulum, and are used to explain thermodynamic concepts; for details, see [6][7][8]. DEs are also used in the formulation of biological problems, such as checking the growth and decay of diseases. On the other hand, PDEs are used to model multidimensional systems. One of the important and significant applications of PDEs is that they can be used to formulate natural phenomena, such as sound waves, heat transfer, electrostatics, electrodynamics, quantum mechanics, and the flow of fluids; see [9][10][11][12][13][14]. An essential aspect of the proposed field is the investigation of Fractional Partial Differential Equations (FPDEs). Researchers have paid considerable attention to investigating the concerned class of DEs from both theoretical and application points of view. Fractional operators have some advantages over conventional derivatives, such as hereditary properties, memory effects, a global character, and a greater degree of freedom. Keeping this importance in view, researchers have paid more attention to investigating the considered class of FPDEs. Mathematical models involving fractional-order derivatives are more reliable and accurate as compared to those with traditional derivatives. In some situations, a mathematical model involving an integer-order derivative does not describe the real situation.
In this connection, fractional-order derivatives are more efficient at describing such real-world problems; see [15][16][17][18][19][20][21]. The diffusion equation is one of the well-known PDEs; it is related to the flow of fluid from a region of higher density to a region of lower density, and macroscopically it is associated with the gradient of concentration. Fractional Partial Diffusion Models (FPDMs) are used in different branches of science for the modeling and better description of those processes that involve the flow of substances. For example, the time-fractional diffusion model describes the diffusion of porous media in physics, relaxation phenomena, and anomalous diffusion. Apart from this, FPDMs are also used in the modeling of a variety of biological processes, like the transport of fusion plasma; see [22]. FPDDEs are the generalization of diffusion models with integer-order derivatives. S. Kumar et al. [23,24] studied the time FPDDE under external force, which is given by
$$\frac{\partial^{\alpha}\psi(x,t)}{\partial t^{\alpha}}=\frac{\partial}{\partial x}\Big[F(x)\,\psi(x,t)\Big]+K\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}},\qquad 0<\alpha\le 1,\tag{1.1}$$
where the probability density function is represented by ψ(x, t) for the particle at a point x and time t. The constant K > 0 depends upon the universal gas constant, the friction coefficient, the temperature, the external force F(x), and the Avogadro number. Researchers take a keen interest in the analysis of FPDDEs due to their numerous applications in physical science, biology, and medicine. Moreover, fractional diffusion equations are used to model turbulent flow, groundwater contaminant transport, and the chaotic dynamics of classical conservative systems. Therefore, different analytic and numerical techniques are used for the investigation of such models [25]. Researchers have studied different features of FPDEs by various numerical techniques, such as the New Homotopy Perturbation Method (NHPM) [26], the Finite Difference Method (FDM) [27], the New Iterative Method (NIM) [28], and many more [29,30]. Recently, S. Kumar et al. studied the time diffusion equation via the Homotopy Perturbation Transform Method (HPTM) [23], with a fractional derivative term in the independent variable t only. In this paper, the authors have generalized the idea used in [22] and introduced the novel concept of fractional-order derivatives in the sense of both independent variables t and x. The newly constructed FPDDE is given by
$$\frac{\partial^{\alpha}\psi(x,t)}{\partial t^{\alpha}}=\frac{\partial^{\gamma}}{\partial x^{\gamma}}\Big[F(x)\,\psi(x,t)\Big]+K\,\frac{\partial^{\beta}\psi(x,t)}{\partial x^{\beta}},\qquad 0<\alpha\le 1,\ 0<\gamma\le 1,\ 1<\beta\le 2,\tag{1.2}$$
which reduces to (1.1) for γ = 1 and β = 2. The authors used the tool of LADM to investigate the considered model. The Laplace Adomian decomposition is an effective tool for obtaining the approximate solution of nonlinear FPDEs. LADM is the combination of two powerful techniques, the Adomian decomposition method and the Laplace transform. LT inversion is a well-known ill-posed problem; here we assume that an exact inverse can be computed from LT tables. When a closed form for the solution is unknown, numerical LT inversion is needed; it requires attention, and different methods, algorithms, and software elements are available in the literature; see [31,32]. Using the proposed technique, we can obtain both analytical solutions, if they exist, and approximate solutions for nonlinear DEs. The considered technique is more powerful as compared to the other available techniques because it gives us particular solutions without finding the general solution of the DEs. Furthermore, unlike the Runge–Kutta method, it does not require a predefined step-size declaration; it possesses fewer parameters and needs no discretization or linearization. As compared to other analytical techniques, the proposed technique is efficient and simple for investigating the numerical solution of nonlinear fractional partial differential equations.
The results obtained by this method ensure the capability and reliability of the proposed method for nonlinear fractional partial differential equations; see [21,33,34]. Preliminaries This section is devoted to the well-known definitions of fractional calculus that will be used in the remainder of this work. Definition 2.1. The LT of a function g(x, t), defined ∀ t ≥ 0, is denoted by G(x, s) = L{g(x, t)} and is given as
$$G(x,s)=\mathscr{L}\{g(x,t)\}=\int_{0}^{\infty}e^{-st}\,g(x,t)\,dt,$$
where L is called the LT operator or Laplace transformation and s is the transformed variable. Definition 2.2. In the Caputo sense, the fractional-order derivative of the function ψ on the interval is
$$D_{t}^{\alpha}\psi(t)=\frac{1}{\Gamma(m-\alpha)}\int_{0}^{t}(t-\tau)^{m-\alpha-1}\,\psi^{(m)}(\tau)\,d\tau,\qquad m-1<\alpha\le m,$$
where m = [α] + 1 and [α] is the integral part of α. As α → m, the Caputo fractional derivative becomes the conventional mth-order derivative of the function. Particularly, for α ∈ (0, 1),
$$D_{t}^{\alpha}\psi(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}(t-\tau)^{-\alpha}\,\psi'(\tau)\,d\tau.$$
Definition 2.3. The LT of the Caputo derivative is given by
$$\mathscr{L}\{D_{t}^{\alpha}\psi(t)\}=s^{\alpha}\Psi(s)-\sum_{k=0}^{m-1}s^{\alpha-k-1}\,\psi^{(k)}(0),$$
where m = [α] + 1 and [α] denotes the integral part of α. Analysis of time partial fractional diffusion equation This section of the work is devoted to the general scheme for the solution of the time FPDDE (1.2) via LT. The proposed time FPDDE under external force is given by
$$\frac{\partial^{\alpha}\psi(x,t)}{\partial t^{\alpha}}=\frac{\partial^{\gamma}}{\partial x^{\gamma}}\Big[F(x)\,\psi(x,t)\Big]+K\,\frac{\partial^{\beta}\psi(x,t)}{\partial x^{\beta}},\tag{3.1}$$
subject to the initial condition (IC) ψ(x, 0) = ψ₀(x), where the probability density function is represented by ψ(x, t) for the particle at a point x and time t, and the constant K > 0 depends upon the universal gas constant, the friction coefficient, the temperature, the external force F(x), and the Avogadro number. Applying LT to (3.1), we get
$$\mathscr{L}\{D_{t}^{\alpha}\psi(x,t)\}=\mathscr{L}\Big\{\frac{\partial^{\gamma}}{\partial x^{\gamma}}\big[F(x)\psi\big]+K\,\frac{\partial^{\beta}\psi}{\partial x^{\beta}}\Big\}.$$
By using the properties of LT, the above relation becomes
$$\Psi(x,s)=\frac{\psi_{0}(x)}{s}+\frac{1}{s^{\alpha}}\,\mathscr{L}\Big\{\frac{\partial^{\gamma}}{\partial x^{\gamma}}\big[F(x)\psi\big]+K\,\frac{\partial^{\beta}\psi}{\partial x^{\beta}}\Big\}.\tag{3.2}$$
Applying the inverse LT and using the IC,
$$\psi(x,t)=\psi_{0}(x)+\mathscr{L}^{-1}\Big[\frac{1}{s^{\alpha}}\,\mathscr{L}\Big\{\frac{\partial^{\gamma}}{\partial x^{\gamma}}\big[F(x)\psi\big]+K\,\frac{\partial^{\beta}\psi}{\partial x^{\beta}}\Big\}\Big].$$
Assuming the solution ψ(x, t) in the form of an infinite series, we have $\psi(x,t)=\sum_{i=0}^{\infty}\psi_{i}(x,t)$. Thus, Eq (3.2) becomes
$$\sum_{i=0}^{\infty}\psi_{i}(x,t)=\psi_{0}(x)+\mathscr{L}^{-1}\Big[\frac{1}{s^{\alpha}}\,\mathscr{L}\Big\{\frac{\partial^{\gamma}}{\partial x^{\gamma}}\Big[F(x)\sum_{i=0}^{\infty}\psi_{i}\Big]+K\,\frac{\partial^{\beta}}{\partial x^{\beta}}\sum_{i=0}^{\infty}\psi_{i}\Big\}\Big].\tag{3.3}$$
Comparing the ith terms of Eq (3.3), we have
$$\psi_{0}(x,t)=\psi_{0}(x),\qquad \psi_{i+1}(x,t)=\mathscr{L}^{-1}\Big[\frac{1}{s^{\alpha}}\,\mathscr{L}\Big\{\frac{\partial^{\gamma}}{\partial x^{\gamma}}\big[F(x)\psi_{i}\big]+K\,\frac{\partial^{\beta}\psi_{i}}{\partial x^{\beta}}\Big\}\Big],\quad i=0,1,2,\ldots$$
By simple computation, we get the components ψ₁, ψ₂, ψ₃, …; continuing in the same fashion, we obtain the series solution as ψ(x, t) = ψ₀ + ψ₁ + ψ₂ + ψ₃ + ⋯. The error in the infinite series can be calculated from the nth term of the series truncated to n − 1 terms. Following the truncation error in Taylor series, power series, etc., we can say that the order of the error is O(z^{nα}). Consequently, we can say that the error in the series after truncating it to the first four terms is O(z^{4α}). Now the truncated approximate solution can be expressed as $\psi(x,t)\approx\sum_{i=0}^{3}\psi_{i}(x,t)$, z ∈ (t₀, t_f). Numerical discussion This section comprises a few examples to elaborate the constructed scheme for the considered model; we have also presented plots to illustrate the computation for the proposed examples. Example 1. Consider the time FPDDE (4.1) subject to its IC. Applying LT to (4.1) and using the properties of LT, then applying the inverse LT together with the IC, and assuming the solution ψ(x, t) in terms of an infinite series, we compare the terms as in Eq (3.3) and calculate the components one by one. Continuing in this manner, the desired series solution of Eq (4.1) is obtained, together with the solution truncated after four terms and the corresponding error for the considered example. For classical order α = 1, the obtained solution reduces to the classical solution.
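To see how the recursion above can be carried out symbolically, here is a minimal sketch in Python/SymPy, restricted to the classical spatial orders γ = 1 and β = 2 so that plain derivatives apply, and using for concreteness the parameters of Example 2 below (K = 2, F(x) = −e⁻ˣ); the initial condition ψ₀(x) = eˣ is an illustrative assumption, not taken from the paper. The Laplace step L⁻¹[s⁻ᵅ L[·]] is applied termwise via tᵏ → Γ(k+1)/Γ(k+α+1) · t^{k+α}.

```python
# Hedged sketch of the LADM recursion psi_{i+1} = L^{-1}[ s^{-alpha} L[ N(psi_i) ] ]
# with N(psi) = d/dx(F*psi) + K * d^2 psi/dx^2 (gamma = 1, beta = 2).
import sympy as sp

x, t, a = sp.symbols("x t alpha", positive=True)
K = 2
F = -sp.exp(-x)

# Each iterate is stored as a list of (coefficient in x, power of t) terms.
terms = [(sp.exp(x), sp.Integer(0))]      # illustrative IC: psi_0(x) = e^x
series = list(terms)
for _ in range(3):                        # compute psi_1, psi_2, psi_3
    nxt = []
    for c, p in terms:
        rhs = sp.simplify(sp.diff(F * c, x) + K * sp.diff(c, x, 2))
        # L^{-1}[s^{-a} L[c(x) t^p]] = c(x) * Gamma(p+1)/Gamma(p+a+1) * t^(p+a)
        nxt.append((rhs * sp.gamma(p + 1) / sp.gamma(p + a + 1), p + a))
    terms = nxt
    series += nxt

psi = sum(c * t**p for c, p in series)    # four-term truncated solution
sp.pprint(sp.simplify(psi))
```

With these illustrative choices the iterates generate the partial sums of eˣ Σₙ (2tᵅ)ⁿ/Γ(nα+1), a Mittag–Leffler-type series that collapses to e^{x+2t} for the classical order α = 1, and whose terms scale as t^{nα}, consistent with the O(z^{nα}) truncation-error behaviour noted above.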
Example 2. Consider the time FPDDE with K = 2, F(x) = −e⁻ˣ, β = 2, and γ = 1; the proposed equation is given as
$$\frac{\partial^{\alpha}\psi(x,t)}{\partial t^{\alpha}}=\frac{\partial}{\partial x}\Big[-e^{-x}\,\psi(x,t)\Big]+2\,\frac{\partial^{2}\psi(x,t)}{\partial x^{2}},\tag{4.5}$$
subject to the given IC. Applying LT to (4.5) and using the properties of LT, then applying the inverse LT together with the IC, and assuming the solution ψ(x, t) in terms of an infinite series, Eq (4.6) is obtained; comparing both sides of Eq (4.7) and calculating the terms one by one, and continuing in the same fashion, the solution of Eq (4.5) is obtained, together with the solution truncated after three terms and the classical-order (α = 1) solution. Discussion The obtained results reveal complete agreement with the results obtained by Das [24] and S. Kumar et al. [23] by VIM and HPTM, respectively. The plots obtained for the LADM solution and the VIM solution also show complete similarity, which justifies the claim of complete agreement. The LADM is valid for a wide range of nonlinear DEs. The consequences obtained by the proposed technique ensure that the scheme is very effective and easy to implement for the considered class of nonlinear FPDDE models. In comparison with other analytical techniques, the proposed technique is an efficient and simple tool to investigate the numerical solution of nonlinear fractional partial differential equations with a high degree of accuracy. The plots illustrate the probability of finding the flowing particles of the fluid at any point and time, based on the behavior of the approximate solution. Figures 1 and 2 show the behavior of the solution of Example 1, in which the dependent variable has a direct linkage with the space and time variables, whereas Figures 3 and 4 show the behavior of the solution of Example 2, in which the dependent variable has a direct linkage with the time variable and an inverse linkage with the space variable. For different values of α, the two-dimensional behavior of the corresponding approximate solutions is also displayed in the given plots. Conclusions In this paper, the authors developed a scheme for the generalized time FPDDE involving more than one fractional derivative. In order to obtain the desired results, the authors utilized the tools of the well-known numerical technique, the so-called LADM. To elaborate our main results, we provided some numerical problems for illustrative purposes in a few iterations, which demonstrates the reliability of the proposed technique. To describe the dynamics of the proposed model, we visualized the obtained results graphically via Matlab.
2021-11-24T16:34:27.790Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "15f1d2d02ea847ee42b7b54495bc53e6fa6fcac7", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.3934/math.2022159", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1d13f9fcc95405ad6097047418012469141281b7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
237591395
pes2o/s2orc
v3-fos-license
Optical and Electrical Properties of ALTiO2 Films Prepared by Magnetron Sputtering In the present work, ALTiO2 thin films were deposited on a glass substrate by using DC magnetron sputtering at varying currents (170, 180, 200) mA. The thickness of the ALTiO2 thin films was calculated using an optical interferometer system that used a He-Ne laser with a wavelength λ of 632.8 nm. The thickness of the thin films was (340, 162, 129) nm. UV-Vis spectroscopy was used to investigate the optical properties. The wavelength spectrum between (300-1000) nm was used to record the absorption and transmittance spectra of the ALTiO2 thin films. With increasing thickness and current, the optical band gap decreases from 3.2 to 1.9 eV. The electrical properties indicate that as the ALTiO2 film thickness increases ((340, 162, 129) nm, respectively), the resistivity decreases (1.19 to 9.84). Introduction A large range of parameters, such as time, surface, thickness, and the various substrates, can influence the electrical and structural properties of thin metal films. A composition of two or more thin metal film phases can improve the structural, electrical, and optical properties [1,2]. Many attempts have been made, both experimentally and theoretically, to determine the structural and optical properties of Al2O3 [3]. For electronic device applications, aluminum thin films and Al alloys are commonly used [4]. Thanks to their low density, their oxidation resistance at 1000 K, and the resulting rise in thrust-to-weight ratio, TiAl-based materials have gained popularity in aerospace applications and the automotive industry [5]. In the recent past, a variety of studies have been carried out on the preparation of titanium aluminides by reaction synthesis [6]. Crystalline TiAl films, however, are produced by sputtering from a stoichiometric target [7] or by solid-state reaction of multilayered coatings [7,8]. In a simple sputtering process, energetic ions, usually inert gas ions such as argon (Ar), bombard a target material that is to be deposited on the substrate. The target atoms are removed by the strong collisions of these inert gas ions with the target and condense on the substrate as a thin film of the same stoichiometry as the target substance [9]. Magnetron sputtering devices provide a strong magnetic field close to the target, allowing mobile electrons to spiral along the magnetic flux lines. This configuration confines the plasma near the target and away from the substrate, preventing contamination of the thin films being deposited and maintaining thin-film thickness uniformity during deposition [10]. The DC magnetron sputtering technique has the benefit of much higher efficiency than other methods of deposition, and it is commonly used in mass-production processes [11]. Coatings deposited by the DC magnetron sputtering route have the advantage of being uniform and homogeneous over a wide area [12]. This process has many benefits, such as a high deposition rate, high film purity and homogeneity, high adhesion, and high accuracy in controlling the thickness or grain size of the films obtained [13]. Materials and Methods Thin films of ALTiO2 were deposited as follows. The 99% pure aluminum target (5 cm) was cleaned with soft sanding and ethanol, and the cleaned substrate, washed ultrasonically with deionized water and ethanol, was placed on the sample stage (9 cm in diameter) at the center of the chamber (27×30 cm). The chamber cover was closed and the doorknob turned clockwise until tight, and the rotary vacuum pump was run until the pressure reached a value of 2.5×10⁻² mbar.
The distance between the target and the substrate was 2 cm. We used the technique of magnetic-field plasma sputtering; deposition of the Al2O3 compound was conducted on a group of vitreous (glass) samples to study the effect of increasing the sputtering pressure on the sample thickness for the deposition time and currents used. Three samples were deposited using a pressure of 6×10⁻¹ bar, and the deposition time was 15 min at the different currents. Thin-film thickness measurements. The thickness was measured using the optical interference method, in which the rays reflected from the base and from the thin film deposited on it interfere. A He-Ne laser with a wavelength of 632.8 nm was used at an incidence angle of 45°, as shown schematically in Figure 1. The thin-film thickness (t) was determined using the following formula [14]: t = (λ/2)·(Y/X) (1), where t is the thin-film thickness, λ the wavelength of the incident laser light, Y the dark-fringe width, and X the luminous-fringe width. Optical properties. The prepared films were optically measured using a UV-Vis-NIR (300-1000 nm) spectroscope, and the spectral dependences of the transmittance (T) and absorption (A) for all ALTiO2 films were calculated. The absorption spectra of the ALTiO2 thin films as a function of wavelength, ranging from (300-1000) nm, are shown in Figure 2. As the current and thin-film thickness are increased, the absorption increases; this result accords with [14]. Figure 3 displays the transmittance spectra for the same wavelength range (300-1000) nm. We can see that the transmittance shows the opposite behavior to the optical absorption: it decreases at short wavelengths until it reaches its lowest point, then begins to rise as the wavelength is increased, and it also increases as the thickness and current are increased. Calculation of reflectivity. The reflectivity spectrum is the ratio of the intensity of the reflected radiation to the intensity of the incident radiation, and it can be found from the absorption spectrum and the optical transmittance by using the following equation [15]: R = 1 − A − T (2). Figure 4 displays the film reflectivity as a function of wavelength. We can see that the reflectivity values climb until they reach their peak value at a short wavelength (300 nm) and then begin to decrease as the wavelength increases towards long wavelengths. Calculation of the absorption coefficient (α). The absorption coefficient of the ALTiO2 films was calculated from the equation [16]: α = 2.303A/t, where A is the absorption and t the thin-film thickness. Figure 5 shows the absorption coefficient (α) as a function of wavelength for samples of varying current and thickness. The absorption coefficient rises at short wavelengths before reaching its maximum peak at 400 nm, then declines as the wavelength is increased. Also, note that the absorption coefficient decreases steadily as the thin-film thickness and current are increased in certain films. The graphical relationship between (αhν)² and (hν) was drawn by extending the best straight line until its extension crosses the photon-energy (hν) axis. The energy gap (E_g) value was determined from the point of intersection with the x-axis, at which (αhν)² = 0. Figure 6 shows the relation between (αhν)² and hν for the ALTiO2 thin films prepared on glass substrates; from the figure, the energy gap was determined from the intercept of the linear part of the curve with the x-axis and found to be (3.2, 2.7, 1.9) eV at the different currents (170, 180, 200) mA, respectively, with an allowed direct transition.
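As a minimal numerical illustration of this extrapolation, the following Python sketch fits the linear region of a Tauc plot, (αhν)² versus hν, and takes the x-axis intercept where (αhν)² = 0 as the gap; the data points are synthetic, not the measured spectra of this work.

```python
# Hedged sketch: optical band gap from the x-axis intercept of the Tauc plot.
import numpy as np

hv   = np.array([2.6, 2.8, 3.0, 3.2, 3.4])            # photon energy (eV)
ahv2 = np.array([0.0, 0.9, 1.8, 2.7, 3.6]) * 1e9      # (alpha*h*nu)^2, linear region

slope, intercept = np.polyfit(hv, ahv2, 1)            # best straight line
Eg = -intercept / slope                               # where (alpha*h*nu)^2 = 0
print(f"estimated band gap Eg ~ {Eg:.2f} eV")
```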
The value of the optical energy gap was found to decrease with increasing current and thickness. As a result of the formation of new levels in the bandgap, the transfer of electrons from the valence band through these local levels into the conduction band is facilitated, resulting in an increase in conductivity and a decrease in the bandgap. Table 1 summarizes the optical band gap values for the ALTiO2 thin films. The electrical properties of the ALTiO2 thin metal films of various thicknesses were determined using a two-point probe directly attached to an avometer, where the resistance between the two probes was measured and the electrical conductivity was estimated using equation (5) [17], where R is the resistance (Ω), L the distance between the electrodes (cm), ρ the electrical resistivity, and σ the electrical conductivity. The ALTiO2 film resistance is dependent on the film thickness, as seen in Figure 7: as the film thickness increases to 300 nm, the ALTiO2 thin-film resistivity decreases, with the decreasing tendency due to the inverse linear dependence of the film resistivity on grain size. This result accords with [18]. The relationship of the electrical resistivity with the thickness of the prepared films is shown in Figure 8, and Figure 9 shows the electrical conductivity as a function of increasing thickness for the ALTiO2 films deposited on a glass substrate. Conclusion ALTiO2 thin films were prepared at different currents by DC magnetron sputtering, and their optical and electrical properties were characterized. The optical properties exhibited absorption spectra with different values according to the thickness of the deposited thin films; the absorption coefficient increased with increasing wavelength, with the presence of absorption peaks above 300 nm, and the band gap decreased with increasing current and thickness from 3.2 to 1.9 eV. The electrical findings indicate that as the thickness of the thin films (t) increases, the resistance decreases and the electrical conductivity increases.
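As a minimal illustration of the two-point-probe evaluation described above, here is a hedged Python sketch; since the paper's equation (5) is not reproduced here, the sketch assumes the standard relation ρ = R·A/L with cross-sectional area A = w·t, and all numbers are illustrative.

```python
# Hedged sketch: resistivity and conductivity from a two-point-probe resistance
# reading, assuming rho = R * A / L with A = w * t (an assumption; the paper's
# equation (5) did not survive extraction).
R = 1.5e3        # measured resistance between the probes (ohm), illustrative
L = 1.0          # distance between the electrodes (cm)
w = 1.0          # film width (cm), illustrative
t = 340e-7       # film thickness: 340 nm expressed in cm

A = w * t                    # cross-sectional area (cm^2)
rho = R * A / L              # electrical resistivity (ohm*cm)
sigma = 1.0 / rho            # electrical conductivity ((ohm*cm)^-1)
print(f"rho = {rho:.3e} ohm*cm, sigma = {sigma:.3e} (ohm*cm)^-1")
```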
2021-09-22T20:07:55.924Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "73409533e9d0fc5763bfe455bf3b3a4d4b2105fb", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1999/1/012043/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "73409533e9d0fc5763bfe455bf3b3a4d4b2105fb", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
261538563
pes2o/s2orc
v3-fos-license
Financial distress in the banking industry: A bibliometric synthesis and exploration Abstract Recent financial upheavals and economic downturns have triggered and sped up research on financial distress in general and in the banking industry in particular. The current review attempts to gauge and map the performance trends and intellectual knowledge structure of financial distress research in the banking industry. A hybrid approach was adopted to inventorize, analyse, and evaluate the financial distress literature pertaining to the banking industry (1982–2022); the authors apply bibliometric analysis to identify critical financial distress articles and journals, followed by the identification of central financial distress research themes through co-citation analysis. We found that financial distress researchers have published 11 papers each year since 1982, and the number of citations received by the research domain has significantly risen, adding more importance to this research domain. Further, the analysis helps delineate four thematic knowledge clusters, throwing light on the nomological network of the field and giving a bird's-eye view of its intellectual structure. Since this review utilised data from a single database, i.e. Scopus, any shortcomings associated with the database would undoubtedly impact the results. Introduction Accounting and finance researchers have studied financial distress prediction for the last five decades (Shi & Li, 2019). Numerous academic works have focused on corporate bankruptcy because of the potentially devastating effects it can have on creditors and economies. Many financial scandals have had a serious impact on all economies (Al-Absy & Ntim, 2020; Al-Absy et al., 2020). Foster (1986) defined financial distress as a "serious liquidation problem which is unable to be resolved without a large-scale restructuring of the operation or structure of economic entities" (Sun et al., 2014, p.
42). Financial distress indicates approaching bankruptcy, and its prediction can alert investors and creditors to probable losses (Zhou et al., 2022). The initial journey was started by Beaver (1966) and then extended by Altman (1968) for bankruptcy prediction using financial ratios. After the global financial crisis that resulted in the recession of 2008-09, there is a greater need for research on bankruptcy prediction and corporate financial distress. In the preliminary phases of financial distress, a company's long-term debt exceeds its cash flow, making it unable to meet its financial obligations (Whitaker, 1999). Financial distress is the incapacity of a corporation to make its payments when they become due. In a broad sense, it refers to the inability to pay obligations (such as debt) when they are due (Beaver et al., 2010). In reality, many struggling businesses may come back from the edge of financial distress and resume normal operations. Also, 52% of sampled firms in poor financial states undergo management turnover (Gilson, 1989) and extensive alterations to the leadership and structure of the organisation (Wruck, 1990). Self-efficacy helps prevent firms from entering financial distress (Kuhnen & Melzer, 2018). Debt-restructured companies that continue to operate with high leverage, low investment, and poor performance may frequently enter financial distress again (Kahl, 2002). A growing body of evidence suggests that mergers and acquisitions can help financially distressed firms and also reduce their chances of bankruptcy (Zhang, 2022); in the United States, total investment in mergers and acquisitions was about 5.8 trillion dollars between 2010 and 2018, and out of this total, 23% of the firms involved were distressed. Our research has revealed the five most relevant reviews of financial distress (Altman et al., 2017; Habib et al., 2020; Keasey & Watson, 1991; Mallinguh & Zéman, 2020; Sun et al., 2014). Altman et al. (2017), in their review, discussed the utility of the Z-Score model in an international context, particularly for banks; Sun et al. (2014) reviewed the domain of financial distress prediction; Keasey and Watson (1991) discuss the managerial applications and limitations of adopting financial distress prediction models; Habib et al. (2020) examine the determinants and consequences of financial distress; and Mallinguh and Zéman (2020) discuss financial distress prediction and mitigation strategies. Existing reviews have added to the current literature on financial distress (Altman et al., 2017; Habib et al., 2020; Yousaf et al., 2022). We believe a more thorough and systematic discussion in this growing field would be beneficial, so we endeavour to apply a statistics-based analytical technique, i.e., bibliometric analysis. This technique is superior to the systematic literature review, as bibliometric reviews are "highly efficient and objective as they leverage on the power of technology for data collection" (Chopra et al., 2021, p. 2), especially for review articles with a large corpus (i.e., high hundreds to thousands of articles).
We argue that our study is novel and timely based on specific reasons (see Table 1, which provides a bird's-eye view of the prior reviews on financial distress). First, the authors found no bibliometric review published in top-ranked journals that explores the performance and science of financial distress and the banking industry. Second, to the authors' knowledge, no review published in the finance discipline (journals ranked in the ABDC-2022 list) has used the combination of two software tools, i.e., VOSviewer and the bibliometrix R package, for carrying out the analysis. Third, the authors present a hybrid assessment of quality articles covering the breadth and depth of financial distress in the banking industry, which incorporates a review procedure (i.e., the bibliometric review procedure of Donthu et al., 2021) and a review protocol (i.e., SPAR-4-SLR, Paul et al., 2021). Fourth, we provide directions for future researchers to consider for empirical and conceptual explorations. Fifth, our research aids researchers and policymakers interested in financial distress in the banking industry. The techniques used in the study also involve specialised analysis, such as co-citation analysis, that facilitates mapping and identifying boundary lines in emerging and well-established research fields. Aligning with the above discussion and following state-of-the-art bibliometric reviews (Boubaker et al., 2023; Khanra et al., 2020; Kumar et al., 2019; Rasul et al., 2022; Trinarningsih et al., 2021), we intend to address the following six research questions in the present review article:
RQ1: What are the publication and citation trends of financial distress research in the banking industry?
RQ2: Which are the most significant articles and outlets (journals) regarding financial distress in the banking industry?
RQ3: Which authors, institutions, and nations have contributed the most to the academic field of financial distress in the banking industry?
RQ4: What are the descriptive trends (through several metrics like authors, words, etc.) existing in the academic field of financial distress in the banking industry?
RQ5: What are the most important themes (intellectual structure) regarding financial distress in the banking industry?
RQ6: What are the future avenues for financial distress research within the banking industry?
The following section principally outlines the background of financial distress research and its existing trajectories. Further, the review methodology is elucidated and elaborated, including the protocol and procedures employed in the current review. Next, we discuss the existing trends in the field by using performance mapping techniques and present the intellectual structure through knowledge clusters (themes). The later section discusses the research gaps and potential future avenues. We conclude the review by describing the limitations and putting forward the final word.
Financial distress Financial distress is a scenario where the current cash flow is insufficient to pay for immediate obligations. These include non-payment of debt to contractors and workers, with missed principal or interest payments (Wruck, 1990). Definitions of financial distress also refer to a company's inability to pay for its obligations and a falling market value between consecutive periods (Pindado et al., 2008). Companies encounter financial distress due to poor operating results or external forces (Platt & Platt, 2008). Corporate financial distress can be conceptualised through the terms default, insolvency, bankruptcy, and failure (Jackson & Lee, 1963; Rajasekar et al., 2014). A default can be legal or technical; a technical default means a company broke a contract condition, whereas a legal default is when loan payments are missed. Insolvency is the inability to pay one's debts due to liquidity issues. Filing for bankruptcy signifies a company's financial distress. Failure occurs when income is not enough to cover costs (Altman & Hotchkiss, 2005). Difficulty in paying dividends and firms not fulfilling their financial responsibilities are initial signs of financial distress (Baldwin & Mason, 1983). In the early stages of financial distress, long-term debt is more than the firm's cash flow; therefore, firms cannot perform their financial responsibilities (Whitaker, 1999). Financial distress can cause a shakeup in the corporate hierarchy and even result in the departure of top executives (Wruck, 1990). Financial distress prediction models are accounting-based and market-based. Accounting-based models include (Altman, 1968; Beaver, 1966; Dietrich, 1984; Ohlson, 1980), and market-based models include (Bharath & Shumway, 2011; Black & Scholes, 1973; Hillegeist et al., 2004; Merton, 1974). The Z-Score model used the linear discriminant analysis method to predict financial distress. The O-Score model changed the analysis to logit analysis, and Zmijewski (1984) used probit in his model (Pindado et al., 2008). Market-based and accounting models are very close in predictive accuracy, but the accounting model produces economic benefits over the market-based one (Agarwal & Taffler, 2008). One of the published works, by Hillegeist et al. (2004), compares the Z-Score (Altman, 1968) and O-Score (Ohlson, 1980) with BSM-Prob (Black & Scholes, 1973) and claims that the BSM-Prob model provides more information than either of the accounting-based models. Most researchers have used accounting-based models to predict financial distress, whereas market-based models are rarely used (Habib et al., 2020) (see Table 2, which provides a few seminal conceptualisations of financial distress).
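To make the accounting-based approach concrete, here is a minimal Python sketch of Altman's (1968) original five-ratio Z-Score, with its conventional distress/grey/safe cutoffs; the ratio values in the example are illustrative only.

```python
# Hedged sketch: Altman's (1968) Z-Score for public manufacturing firms.
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Weighted sum of five accounting ratios: working capital/TA,
    retained earnings/TA, EBIT/TA, market value of equity/total
    liabilities, and sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

z = altman_z(wc_ta=0.12, re_ta=0.18, ebit_ta=0.09, mve_tl=0.85, sales_ta=1.10)
zone = "distress" if z < 1.81 else ("grey" if z < 2.99 else "safe")
print(f"Z = {z:.2f} ({zone} zone)")
```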
Costs related to financial distress are direct and indirect. Administrative and legal costs are direct and easily quantifiable (Almeida & Philippon, 2008; Bhagat et al., 1994; Chen & Merville, 1999; Pindado & Rodrigues, 2005). Indirect costs are due to the unpredictability of explicit and implicit warranties and include lost sales when customers switch to competing products (Altman, 1984; Chen & Merville, 1999; Hoshi et al., 1990). During a period of economic distress, overleveraged firms lose more market share to their modestly financed competitors (Opler & Titman, 1994). While addressing the issue of financial distress, the company faces a cash crisis, and during that period the company is unable to fulfil its obligations (Pindado et al., 2008). Non-distressed firms did not hold more cash from cash flow than financially distressed firms (Almeida et al., 2004). Financial distress is not sudden for a company; there are many symptoms and signs before it faces it (Elloumi & Gueyié, 2001; Pindado et al., 2008). The predominant source of financial distress is leverage, and large firms have low chances of default (Pham Vo Ninh et al., 2018). Due to uncertain payoffs, R&D investments increase financial distress, and this relationship strengthens during economic downturns for constrained firms (Zhang, 2014). Currency hedging can reduce the chances of financial distress to a certain extent (Magee, 2013). Companies with good employee relations have lower chances of experiencing financial distress in the future (Kane et al., 2005). A positive amount of CSR activity reduces the likelihood of financial distress, and a negative relationship is found between financial distress and positive CSR performance in the latter stages of firm life (Al-Hadi et al., 2019). Chang et al. (2013) also find a negative relationship between distress risk and CSR score. Firms performing CSR practices are associated with lower distress and default risk (Boubaker et al., 2020). Managers in distressed firms use more earnings management techniques (Habib et al., 2013). Agarwal et al. (2007) find different practices among Japanese banks subject to varying economic conditions. Financial statements may partially reflect the increased risk of bank distress (Ryan, 2016).
Banking industry A bank is an assemblage of traders whose combination forms an institution that provides credit and deposit facilities and safeguards the borrower's actions (Boot & Thakor, 1997). Various financial institutions are involved in different activities, like deposit taking and lending, and the most common forms of institutions are the central bank, the commercial bank, the retail bank, etc. (Laplante & Kshetri, 2021). Bank performance can be measured by productivity, efficiency, profitability, and competencies (Alam et al., 2021). A bank is inefficient if it is not producing the desired output with the available input (Bhattacharyya & Pal, 2013). Banking industry profitability can be measured by return on assets (Alam et al., 2021). Profitability affects financial stability, which boosts growth and also helps influence gross domestic product (GDP) (Flannery & Rangan, 2008). The bank is a credit and liquidity provider to various institutions and individuals and can influence other economic actors (Zhelyazkova & Kitanov, 2015). Banks are service-sector organisations; their socioeconomic impact is significant and varied, and they have various operations, like debt management, cash flow management, and other services (Othman & Asutay, 2018). A nation's growth depends on bank performance and financial activities (Hou & Cheng, 2017). The banking sector is prominent in the financial industry because it attracts many developing nations (Alam et al., 2021). Bank return on equity and return on assets both increased as a result of the creation of liquidity, effective asset management, asset quality, and bank size (El-Chaarani et al., 2023). El-Chaarani's (2023) findings show that financial indicators in Islamic banks fell precipitously during the pandemic; liquidity risk, bank size, the managerial efficiency ratio, and oil price shocks were the main factors influencing Islamic banks' profitability prior to the emergence of COVID-19.
Financial distress in the banking industry A bank's financial distress depends on the board structure, ownership, and other top positions in the bank and the different persons holding these positions (Simpson & Gleason, 1999). Based on a recent study, local private and public banks can better weather financial distress due to small and long-term loans from the foreign banks that provide services to them (Ullah et al., 2021). Respondents with high perceived financial distress had a positive attitude towards banking, but it negatively affected their service patronage (Babalola, 2009). Financial distress shoots up when extra care is given to investors. The distribution of work on the bank's board between the chairman and the CEO should be clearly defined, and they should work on accommodating risk (Baklouti et al., 2016). The failure of banks during the last financial crisis was due to weak corporate governance (OECD report, 2009); this also creates less interest in the financial market, which leads to financial distress (Kirkpatrick, 2009). Bank size may impact the likelihood of financial distress, and large banks positively affect financial distress (Barros et al., 2007). The occurrence of financial distress depends on the capital structure, i.e., the loan-to-asset ratio, with which it has a positive relation (Al-Saleh & Al-Kandari, 2012). Financial distress in the banking industry is also due to unequal sharing of power at the top management level, and this power imbalance is due to high percentages of shareholding (Muranda, 2006). Factors in the occurrence of bank default are macroeconomic, like default risk, credit risk, and the market risk of a bank's return (Porath, 2006). In some studies, macroeconomic information does not impact financial distress (Zaki et al., 2011). The financial health of banks depends on a few indicators, like operating efficiency, loan management, and capital adequacy; these measures decide whether a firm falls into financial distress (Rahman et al., 2004). Important factors in the banking industry which affect financial distress are ownership, size, and the merger and acquisition process (Wanke et al., 2015). High capital requirements and bank size do not reduce financial distress, but increased profitability and income diversification can (Koju et al., 2018). One study suggests that conventional banking in Indonesia is facing financial distress related to the size of the board of directors, the return on assets, and the capital adequacy ratio (Hatta et al., 2021). For financial stability in Brazilian banks, balance-sheet indicators are essential early warning signs of financial distress (Rosa & Gartner, 2018). Paule-Vianez et al. (2020) suggested that short-term financial distress can be predicted more accurately. El-Chaarani and Ragab's (2018) study shows that political crises and economic recession have a negative impact on the performance of Islamic banks.
Systematic review A systematic literature review (hereafter SLR) as a methodology (Snyder, 2019) has a significant advantage over traditional literature reviews, as systematic reviews ensure coherence, generativity, and reproducibility; an SLR not only brings transparency but minimises biases, as SLRs stand on the firm structural ground of protocols and procedures (Fan et al., 2022; Harari et al., 2020). Various forms of systematic review, namely domain-based, theory-based, and method-based reviews, are available to provide an overview of the literature on specific topics (Palmatier et al., 2018; Snyder, 2019), while more refined forms have also been added, like meta-analysis, mixed reviews, conceptual reviews, framework-based reviews, bibliometric reviews, and structured theme-based reviews (Paul & Criado, 2020). We employed the domain-based review approach to gain a holistic overview of the scientific contribution related to the research domain (Palmatier et al., 2018). Bibliometric analysis is essential because it highlights quantitative data and analytical tendencies in the research field (Ding et al., 2022; Goyal & Kumar, 2021). Compared to traditional literature review methods, bibliometric analysis yields more objective and quantitatively backed findings (Ramos-Rodríguez & Ruíz-Navarro, 2004). Bibliometric techniques deal with impacts (e.g., citations), locations (e.g., countries), and stakeholders (e.g., institutions) (Donthu et al., 2021). Pritchard (1969) introduced the term bibliometrics in the literature, which paved the way for future research. Bibliometric analysis includes performance analysis and science mapping (Baier-Fuentes et al., 2019). Performance analysis uses citations and publications to compare articles, authors, countries, and journals (Baier-Fuentes et al., 2019; Donthu et al., 2021), whereas in science mapping, relationships between the different parts are highlighted (e.g., co-authorship, co-word analysis, bibliographic coupling, citation analysis, co-citation analysis) (Donthu et al., 2021). Descriptive analysis is also essential to bibliometric investigations (Donthu et al., 2020). Bibliometric methods are utilised in mapping the field of study, and then the central themes are identified to help get an essence of the domain's knowledge structure (Baker et al., 2020). In a sense, quantitative analysis of the trends and knowledge structure of the relevant research domain is possible through the application of bibliometric techniques (Donthu et al., 2021). Review protocol (SPAR-4-SLR) This study adopts the "Scientific Procedure and Rationales for Systematic Literature Review" (SPAR-4-SLR) to provide a structural and systematic outlook to our review (Paul et al., 2021); it includes the following stages: assembling, arranging, and assessing (see Figure 1). Assembling: This stage of SPAR-4-SLR includes the sub-stages of identification and acquisition.
3.2.1.1. Identification. This article designates the area of financial distress in banking as the primary focus (i.e., the domain). The research questions will help to recognise the domain's characteristics, relationships, agendas, and bibliometrics (Harju, 2022). For the collection of data, the Scopus database is used. The Scopus database was suitable for our research, in contrast to WOS (Web of Science), its primary competitor, as it covers a broader range of topics and groups (Kumar et al., 2022). In addition, it helps in identifying journals specialising in review articles (Paul et al., 2021). The Scopus database includes subjects like art, science, technology, and the humanities, and it has the most comprehensive citation and abstract coverage (Fahimnia et al., 2015). Some of its features include the complete availability of bibliometric data, quality information, and stringent content selection procedures (Kumar et al., 2021). Acquisition. Scopus was used for the search and data acquisition due to the breadth of the outcomes provided (Paul et al., 2021). Data was collected from Scopus, and the search period was kept from the field's inception until July 2022, to (1) incorporate early and seminal works in the domain and (2) not miss any relevant study in the domain. We used Boolean operators (Ramos et al., 2021) and truncation techniques (Frerichs & Teichert, 2021) to search the articles on Scopus. Our search keywords were "financial distress", "financial distress prediction", "financial distress strategies", "Bank*", "Bank* industry", and "Bank* sector". This filtration and keyword strategy resulted in 1250 articles. Arranging: This stage of SPAR-4-SLR includes the sub-stages of organisation and purification. Purification. We excluded non-English articles, and this filtration follows the suggestions of Donthu et al. (2021). The document types considered were reviews and articles, and we considered articles from the subject areas of 'Economics, Econometrics and Finance' and 'Business, Management, and Accounting'. Applying these filters resulted in a corpus of 850 articles. Assessing: This stage of SPAR-4-SLR includes the sub-stages of evaluation and reporting. Evaluation. We conducted performance analysis, descriptive analysis, and science mapping to evaluate the corpus of the literature on financial distress in the banking industry. In performance analysis, we check what value is added by research components, through metrics related to citations, publications, and the most influential contributors (Noyons et al., 1999). We also conducted descriptive analysis and analysed specific patterns related to authors and publications concerning the domain under review. In contrast, relations between various parts are highlighted with the help of science mapping techniques, such as co-authorship, co-citation, authorship, and citation analysis. Reporting. In this section, we present the review's findings using words, tables, and figures, while at the end, we point out the article's limitations and cite supporting articles for evidence. Publication and citation trends (RQ1) After retrieving the bib file, the first step was gathering preliminary data on the research topic. The compilation included 850 documents written by 1718 authors and published between 1982 and 2022, a span of approximately 39.8 years. The annual growth rate for article publications is 11.17%, and there are 366 journals in total comprising these articles. The average number of citations received per article is 23.95, and the total number of citations received is 19,854 (see Table 3).
Most influential articles and outlets (journals) on financial distress in the banking industry (RQ2) Article influence was evaluated using citation counts (Chabowski et al., 2013). Articles with the most citations in the Scopus database were considered the most influential. Table 4 shows the highly cited papers, and most of these papers are discussed in other sections. The most significant article, "The role of banks in reducing the costs of financial distress in Japan," authored by Hoshi et al. (1990), has gained a total of 574 citations, followed by "A Theory of Bank Capital," written by Diamond and Rajan (2000), with 530 citations. Scopus-based citations of leading journals in the field of financial distress in the banking industry are presented in Table 5. The Journal of Financial Economics ranked first with 3505 citations, whereas the Journal of Banking and Finance ranked second with 1847 citations. The Journal of Finance and the Journal of Corporate Finance secured the 3rd and 4th positions with citations of 1329 and 753, respectively. In terms of productivity, the Journal of Banking and Finance leads with 33 publications.
Table 4. Top 10 articles with the highest citations (TC: total citations):
The role of banks in reducing the costs of financial distress in Japan (Hoshi et al., 1990) — 574
A Theory of Bank Capital (Diamond & Rajan, 2000) — 530
Management turnover and financial distress (Gilson, 1989) — 505
Tracing the impact of bank liquidity shocks: Evidence from an emerging market (Khwaja & Mian, 2008) — 471
Anatomy of Financial Distress: An Examination of Junk-Bond Issuers (Paul et al., 1994) — 410
Corporate governance and board effectiveness
The ability of banks to lend to informationally opaque small businesses (Berger et al., 2001) — 395
Performance Feedback, Slack, and the Timing of Acquisitions (Iyer & Miller, 2008) — 319
Affiliated Firms and Financial Support: Evidence from Indian Business Groups (Gopalan et al., 2006) — 295
Inter-firm linkages and the wealth effects of financial distress along the supply chain (Hertzel et al., 2008) — 291
Most prolific authors, institutions, and nations on financial distress in the banking industry (RQ3) Table 6 shows the most prolific authors by citations: David Scharfstein, affiliated with the Massachusetts Institute of Technology (USA), has gained the highest number, with a total of 971 citations from two publications, followed by Takeo Hoshi, affiliated with the University of California (USA), with 566 citations from one article. Table 6 also shows the most prolific authors in terms of publications. Table 7 shows the institutions with the most publications; Boston College and the University of Chicago top the list with 11 publications each, while New York University and the University of California rank second with 10 publications each. The top 10 countries with the highest publication numbers and citations are shown in Table 8. The United States of America, with 507 publications, garnered 8795 citations in total, followed by the United Kingdom, with 133 publications garnering 1502 citations; these two countries are the leading contributors. The participation of researchers from both developing and developed countries in financial distress research indicates its global significance.
Annual scientific production The annual scientific production of research publications related to financial distress in the banking industry began in 1982; growth in the literature base was observed but remained slow until 2010, and since then there has been an exponential rise with an uptrend. The probable reason behind this surge could be the global financial crisis of 2008, which caused a lot of chaos and fear in the banking industry and may have caught the attention of finance researchers. Further, the recent rise in the literature on financial distress may be attributed to the COVID-19 pandemic, which also caused fear in markets and industries worldwide (Goyal et al., 2021). The highest number of published articles, 85, was recorded last year, i.e., 2021 (see Figure 2). The annual growth rate of financial distress in the banking industry, as indicated by the BiblioShiny software, is 87.89%. Authors' production over time Figure 3 shows the writers' active chronology across the years regarding the number of articles. The bubbles and their sizes correspond to the number of articles, while the line shows an author's timeline. The number of citations every year is proportional to the colour intensity. Since 2010, there has been a lot of activity, and 2021 has been the most fruitful year. From 2017 to 2021, Carmona P., Climent F., and Momparler A. were the most successful authors. All of the top 10 authors' writings about financial distress in the banking industry are included in this collection. Interpretations of these data make it possible to identify the writers and researchers in the field who have published recently. Furthermore, related publications can be used as reference materials by upcoming researchers. Country-wise scientific production Figure 4 shows the outcomes of the contributions made by various nations to the study of financial distress in the banking sector. The United States, United Kingdom, Italy, China, and France are the top five nations making the most significant contributions to this recently burgeoning field of research. Word cloud in financial distress in the banking industry Figure 5 depicts a word cloud of the authors' keywords. Potential and significant research on financial distress in the banking industry revolves around bankruptcy, forecasting, financial markets, the capital market, etc. Altman's model, signalling theory, and information asymmetry are the primary theoretical lenses that help understand financial distress-related issues. Financial distress research has also been conducted in countries from the European region and far-east Asian nations like Japan and South Korea. "Banking" was the most used author keyword, with a frequency of 25, followed by terms like "financial crisis" and "finance", with 23 and 19 occurrences, respectively. Word dynamics in financial distress in the banking industry The rise and fall in word usage by authors between 1982 and 2022 regarding financial distress in the banking industry research are depicted in the word dynamics graph, based on author keyword occurrences each year. The growth of the most frequently used words is shown in Figure 7. Since the domain's inception, banking, bankruptcy, and debt have consistently grown as the most commonly used words. At the same time, there has been a drastic rise in terms like financial market, financial systems, and forecasting after 2015.
Co-occurrence (co-word) analysis Keyword co-occurrence involves the analysis of author keywords, which may reveal specific and valuable patterns in the evidence available for disseminating knowledge (Kumar et al., 2020). Strozzi et al. (2017) verified that the author's chosen keywords are a highly illustrative indicator of the paper's subject matter or the article's relevance to the research question. The co-occurrence of author keywords may indicate that the documents share a common research theme, which may indicate a general pattern in the field's research (Ding et al., 2001). As a result, we have also utilised author keyword analysis to determine the direction of research on financial distress in the banking industry. First, we extracted the author keywords from 829 relevant articles; then, using the VOSviewer software, we constructed an author keyword network. For this analysis, we required a minimum of five co-occurrences per keyword. The 829 papers revealed a co-occurrence network for author-generated keywords that occurred more than five times each. Out of 1813 keywords, 71 met our criteria. The keywords "financial distress" and "bankruptcy" were the most frequently co-occurring, appearing 257 and 152 times, respectively. The author keyword co-occurrence network is depicted in Figure 8. The map shows that the largest node on the network is "financial distress", followed by "bankruptcy". Keywords with the same colour belong to the same group. The keyword network analysis yielded numerous results. First, it demonstrated that the concept of financial distress has been studied along related topics such as bankruptcy, financial crisis, corporate distress, credit risk, bankruptcy prediction, financial distress risk, and systematic risk. Second, artificial neural networks, bankruptcy prediction, financial distress prediction, financial ratios, the Z-score, and logistic regression are used to measure financial distress. Third, capital structure, insolvency, liquidation, risk management, reorganisation, and corporate governance form distinct groups that represent the evolution of the study of this diverse field of financial distress. SMEs, earnings management, covid-19, and survival analysis are also found in the same group. These examples highlight the wide variety of topics covered by studies of financial distress in the banking industry. Co-citation analysis To create a scientific road map, scientists use co-citation analysis, which operates under the premise that highly cited works are conceptually related (Hjørland, 2013). Through references or co-citations, semantic or cognitive associations between articles reveal the theoretical underpinnings of a scientific field (Goodell et al., 2021). A co-citation analysis determines how often two articles are cited together (Small, 1973). Co-cited papers are two papers cited by a third; the more often they appear together in citations, the closer they are. The co-citation analysis clusters consist of co-cited papers with similar concepts and methodologies (Small, 1980). Co-citation analysis focuses only on highly cited publications, leaving out recent or niche publications (Donthu et al., 2021). Co-citation analysis helps business scholars find seminal publications and knowledge foundations. Articles with at least 20 citations were used to create the co-citation network, and 94 articles met the 20-citation criterion out of 32,723 cited references (see Figure 9 for the co-citation network).
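As a minimal illustration of the counting that underlies the keyword co-occurrence network described above, the following Python sketch tallies author-keyword pairs across papers; the keyword lists are illustrative, not the review's actual corpus.

```python
# Hedged sketch: raw author-keyword co-occurrence counts (the statistic that
# VOSviewer visualises as link strength in the network).
from collections import Counter
from itertools import combinations

papers = [
    ["financial distress", "bankruptcy", "banking"],
    ["financial distress", "credit risk"],
    ["bankruptcy", "banking", "financial crisis"],
]

pairs = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        pairs[(a, b)] += 1

for (a, b), n in pairs.most_common(5):
    print(f"{a} -- {b}: {n}")
```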
Due to the sheer volume of cited papers in a review, it is necessary to set a threshold: financial distress articles are included only when cited a minimum number of times by other financial distress articles (here, 20 citations). This criterion is also congruent with the method adopted by Hollebeek et al. (2022).

Cluster 1: FD prediction/models of FD (red in the figure)

The first cluster, FD prediction/models of FD (shown in red), included 30 articles. Several articles in this collection create FD prediction models, both accounting- and market-based. Accounting-based models (Altman, 1968; Beaver, 1966; Ohlson, 1980) use financial ratios to predict financial distress, while market-based models (Bharath & Shumway, 2011; Black & Scholes, 1973; Hillegeist et al., 2004; Merton, 1974) use market-based variables for the same purpose. Although the forecast accuracy of market-based and accounting models is relatively similar, the accounting model generates more significant economic benefits (Agarwal & Taffler, 2008). To predict financial distress, most academicians have employed accounting-based models; in contrast, market-based models are rarely used (Habib et al., 2020).
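Since Altman's (1968) accounting-based model anchors this cluster, a worked version of the original Z-score may be useful. The five-ratio formula and the conventional cut-offs below are the classical published ones; the input figures are invented for illustration.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    """Classical Altman (1968) Z-score from five financial ratios."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Hypothetical firm; Z < 1.81 is conventionally read as the distress zone,
# Z > 2.99 as the safe zone.
z = altman_z(50, 120, 80, 400, 300, 900, 1000)
print(f"Z = {z:.2f}")
```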
Cluster 2: Financial distress and bankruptcy (green in the figure)

The second cluster, titled banking industry financial distress and bankruptcy (shown in green in the figure), included 25 articles. Most FD researchers extended their research to examine bankruptcy and economic crisis in the banking industry (Berger et al., 2001; Cao, 2021; Diamond & Rajan, 2000; Kroszner & Strahan, 2001). Bank distress does not appear to impact small borrowers more than large ones; however, small firms may react to bank distress by borrowing from numerous banks (Berger et al., 2001). A recent study in China shows that the banking sector's distress depends on real estate losses, suggesting that a rise in real estate losses would increase banking sector losses (Cao, 2021). Firms with well-built ties to banks perform better in distress than firms with weak ties (Hoshi et al., 1990). Banks take on financially distressed companies with various approaches, some quite distinct depending on the organisational structure and lending technologies, while bank size appears to be much less relevant (Micucci & Rossi, 2016). Although more bank capital lowers the likelihood of financial distress, it also slows the rate at which new liquidity can be created. The optimal capital structure of a bank balances the impact of liquidity creation against the cost of bank distress (Diamond & Rajan, 2000).

Cluster 3: Resolution of financial distress/rejuvenating from bankruptcy (yellow in the figure)

The third cluster consists of 20 articles on the resolution of financial distress and recovery from bankruptcy. Significant studies in this cluster are Hoshi et al. (1990); Paul et al. (1994); DeAngelo et al. (1995); Claessens et al. (1999); Bolton and Freixas (2000); Baird and Morrison (2001); Couwenberg and Lubben (2015); Liu et al. (2021); Meuleman et al. (2022); and Welch (1997). Modelling the shutdown decision during bankruptcy as a real option offers new justification for being dubious about the merits of Chapter 11 (Baird & Morrison, 2001). Companies use the bank as a source of investment when facing financial distress, and the banks take responsibility for the cost of capital (Bolton & Freixas, 2000). Bank-owned and group-affiliated firms are less likely to file for bankruptcy after controlling for certain firm characteristics (Claessens et al., 1999). Capital expenditure reduction, mergers, asset sales, and private and public debt restructuring are a few ways to avoid bankruptcy (Paul et al., 1994). The lower the cost of financial distress, the lower the conflict among creditors (Hoshi et al., 1990). Private equity firms engaged in fundraising activities during financial distress are less likely to declare bankruptcy in the following years (Meuleman et al., 2022).

Cluster 4: Post-financial distress/bankruptcy conditions of a firm (blue in the figure)

The fourth cluster comprises 19 articles devoted to post-financial distress and bankruptcy conditions. This cluster contains several significant articles, such as Kaplan (1994); DeAngelo et al. (2002); Asvanunt et al. (2011); Blazy et al. (2011); Zhang (2022); and Zhou et al. (2022). Firms facing financial distress are more likely to make acquisitions to diversify their product offerings and reduce their reliance on a single revenue stream, owing to the pressure to meet their debt obligations (Zhang, 2022). Financial distress may occur any number of times in the same firm. Accounting and market-based variables are not particularly useful for forecasting future distress; still, factors like recovery time, restructuring events, and their interaction with accounting and macroeconomic factors substantially impact recurrence risk (Zhou et al., 2022). When default does occur during financial distress, the odds are in favour of the continuation of the firm, and the global recovery rate is primarily determined by the firm's ex-ante characteristics at the time financial distress is triggered (Blazy et al., 2011). When all immediate and long-term expenses related to bankruptcy and financial distress are factored into the post-bankruptcy accounting, the costs associated with bankruptcy appear minimal (Kaplan, 1994).

Future research directions (RQ6)

Future research is encouraged to further evaluate the efficacy of various artificial intelligence and machine learning algorithms for predicting financial distress leading to bankruptcy. Our findings also suggest that the effectiveness of several accounting-based and market-based variables is significantly influenced by any litigation and restructuring events that distressed firms may have experienced in the past. This insight offers future researchers a fresh perspective from which to investigate the field of financial distress. Future research could include an automatic process that searches articles based on meta keywords and co-occurrences of keywords from literature databases. Importantly, while research in the field of financial distress has largely been conducted in developed countries, there is still a dearth of literature and research from developing nations. Future researchers may also study the impact of the COVID-19 pandemic in causing financial distress in the banking industry.
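The direction above concerning machine learning classifiers can be illustrated with a minimal supervised-learning sketch. Everything here is illustrative: the features, labels, and data are synthetic, and scikit-learn's LogisticRegression is just one of the algorithms the literature applies to distress prediction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: rows are firms, columns are financial ratios
# (e.g., leverage, profitability, liquidity); labels mark distressed firms.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```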
Further, not only are the financial and macroeconomic circumstances at the time of the crisis turnaround connected to the probability of a recurrence of distress, but changing operational conditions and the broader economic environment are also directly tied to this risk. As a result, a time-dependent model could be considered in future research (Zhou et al., 2022). Researchers have also suggested that future studies should incorporate endogeneity issues and competing risks when looking to mitigate the risk of distress (Yousaf et al., 2022).

Conclusion

Based on the literature on financial distress, the authors propose that research interest in financial distress (and its prediction) is increasing and will continue to increase in the future. Over the past half-century, researchers worldwide have focused on predicting financial distress. Bibliometrics has a leg up on competing methods because it can collect seemingly objective data with minimal effort from the researchers (Farrukh et al., 2023; Zaidi & Azmi, 2022). Many scholars have tried to refine financial distress prediction models over time (Shi & Li, 2019). To contribute to the existing knowledge, this study provides a comprehensive review of the literature about financial distress published over the last 40 years using different bibliometric techniques. As part of this investigation, we answered six research questions by providing a snapshot of the state of the art in the field of financial distress research. This study answers RQ1, on publication and citation trends in the research domain, by providing a broad overview of the current literature on financial distress in the banking industry and identifying the most significant articles, most essential journals, and relevant affiliations. RQ2 and RQ3 are addressed in sections 4.1 and 4.2, in which the top papers and journals are identified in terms of citations. RQ3 is further discussed in terms of two parameters, the highest number of publications and the highest number of citations, by author, institution, and nation, as shown in Tables 4-8. We found that David Scharfstein tops the list in terms of citations received, while Mike Wright tops the list of publications during this period. Boston College and the University of Chicago top the list of institutions, with 11 publications each. The United States of America has the highest number of publications, i.e., 502. To address RQ4, the authors conducted a descriptive analysis to find the most important words, authors, and countries. The authors also performed a co-citation analysis and identified the themes in the literature (RQ5), as shown in Figure 6. Based on the cluster analysis, four clusters emerged. Addressing RQ6, we provide a concise, non-exhaustive overview of the study's implications and potential future directions for research on financial distress in banking.
Implications for policymakers in the banking industry

This study highlighted the substantial progress in the domain of financial distress recorded by several finance journals throughout the years. It supplies the necessary information for future researchers to consider and publish. Additionally, this work is meant to direct academics working in the field of financial distress in the banking industry toward new topical areas, and to support the development of knowledge on financial distress by enabling more space for empirical and conceptual articles. This study provides a few practical implications for bank managers and policymakers by building on the work of previous academics.

First, the policy of disclosures should be chosen by managers; however, this choice should be made with prudence, since there is a nonlinear connection between transparency and disclosure and financial difficulty (Rastogi & Kanoujiya, 2022). Second, unwise disclosures may not improve the banks' capacity to remain stable. In light of the banking sector's evident competitive fragility, regulators may decide to limit competition there. The banking business is unlike other industries in which competition benefits all parties involved; hence, policymakers should not encourage competition there (Rastogi & Kanoujiya, 2022). Third, banks and financial institutions must have a financial distress model for SMEs to minimise their anticipated and unexpected losses (Ragab & Saleh, 2022). SME managers may be concerned with adopting financial distress models for corrective action planning and for regulating present operations to prevent potential financial collapse. Fourth, our research also has consequences for society at large, since increasing diversity in organisations may help protect the interests of various stakeholders by reducing the likelihood that businesses will experience financial distress (Guizani & Abdalkrim, 2023). Fifth, according to the data, the firm's FD position improves with greater promoter ownership. As a result, any unusual behaviour by managers acting in their own interests might worsen the firm's financial situation. It will not be advantageous for managers in such a circumstance; alternatively, it might result in future corporate failure (Kanoujiya et al., 2022).

Theoretical and research contributions

We followed Mukherjee et al. (2022) to incorporate and highlight the relevant contributions made by the current review. This bibliometric review contributes to the financial distress literature in myriad ways. First, we made an objective discovery of thematic knowledge clusters in the domain; these clusters help us unpack the most relevant insights and give the reader a bird's-eye view of the domain. Second, we objectively assessed and reported the impact and productivity of research in the domain through performance analysis. Third, we delineated crucial knowledge gaps that may help future researchers conduct empirical and non-empirical investigations in the domain. Fourth, this research demonstrated the necessity of seeing "financial distress" as a significant area of investigation, one that has become more relevant than ever due to recent events.
Limitations of the study

The discussion so far highlights the study's contribution, but the study has some limitations. First, the analysis only considers papers that use the phrases "financial distress," "financial distress prediction," and "financial distress strategies" somewhere in the title, abstract, or keywords, leaving out articles that use the keywords financial crisis, failure, default, and bankruptcy. Future researchers should also incorporate these keywords when conducting such research. Second, this study presents future research directions for financial distress in banking based on our knowledge of the field and the trends in the keywords used to find that knowledge. Given that the authors could not read everything published on the topic, we recommend that future research conduct an objective and systematic literature review to provide a comprehensive overview of the topic's major contextual and geographical themes utilising CAQDA (computer-assisted qualitative data analysis) methods. Finally, the analysis uses bibliographic data from 1982 to 2022. Only articles from peer-reviewed journals were included in our study; articles from conferences and books were ignored, and these can be considered in future works.

Figure 6. Top words by frequency, shown in treemap format. Author keywords, which include the words that most accurately summarise the document's content from the author's perspective, were selected to list the most often used words. For instance, the figure shows that words like banks, debt, financial markets, and forecasting are among the most frequently used terms in the domain, although they all refer to the same entity under study.

Figure 4. Country scientific production of financial distress in the banking industry. Source: Biblioshiny.

Table 1. Existing review papers relevant to the field. Source: Author's compilation.

Table 5. Top outlets (journals). The second spot is jointly held by the Journal of Corporate Finance and the Journal of Financial Economics, with 29 publications each.
Survival of the impactor during hypervelocity collisions II: An analogue for high porosity targets

We investigated how a target's porosity affects the outcome of a collision with respect to the impactor's fate. Laboratory impact experiments using peridot projectiles were performed at speeds between 0.3 and 3.0 km/s onto high-porosity water-ice (40%) and fine-grained calcium carbonate (70%) targets. We report that the amount of implanted material in the target body increases with increasing target porosity, while the size frequency distribution of the projectile's ejecta fragments becomes steeper. A supplementary Raman study showed no change in the Raman spectra of the recovered olivine projectile fragments, indicating minimal physical change.

INTRODUCTION

The discovery of multi-lithology meteorites, such as Almahata Sitta (from the asteroid 2008 TC3; Jenniskens et al. 2009; Bischoff et al. 2010) and Benesov (Spurný et al. 2014), and the identification of exogenic material on asteroids, e.g. on Vesta (Reddy et al. 2012; Palomba et al. 2014), Itokawa (Fujiwara et al. 2006; Hirata & Ishiguro 2011) and Lutetia (Belskaya et al. 2010; Barucci et al. 2012; Schröder et al. 2015), raise fundamental questions: what is the possibility of forming these objects by collisions between bodies of different composition? Are asteroids with mixed mineralogies more abundant than previously thought? The formation mechanism(s) for these bodies, however, remain a mystery (Horstmann & Bischoff 2014). If the formation mechanism via impacts of bodies with diverse compositions is effective, the discovery of impactor residues on a target could reveal details about the impact history of the body and/or the impactor populations. Can these bodies be formed in the current asteroid belt, or were they formed 4.5 Ga ago, when small bodies were more numerous and impacts more frequent? When considering the implantation of exogenic material on an asteroid, we currently assume only very low-speed collisions (Gayon-Markt et al. 2012). This is due to the preconception that, during hypervelocity impacts (a few km/s), the projectile is totally vaporised (e.g. Ammannito et al. 2013). However, recent work by Avdellidou et al. (2016) shows that this is not necessarily the case. Additionally, Daly & Schultz (2015, 2016) fired aluminum and basalt projectiles onto pumice and highly porous water-ice targets, simulating the implantation of an impactor's material onto the regolith of Vesta and Ceres. They found that material can be deposited via impacts, with the amount decreasing with impact angle, using impact speeds in a narrow regime between 4.4 and 4.9 km/s. The main question addressed here is how much of the impactor's material is embedded on/into the target as a function of its porosity.

EXPERIMENTS

We carried out low- and hyper-velocity impact experiments (0.30-3.0 km/s) similar to those of Avdellidou et al. (2016) (hereafter Run#1), with the main difference that we used targets with moderate to high porosity. Both experiments were performed using the horizontal two-stage Light Gas Gun (LGG) of the University of Kent (Burchell et al. 1999), and two different setups were built to capture the projectile's ejecta for further examination (see Fig. 1).
Our aims were to: (a) study the fragmentation of the projectile, (b) derive its energy density at the catastrophic disruption threshold, Q*im, (c) measure the size frequency distributions (SFDs) of the projectile's fragments in the ejecta, (d) estimate the implanted mass in the target, and (e) examine the physical state of the surviving projectile fragments.

Figure 1. The experimental setup used for the icy (a) and regolith-like (b) targets, showing the projectile, which was placed in a sabot inside the two-stage LGG, and the configuration of the target chamber. The projectile impacts the target at 0° with respect to its trajectory (dashed line). The ejecta collection funnel (a), the same used for Run#1 with low-porosity water-ice targets as described in Avdellidou et al. (2016), was aligned with the flight path of the projectile and the centre of the target. It contained water-ice layers in order to collect the projectile's debris after the impact. For the regolith shots (b), a plastic tube was used to capture all the ejecta, with no internal ice coating. In this way the loss of projectile fragments was minimised. We acknowledge that the ejecta collecting systems could possibly lead to secondary fragmentation, but this is considered minimal due to the low ejection speeds observed with porous targets.

Materials and setup

We used olivine projectiles because, together with pyroxene, olivine is one of the most common minerals in the Solar System and is found in asteroids (Petrovic 2001; Gaffey et al. 2002; Nakamura et al. 2011), comets (e.g. the Stardust mission, Zolensky et al. 2006) and planets. The projectiles were 3 mm peridots, high-purity Mg-rich olivine. All projectiles were examined by Raman spectroscopy in the IR (784 nm) and by energy-dispersive X-ray spectroscopy (EDX), revealing homogeneity within each projectile and identical composition between projectiles, a very important aspect for the reproducibility of our experiments. We used two different types of target and ejecta collection setups: (a) In Run#2, water-ice targets with porosity 35-40% were prepared by spraying high-purity water into liquid nitrogen. The ice grains had a range of sizes from sub-mm to a few mm, comparable with the projectile's size. After each shot, in order to recover the projectile's fragments, the icy target and the ice from the ejecta collection setup were left to melt. (b) In Run#3, CaCO3 powder with 70% porosity simulated a regolith-like surface. Using this target we followed a slightly different procedure: in order to collect the projectile's fragments, we had to dissolve the target in nitric acid, leaving the olivine projectile fragments behind. The grainy material was held in place horizontally without the aid of another layer (e.g. a membrane), only by compaction (Run#3) and by air condensation at the low temperature (−130 °C) maintained prior to each shot (Run#2 and Run#3). Each target's temperature during impact was −50 °C, and the target chamber's pressure was set to 50 mbar for all experiments. In both experiments the water-ice melt or the CaCO3 solution in nitric acid was filtered through PTFE (polytetrafluoroethylene) filters with 0.1 and 5 µm pore size for the target and ejecta liquids, respectively. As experiments by Bland et al.
(2008) and Daly & Schultz (2015, 2016) have shown, the largest portion of the impactor's mass is retained on the target when the impact occurs at 90° with respect to the target's surface; we therefore carried out our experiments using the same configuration.

Impactor's fragmentation

After collection of the impactor's fragments from the ejecta, their sizes were measured using the same technique as described in Avdellidou et al. (2016). The olivine fragments were identified with energy-dispersive spectroscopy (EDX maps), as forsterite gives a strong signal in Mg. The Mg maps were processed with SExtractor, an astronomical software package specialised in photometry and the extraction of the light of irregular sources in dense fields (Bertin & Arnouts 1996). Figure 2 shows that the range of slopes of the SFDs per experimental Run does change with increasing target porosity (the distributions become steeper), and was calculated to be between −2.5 and −4.0 for Run#2 and between −3.0 and −4.8 for Run#3, consistently steeper compared to Run#1, where the slopes lie between −1.04 and −1.68 (Avdellidou et al. 2016). Surprisingly, there is no trend in the slopes of the SFDs at different impact speeds within the same Run. The same result was obtained for olivine projectiles fired onto the non-porous water-ice targets (Avdellidou et al. 2016), implying that the impact speed (up to the 3 km/s used here) does not substantially affect the fragmentation behaviour of the peridot projectiles. This result is in contrast to the 'common-sense' assumption that the impactor should produce more numerous, and smaller, ejecta fragments (and thus steeper ejecta SFDs) when it hits the same target at higher speeds. One explanation could be that the olivine debris underwent secondary fragmentation in the ejecta collecting systems. However, we expect this secondary fragmentation to be limited, due to the low ejecta speeds, which are only a small fraction of the incident speed. Another explanation is that the peridot projectiles have a fragmentation behaviour different from that of more ductile (i.e. metal) projectiles (Hernandez, Murr & Anchondo 2006; Kenkmann et al. 2013; McDermott et al. 2016). Nevertheless, there is a clear trend of increasing steepness of the slopes with increasing target porosity. This means that the fraction of small fragments produced is greater than that of the larger ones. As the target's porosity increases, the bulk target itself becomes weaker and is thus easier to penetrate. On the other hand, the increased porosity makes the target effectively 'harder' from the projectile's perspective: increased macroporosity means larger voids inside the target, which dissipate the energy delivered by the impact. As both porous targets consisted of grains, the impact mechanism was not the same as on a non-porous target. During an impact onto a solid material, a shockwave is produced and penetrates the target as well as the projectile.

Figure 2. SFDs of the ejecta fragments of indicative shots in Run#2 and Run#3, showing no significant change of the slope with increasing speed within the same Run. The red dashed lines indicate the threshold range of the detection limit, which lies approximately at 2.2-2.7 µm and 6.2-7.4 µm, respectively (the detection limit differs with magnification). It is debatable whether these turnovers are real and not due to the resolution limit of the EDX mapping and/or the SExtractor software cutoff. A small shift towards larger sizes exists in the ejecta SFD for the shot with the lowest impact speed tested in Run#2, a result similar to Run#1 (usually, for the shot with the lowest speed, the projectile was recovered completely intact). Note that the largest recovered fragments are not included in these distributions.
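The SFD slopes quoted above are power-law indices. One common way to estimate such a slope is a least-squares fit in log-log space to the cumulative distribution; the sketch below assumes that approach (the paper does not state its exact fitting procedure) and uses invented fragment sizes.

```python
import numpy as np

# Hypothetical fragment sizes in microns (as measured, e.g., from EDX maps).
sizes = np.array([2.5, 3.0, 3.2, 4.1, 5.0, 6.3, 8.0, 12.0, 20.0])

# Cumulative SFD: N(>s) for each size s.
s = np.sort(sizes)
n_gt = np.arange(len(s), 0, -1)

# Power law N(>s) ~ s^alpha  =>  log N = alpha * log s + const.
alpha, _ = np.polyfit(np.log10(s), np.log10(n_gt), 1)
print(f"cumulative SFD slope: {alpha:.2f}")
```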
At progressively higher speed impacts, a stronger shock-wave is produced and (depending on the projectile size) will penetrate the projectile completely; when the shock-wave reaches the rear it moves forward again, and so forth, and this causes the fragmentation of the projectile. Whilst impacts on non-porous materials will only 'see' one target, in porous targets comprised of grains with a size comparable to the projectile (as in Run#2), multiple impacts may occur as the impactor penetrates the target material. Each of these impacts will produce a new shock-wave. Therefore, the projectile will suffer multiple shock events, which leads to higher fragmentation. Another way to compare the fragmentation of the peridot projectiles across the three Runs was to look for differences in the largest surviving fragments after each shot, and also at what speed (or energy) is required for catastrophic disruption to occur, when $M_{\rm l,f}/M_{\rm im} = 0.5$ (see Table 1). In Figure 3 we present the mass of the largest fragment we retrieved as a fraction of the initial impactor's mass, in relation to the energy density, which, for a given impact speed $v$ (m/s), is defined as $Q_{\rm im} = \frac{1}{2} v^{2}$. The synthetic basalt projectiles used in Run#1 have a size comparable to the peridots, but require an order of magnitude more energy to retain 50% of their initial mass. Catastrophic disruption for peridot projectiles thus occurs at 1.14 km/s, whereas for synthetic basalt it happens at 2.33 km/s, indicating that peridots are more fragile than basalt. This is in agreement with a comparison of the compressive strengths of the two materials, which are 80 MPa for olivine and 100-250 MPa for basalt (Petrovic 2001; Schultz 1993). There is a small shift of the data towards smaller energies (see Fig. 3) when the peridots impacted the porous water-ice targets (Run#2), in comparison with the same projectiles onto the non-porous water-ice targets (Run#1), and this difference is more obvious at lower speeds. Moreover, the tail of the plot for the porous target appears to be less steep: 50% of the initial impactor mass is preserved at collisional speeds of 1.14 km/s and 0.60 km/s, respectively, giving a reduction in the energy density of a factor of ∼3. Upon increasing the porosity of the target to 70% with the CaCO3 powder, we expected to see a further shift of the energies towards lower values, following the same behaviour as stated earlier. However, this is not observed. On the contrary, the whole dataset shifts to the right (relative to the data from the non-porous water-ice targets), with the collision speed for catastrophic disruption occurring at 2.27 km/s, where we also find the large fragments of the synthetic basalt, giving an increase in the energy density of an order of magnitude (see Table 1). Therefore, the target's porosity is not the only parameter governing the impactor's fragmentation. From the shift towards higher energies in Figure 3 for the regolith targets, it is apparent that the target material and its grain size also contribute to the result.
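Given the definition of the energy density above, the catastrophic-disruption speeds quoted in the text translate directly into specific energies. The minimal sketch below does only that arithmetic; the function name is mine.

```python
def q_im(v_kms: float) -> float:
    """Specific kinetic energy density Q_im = v^2 / 2, in J/kg,
    for an impact speed given in km/s (as defined in the text)."""
    v = v_kms * 1e3  # convert to m/s
    return 0.5 * v ** 2

# Catastrophic disruption speeds quoted above (M_l,f / M_im = 0.5):
for label, v in [("peridot on non-porous ice (Run#1)", 1.14),
                 ("peridot on porous ice (Run#2)", 0.60),
                 ("peridot on CaCO3 powder (Run#3)", 2.27)]:
    print(f"{label}: Q*_im = {q_im(v):.2e} J/kg")
# Run#1 vs Run#2 gives the factor-of-~3 reduction noted in the text.
```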
In Run#2, where peridots impacted onto ∼40% porosity water-ice targets, the ice grain sizes ranged from a few mm (similar to the impactor's size: 3 mm) down to tens of microns, while in Run#3, where peridots hit the regolith-like CaCO3 powder (highest porosity, finest grained), the average grain size dropped significantly, to microns, similar to the finest water-ice 'grains'. Further investigation of all the large fragments, using the Raman spectrometer as detailed in Avdellidou et al. (2016), showed no indication of impact melting, as the Raman spectra show no change in the position or mutual distance of the two characteristic olivine peaks P1 and P2 (Figs. 4 and 5). We assume that, up to the tested impact speeds that gave large identified fragments, the velocity is too low to produce any significant change to the peridot impactor material. Another explanation is that the material examined under the Raman spectrometer originated 'far' from the impact point (i.e. at the middle or back of the projectile) and was thus not strongly affected by the impact shock. Inspection of the recovered fragments gave no indication of where they were originally located within the projectile.

Figure 4. Raman spectra of large fragments recovered after shots (solid line) during Run#2 and Run#3, in comparison with the reference (red dashed line). No significant change above the instrument precision was observed in the P1 and P2 olivine lines.

Figure 5. The change in separation, ω, of the P1 and P2 olivine lines, calculated for all the largest recovered fragments in the range of impact speeds 0.60-3.08 km/s, including results from Avdellidou et al. (2016). The dashed lines indicate the sensitivity limit of the spectrometer.

Implantation of material in the target

The total mass was estimated as the sum of the mass of the fragments directly recovered visually from the target immediately after the shot, and the mass calculated following the same procedure as described above: analysing the filters using the EDX and SExtractor technique, measuring the fragments' x and y dimensions, and assuming that the produced fragments are cuboids with constant density ρ = 3.18 g cm⁻³. In Run#1 the projectile leaves a few per cent of its initial mass in the target even at impact speeds >2.0 km/s. In the subsequent Runs, where the porosity is higher, the amount of implanted material increased considerably. It should be mentioned, though, that only in Run#3 did the embedded mass decrease consistently with increasing impact speed, as was expected. In Run#1 and Run#2, where water-ice targets were used, there is no clear trend; the implanted mass fluctuates over the range of tested speeds. This result may be biased to some extent, as for Run#3 there is no mass estimate of the very small fragments that remained in the target: due to the contaminating residue left after dissolving the CaCO3, it was not possible to perform the EDX mapping. However, the missing mass is small, and the recovered mass of the large fragments found in the targets constituted a very large fraction of the initial impacting mass. Moreover, as mentioned before, in Run#2 the fact that the target's grain size was similar to the impactor's also contributed to the fluctuation of the implanted masses. From the above, it is clear that the target's porosity also plays a definite role in the degree of implantation of the impactor's material on the target after a collision.
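The mass bookkeeping just described can be sketched as follows. The cuboid assumption and the density are from the text; how the unmeasured third dimension was chosen is not stated, so the choice below (the smaller of the two measured sides) is purely an assumption for illustration, as are the fragment sizes.

```python
OLIVINE_DENSITY = 3.18  # g cm^-3, as adopted in the text

def fragment_mass(x_um: float, y_um: float) -> float:
    """Mass (g) of a cuboid fragment with measured sides x, y in microns.
    The third side is an assumption (min of x and y); the paper does not
    specify how it was treated."""
    z_um = min(x_um, y_um)
    volume_cm3 = (x_um * 1e-4) * (y_um * 1e-4) * (z_um * 1e-4)  # µm -> cm
    return OLIVINE_DENSITY * volume_cm3

# Hypothetical fragment list (microns) summed into a recovered-mass estimate.
fragments = [(120.0, 80.0), (45.0, 30.0), (10.0, 8.0)]
total = sum(fragment_mass(x, y) for x, y in fragments)
print(f"implanted mass estimate: {total:.3e} g")
```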
The ejecta velocities also decreased as the target's porosity increased. During Run#1, ejecta flew backwards ∼50 cm, but in Run#3 no ejecta were recovered more than a few cm (∼5 cm) from the impact point. The low ejection velocities may also have contributed to the non-escape of the projectile's material from the target. For a porous material, the ejecta velocities have been measured to be up to two orders of magnitude lower than the ejecta speeds measured for rocks. This means that, for the impact speeds tested in a laboratory, the ejecta cannot fly at speeds beyond a few m/s (Holsapple et al. 2002). Another contributing factor could be that the largest fragments of the ejecta travel at lower velocities than the small ones (Benz & Asphaug 1999); this last factor was partly responsible for the majority of the largest fragments of Run#2 and Run#3 being recovered directly from the target. It has already been shown in previous experiments (Schultz et al. 2007) that projectiles penetrate porous targets to greater depths, and this leads to greater retention of their mass. The effect of the Earth's gravity is an important extra factor in these experiments: as the gun is fired horizontally, loosened material that might otherwise remain in a crater, in the case of an impact on a minor body, will be lost in our experiments. This is because the Earth's gravity acts to reduce the amount of material remaining in the crater when the impact occurs horizontally. This indicates that our results for the implantation of the impactor's material correspond to a lower limit.

CONCLUSIONS

We confirm that, in vertical impacts (90° impact angle), porosity plays a significant role in the fragmentation of the impactor and, more importantly, in the amount of mass implanted on the target. This result has implications for studies of large-scale collisions between asteroids in the Main Belt. Although it was initially believed that after a high-speed collision the impactor is pulverised (and/or vaporised) and unable to embed material into the target body, it is shown herein that such studies should be revised, thus altering the big picture of collisions in the Main Belt and providing formation scenarios for the observed spectral variability of some asteroids, or even for the formation of multi-lithology objects. Future spacecraft observations of asteroid surfaces and sample-return missions, such as Hayabusa2 and OSIRIS-REx, will provide invaluable information on the collisional history of such bodies. Can we find exogenous material on C-type asteroids? It is hoped that the work presented here will help to interpret the data from such space missions.
Antimicrobial Evaluation of Crude Methanolic Leaf Extracts from Selected Medicinal Plants Against Escherichia coli

This study was aimed at determining the antibacterial effects of leaf extracts of Tagetes minuta, Aloe secundiflora, Vernonia lasiopus and Bulbine frutescens against a clinical isolate of Escherichia coli. The plant materials were obtained from the Kenyatta University arboretum, identified by the University taxonomist, and a voucher specimen was deposited in the University. Methanol was used as the solvent for the extraction process, and the antimicrobial activity test was carried out using the disc diffusion method. All the plant extracts analysed had antimicrobial activity against Escherichia coli, with the Tagetes minuta extract being the most active at low concentrations (8.7 mg/ml). The standard antibiotic used as positive control was ciprofloxacin (5 μg/ml), while distilled water and dimethyl sulphoxide were used as negative controls. Phytochemical screening showed the presence of four classes of phytochemicals: saponins, tannins, alkaloids and flavonoids.

Introduction

Medicinal plants are used by almost 80% of the world's population for their basic health care because of their low cost and ready availability [1]. Herbal drugs made from medicinal plants have been used since ancient times to treat various diseases, and their antimicrobial properties make them a rich source of many potent drugs [2]. The use of herbal medicinal plants has always played a positive role in the control or prevention of diseases such as diabetes, heart disorders and various cancers [3]. The genus Tagetes belongs to the Asteraceae family and presently comprises 56 species, 27 biennials and 29 perennials [4]. Tagetes species and chemotypes from this genus have been widely examined for biologically active metabolites that can be used in industry and medicine [5]. Compounds with antimicrobial activity in Tagetes minuta are said to accumulate in the organs of the plant, and its essential oils have not only antimicrobial but also insecticidal properties [6]. Extracts from Tagetes minuta leaves, flowers and stems obtained with methanol have been shown to contain secondary metabolites, including terpenes, which are thought to be responsible for antibacterial activities [7]. The genus Aloe is common in Kenya, with about 60 taxa recognized [8]. Aloe species have antibacterial, antifungal, anticancer, antiviral and immunomodulatory properties [9]. Synonyms of Aloe secundiflora are Aloe floramaculata, Aloe engleri and Aloe marsabitensis [10]. Aloe secundiflora leaf components have been credited with antibacterial, antifungal, antiviral and anthelmintic medicinal properties [10]. Herbalists from the Lake Victoria region have traditionally used Aloe secundiflora to treat ailments including chest problems, polio, malaria and stomach ache [11]. Vernonieae is a tribe of about 1300 plant species in the Asteraceae (Compositae) family, mostly herbaceous plants [12]. Decoctions from the stems and leaves of Vernonia lasiopus have traditionally been used by herbalists in East Africa to treat malaria, worms and gastrointestinal problems [11]. Its extracts have also been used to treat some sexually transmitted diseases in southern parts of Africa [13]. Bulbine is a genus of plants in the family Xanthorrhoeaceae, subfamily Asphodeloideae, and its members are well known for their medicinal value [14].
The Bulbine plant has been used for medicinal purposes since the early eighteenth century, when Dutch and British settlers of South Africa treated various ailments with it [15]. The leaves of the plant have been used in the treatment of wounds thought to be infected with bacterial pathogens, and it has shown antibacterial properties [16]. Some species of the plant found in South Africa have been used for blood cleansing and the treatment of ringworm and gravel rash by local communities such as the Xhosa [15]. A decoction of the bulbs and roots of some species has been used in the treatment of some venereal diseases in women and of stomach upsets [17]. Escherichia coli is part of the normal flora of the human body and can be nonpathogenic, commensal or pathogenic [18]. When pathogenic, it usually causes urinary tract infections, systemic infections and enteric infections [19]. The development of resistance by Escherichia coli, due to the increased use of antimicrobial agents, has led to the use of medicinal plant extracts against it [20]. Medicinal plant extracts have shown antimicrobial activity against enteropathogenic Escherichia coli found in food material [21]. This study aided in the evaluation of such extracts as potential antimicrobial agents against Escherichia coli.

The plant materials were obtained from the Kenyatta University arboretum and identified by the University taxonomist; a voucher specimen was deposited in the university herbarium in the Plant Sciences Department for future reference. The plants were brought to the laboratory, thoroughly washed in running water to remove debris and dust particles, rinsed with distilled water and finally air dried.

Plant extract preparation

The air-dried plant materials were ground into powder and soaked in methanol for 72 hours while placed in a Gallenkamp shaker at 65 revolutions per minute. Thereafter, the contents were homogenized and filtered using Whatman filter paper no. 1. The filtrate was poured into a round-bottom flask, concentrated using a vacuum evaporator and stored in a labelled amber glass bottle at room temperature, away from light and heat, before being used for the antibacterial efficacy test.

Antimicrobial evaluation

The microorganism used was a clinical isolate of Escherichia coli obtained from the Kenyatta University Health Centre Laboratory, Nairobi. It was tested against methanolic leaf extracts of Tagetes minuta, Aloe secundiflora, Bulbine frutescens and Vernonia lasiopus. The Escherichia coli inoculum was standardized by comparison with a 0.5 McFarland standard. Discs of 6 mm were prepared from Whatman no. 1 filter paper and sterilized by autoclaving; after sterilization, the moist discs were dried in a hot air oven at 50°C [22]. The discs were impregnated with the various solvent extracts starting from 1000 mg/ml [23]. The antibacterial efficacy test was carried out using the disc diffusion method [24]. Mueller-Hinton agar was used in the spread plate technique: the clinical isolate of Escherichia coli was spread using sterilized cotton wool swabs and exposed to discs impregnated with the extracts from Aloe secundiflora, Tagetes minuta, Vernonia lasiopus and Bulbine frutescens. The discs were placed at equal distances from one another on the agar plates inoculated with Escherichia coli. Positive control discs contained ciprofloxacin, while negative control discs were impregnated with distilled water and dimethyl sulphoxide. The Petri dishes were incubated at 37°C for 24 hours. Zones of inhibition were measured in millimetres and their averages determined. The experiment was carried out in duplicate and the diameters of the zones of inhibition measured.
The minimum inhibitory concentration (MIC) was evaluated using the microplate method [25]. 100 µl of 250 mg/ml methanol extract was added to 100 µl of sterile bacteriological peptone in the first well of a 96-well microplate and mixed well with a micropipette. 100 µl of this dilution was then transferred to the subsequent wells, two-fold diluting the original extract at each step. This was done for the extracts of Aloe secundiflora, Bulbine frutescens, Vernonia lasiopus and Tagetes minuta. An inoculum of 100 µl (0.5 McFarland standard) of an overnight clinical culture of Escherichia coli was added to each of the wells. Each microplate was prepared in triplicate, and the procedure was repeated for the test organism. The plates were then incubated at 37°C for 24 hours. After incubation, 40 µl of 0.2 mg/ml INT was added to each well and the plates were examined after an additional sixty minutes of incubation. Growth was indicated by a red colour (conversion of INT to formazan). The lowest concentration at which the colour was apparently invisible, as compared with the next dilution, was taken as the minimum inhibitory concentration [26]. The minimum bactericidal concentration (MBC) was determined by taking 100 µl of suspension from the microplate wells that demonstrated no growth and inoculating it onto agar plates. The plates were incubated at 37°C for 24 hours. Where there was no bacterial growth at a concentration greater than the minimum inhibitory concentration, that concentration was taken as the minimum bactericidal concentration [26].
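The two-fold microdilution scheme just described implies a simple concentration series: mixing 100 µl of the 250 mg/ml stock with 100 µl of broth halves the concentration in the first well, and each subsequent transfer halves it again. A minimal sketch (assuming perfect mixing and no volume loss) follows; the number of wells in the series is a placeholder.

```python
STOCK = 250.0   # mg/ml extract added to well 1
WELLS = 8       # wells in the dilution series (hypothetical count)

concentrations = []
c = STOCK / 2   # well 1: 100 µl stock + 100 µl peptone broth
for _ in range(WELLS):
    concentrations.append(c)
    c /= 2      # 100 µl carried into the next well containing 100 µl broth

print([f"{conc:.3f} mg/ml" for conc in concentrations])
# Note: adding 100 µl of inoculum to each well halves these values again in
# the final test volume. MIC = lowest concentration whose well shows no red
# colour after the INT step.
```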
Phytochemical analysis

The presence of saponins, tannins, flavonoids and alkaloids in the crude extracts was determined [27].

Tannins: 5 mg of each extract was dissolved in 1 ml of distilled water. Filtration was carried out after 2 ml of FeCl3 was added. The presence of a blue or black precipitate indicated the presence of tannins [27].

Flavonoids: 5 mg of each extract was dissolved in 1 ml of ethanol and filtered. 2 ml of 1% HCl and a magnesium ribbon were added to the filtrate. The formation of a pink or red colour indicated the presence of flavonoids [27].

Alkaloids: 5 mg of each extract was dissolved in 2 ml of methanol and filtered. 1% HCl was added to the filtrate and the solution heated. Mayer's reagent was added dropwise; the formation of a coloured precipitate indicated the presence of alkaloids [27].

Saponins: 5 mg of each extract was dissolved in 2 ml of methanol and filtered. Distilled water was added and the mixture shaken for a few minutes. Persistent frothing indicated the presence of saponins [27].

Results

All the plant extracts showed considerable antibacterial activity against Escherichia coli, and the activity varied greatly on the Mueller-Hinton agar plates. The extract from Tagetes minuta was more active at low concentrations against Escherichia coli than the other extracts. The standard antibiotic used as positive control (ciprofloxacin) produced large zones of inhibition (20 ± 0.97 mm). The negative controls (distilled water and dimethyl sulphoxide) did not produce any zones of inhibition. The antimicrobial activity of the plant extracts against Escherichia coli was significant (P = 0.001), showing that the extracts had differing, pronounced antibacterial activity against Escherichia coli (Table 1). The plant leaf extracts from Tagetes minuta, Aloe secundiflora, Bulbine frutescens and Vernonia lasiopus were also evaluated for the presence of phytochemicals, as shown in Table 2.

Discussion

Enteric bacterial pathogens are disease-causing microorganisms usually located in the intestinal tracts of animals or human beings. They belong to the family Enterobacteriaceae, a large family of Gram-negative bacteria that also includes many harmless symbionts. The pathogenic members are usually associated with infections characterized by enteric fevers, abdominal pain, diarrhoea and vomiting. Escherichia coli is a Gram-negative, facultatively anaerobic enteric bacterium commonly found in the lower intestinal tract of endotherms. It is part of the normal flora of the gut and benefits the host by producing vitamin K2. However, some serotypes of the bacterium are known to cause disease, the worst being a bloody diarrhoea that can also lead to kidney failure in children and immunocompromised individuals. The emergence of antibiotic-resistant strains of Escherichia coli creates a need for new resources for the manufacture of antibiotics [28]. Medicinal plants have been used since ancient times to produce herbal drugs for treating various diseases, and their antimicrobial properties make them a rich source of many potent drugs [2]. In the study carried out, the extracts from the four medicinal plants showed antimicrobial activity against Escherichia coli. Tagetes minuta was more active at low concentrations than the other extracts. The extract from Tagetes minuta also contained the secondary metabolites alkaloids, saponins, tannins and flavonoids, which are known to have antimicrobial activity; secondary metabolites such as flavonoids have been found to act against both Gram-negative and Gram-positive bacteria [29]. Aloe secundiflora also showed antimicrobial activity against Escherichia coli, producing the largest average zone of inhibition (Table 1). Extracts from Aloe secundiflora have been found to have antimicrobial activity due to the presence of secondary metabolites such as saponins and anthraquinones [30]. In this study, the extract from Aloe secundiflora contained saponins, alkaloids, tannins and flavonoids, which might be responsible for its antimicrobial activity against Escherichia coli. Vernonia lasiopus and Bulbine frutescens also showed a pronounced level of antimicrobial activity against Escherichia coli, and their extracts contained the secondary metabolites shown in Table 2. Extracts from Bulbine frutescens have been used in Southern Africa to treat stomach upsets, which may be caused by food poisoning from enteric bacteria [17]. The secondary metabolites of Bulbine frutescens might be responsible for its antimicrobial activity [31], and a decoction from its roots and bulbs has also been found to have antimicrobial activity [31]. Vernonia lasiopus showed antimicrobial activity against Escherichia coli but produced the smallest average zone compared to the other plant extracts. Aqueous extracts from the plant have shown antimicrobial activity against bacterial pathogens [32], and its antimicrobial activity might likewise be attributed to the presence of secondary metabolites.
Conclusion

This study has revealed that extracts from the selected medicinal plants can be used in treating diseases caused by some of the pathogenic serotypes of Escherichia coli. It further elucidated that secondary metabolites from medicinal plant parts might be responsible for the antibacterial activity of the plant extracts against Escherichia coli. Therefore, there is a need for further evaluation of the purified bioactive components of the extracts, which can be exploited as new potent raw materials for the manufacture of herbal drugs and the production of antimicrobial agents.
Tickborne Lymphadenopathy Complicated by Acute Myopericarditis, Spain

To the Editor: Dermacentor-borne necrosis erythema lymphadenopathy/tickborne lymphadenopathy (DEBONEL/TIBOLA) is an apparently benign, self-limiting rickettsial disease transmitted by Dermacentor ticks (1,2). Rickettsia slovaca was the first etiologic agent isolated, but other species, such as R. raoultii and Candidatus R. rioja, also might be involved (3-6). If the scalp is affected, a larger number of agents (including Francisella tularensis, Bartonella henselae, R. massiliae, R. sibirica mongolitimonae, and Borrelia burgdorferi) should be considered within the differential diagnosis of a similar syndrome recently named scalp eschar associated with neck lymphadenopathy after a tick bite (SENLAT) (7). Nevertheless, in Spain, only R. slovaca, Candidatus R. rioja, and F. tularensis are known to cause DEBONEL/TIBOLA/SENLAT (4,6). This entity is considered an emerging rickettsiosis in Europe; cases have been reported from Italy, France, Hungary, Germany, and Portugal (8).

We recently saw a patient in whom acute myopericarditis developed after he was bitten by a large tick on the scalp and who showed clinical signs of DEBONEL/TIBOLA/SENLAT, most likely attributable to R. slovaca or Candidatus R. rioja infection. The patient, a previously healthy 28-year-old man, went on a day-long hiking trip to the northern mountains of Madrid (central Spain; mean altitude 1,300 m) on November 2, 2014. Three days later, he noticed a mild ache on the occipital area of his scalp and found an attached tick, which he removed with his fingers. A week later, he sought care from an infectious disease specialist because of itchy discomfort at the area of the tick bite. Examination revealed an erythematous and elevated punctiform lesion with mild fluctuation in the occipital region, accompanied by tender, small lymph node enlargement of both occipital lymphatic chains (Figure). No widespread rash was present. DEBONEL/TIBOLA/SENLAT was diagnosed, and doxycycline (100 mg every 12 hours) was initiated. IgG titer against spotted fever group Rickettsia (SFGR) was 1:128.

Four days later, the patient sought care at an emergency department, reporting retrosternal chest pain. Electrocardiogram revealed a diffuse ST-segment elevation with PR-segment depression; serum creatine phosphokinase and troponin T levels were 327 IU/L (reference range 10-190 IU/L) and 420 ng/mL (reference <14 ng/mL), respectively. Myopericarditis was diagnosed. A transthoracic echocardiogram ruled out pericardial effusion, valve vegetations, and left ventricular dysfunction; cardiovascular magnetic resonance imaging performed 4 days later showed myocardial inflammation. Blood cultures were sterile, the pneumococcal urinary antigen test result was negative, and IgM against coxsackievirus and Mycoplasma pneumoniae was not detected. Nonsteroidal antiinflammatory drugs were prescribed. The patient improved clinically, and the electrocardiogram findings resolved. The patient received doxycycline for 4 weeks.

On a convalescent-phase serum specimen collected after 8 weeks, indirect immunofluorescence assays (IFA) for IgG against SFGR were performed in Spain's national reference center for rickettsioses (Hospital San Pedro-Centro de Investigación Biomédica de La Rioja [CIBIR], Logroño, Spain). Commercial (Focus Diagnostics, Cypress, CA, USA) and in-house R. conorii, R. slovaca, and R. raoultii antibody testing showed an IgG titer of 1:512 against the 3 species. A subsequent cross-adsorption assay using R.
slovaca, R. raoultii, and R. conorii antigens prepared on the basis of strains from the collection at Hospital San Pedro-CIBIR showed a decrease in IgG titers against R. conorii and R. raoultii to 1:64 and 1:256, respectively, whereas the titer against R. slovaca remained at 1:512. IFA against Bartonella spp. and C. burnetii (Focus Diagnostics), a chemiluminescence immunoassay for B. burgdorferi (Liaison, DiaSorin, Spain), and an in-house microagglutination assay for F. tularensis were not reactive. The patient recovered, with only residual scarring alopecia on the occipital region of the scalp and without cardiac dysfunction after 9 months of follow-up.

Myopericarditis is a rare complication of rickettsiosis, usually associated with R. rickettsii and R. conorii (9). Although tetracycline-induced cardiac adverse reactions have been described (10), and the patient reported here had signs of myopericarditis shortly after the initiation of doxycycline, he completed a 4-week treatment without recurrence. Therefore, the clinical picture seems unlikely to be attributable to doxycycline-induced toxicity. Because the patient was bitten in November (when only Dermacentor spp. ticks are active in central Spain), we have further epidemiologic evidence for attributing the infection to SFGR causing DEBONEL/TIBOLA/SENLAT. After serum adsorption, the IFA titer against R. slovaca was 8-fold (three doubling dilutions) higher than that against R. conorii. R. slovaca and Candidatus R. rioja are the species most commonly found in D. marginatus ticks and in cases of DEBONEL/TIBOLA/SENLAT in Spain (8). In view of the seroconversion to Rickettsia spp., the negative test results for other possible causative agents, and the clinical response to doxycycline, rickettsiosis caused by R. slovaca or Candidatus R. rioja remains the most probable diagnosis. Because DEBONEL/TIBOLA/SENLAT is an emerging disease, physicians should consider that this entity may be associated with systemic complications similar to those of other tickborne rickettsioses.
High metal content of highly accreting quasars

We present an analysis of UV spectra of 13 quasars believed to belong to the class of extreme Population A (xA) quasars, aimed at the estimation of the chemical abundances of the broad line emitting gas. Metallicity estimates for the broad line emitting gas of quasars are subject to a number of caveats, although present data suggest the possibility of an increase along the quasar main sequence together with the prominence of optical Fe II emission. Extreme Population A sources with the strongest Fe II emission offer several advantages with respect to the quasar general population, as their optical and UV emission lines can be interpreted as the sum of a low-ionization component roughly at the quasar rest frame (from virialized gas), plus a blueshifted excess (a disk wind), in different physical conditions, specifically in terms of ionization parameter, cloud density, metallicity and column density. Capitalizing on these results, we analyze the component at rest frame and the blueshifted one, exploiting the dependence of several intensity line ratios on metallicity $Z$. We find that the validity of intensity line ratios as metallicity indicators depends on the physical conditions. We apply the measured diagnostic ratios to estimate the physical properties of the sources, such as the density, ionization, and metallicity of the gas. Our results confirm that the two regions (the low-ionization component and the blueshifted excess) of different dynamical conditions also show different physical conditions, and they suggest metallicity values that are high, and probably the highest along the quasar main sequence, with $Z \gtrsim 10 Z_{\odot}$. We found some evidence of an overabundance of aluminium with respect to carbon, possibly due to selective enrichment of the broad line emitting gas by supernova ejecta.

INTRODUCTION

Thanks to large public databases such as the Sloan Digital Sky Survey (SDSS) catalogs, we have unrestricted access to a large wealth of astronomical data (for example, several editions of quasar catalogues, Schneider et al. 2010; Pâris et al. 2017, and of value-added measurements by Shen et al. 2011). SDSS spectra of high-redshift quasars (z ≳ 2) cover the rest-frame UV spectral range. It has been known since the 1970s that measurements of UV emission lines can be used to explore the physical and chemical properties of active galactic nuclei (AGN). Landmark papers provided the basic understanding of line formation processes due to photoionization (e.g., Wills & Netzer 1979; Davidson & Netzer 1979; Baldwin et al. 2003). The chemical composition of the line emitting gas is an especially intriguing problem from the point of view of the evolution of cosmic structures, but also from the technical side. Nagao et al. (2006b) investigated BLR metallicities using various emission-line flux ratios and claimed that the typical metallicity of the gas in that region is at least super-solar, with typical Z ∼ 5 Z⊙. Moreover, studies of the metallicity-redshift dependence (Nagao et al. 2006b; Juarez et al. 2009) show a lack of metallicity evolution up to z ≈ 5. Similar results were also obtained by Nagao et al. (2006a). The highest-redshift quasars (z ≳ 5; e.g., Bañados et al. 2016; Nardini et al. 2019) are known to show UV spectra remarkably similar to those observed at low redshift, especially the ones accreting at a high rate and radiating at a high Eddington ratio (Diamond-Stanic et al. 2009; Plotkin et al. 2015; Sulentic et al. 2017).
Perhaps surprisingly, these sources are suspected to have high metal content in their line emitting gas, due to the consistent values of several diagnostic ratios measured in quasars with similar spectral properties at low and high z (Martínez-Aldama et al. 2018), indicating highly super-solar metal content. Several techniques are applied to estimate the chemical composition of Galactic nebulae (see e.g., Feibelman & Aller 1987 for planetary nebulae). Classical techniques used for Hii and other nebulae (including the Narrow Line Regions, NLRs) are unfortunately not applicable to the broad line regions of quasars. Permitted and inter-combination lines are too broad to resolve the fine structure components of doublets; line profiles are composites and may originate in regions that are spatially unresolved, and unresolved or only partially resolved in radial velocity as well. However, quasar emission line profiles still offer important clues in the radial velocity domain. The shape of the profile is strongly dependent on the ionization potential of the ionic species from which the line is emitted: it is expedient to subdivide the broad lines into low- and high-ionization lines (LILs and HILs). The LIL group in the spectral range under analysis (1200 Å - 2000 Å) includes the following lines: Siiiλ1263, Siiiλ1814, Aliiλ1671, Aliiiλ1860, Siiii]λ1892, Ciii]λ1909. High ionization lines are Niv]λ1486, Oiv]λ1402, Civλ1549, Siivλ1397, Oiii]λ1663, and Heiiλ1640 (for a detailed discussion see Collin-Souffrin & Lasota 1988; Gaskell 2000). The Aliii, Siiii], and Ciii] lines are sometimes referred to as "intermediate ionization lines": even if they are mainly produced within the fully ionized region of the emitting gas clouds (Negrete et al. 2012), the ionization potential of their ionic species is closer to the ones of the LILs, and typically ≲ 20 eV. The two groups of lines (HILs and LILs) do not only show different kinematic properties (Sulentic et al. 1995), but their emission is also likely to occur in fundamentally different physical conditions. The HILs are also characterized by the evidence of strong blueshifted emission, very evident in Civ (e.g., Sulentic et al. 2007; Richards et al. 2011; Coatman et al. 2016). Therefore, a careful line comparison/decomposition is necessary, lest inferences be associated with a non-existent region with inexplicable properties. The interpretation of the two line components involves a virialized region of relatively low ionization (hereafter referred to as the virialized, low-ionization BLR, associated with a symmetric broad component, BC), possibly including emission from the accretion disk, and a region of higher ionization, associated with a disk wind or a clumpy outflow, a scenario further developed by Elvis (2000), and observationally supported by reverberation mapping (e.g., Peterson & Wandel 1999) and by the apparent lack of correlation between HILs and LILs in luminous quasars (e.g., Mejía-Restrepo et al. 2016; Sulentic et al. 2017). Even if all lines were emitted by a wind (Murray et al. 1995; Murray & Chiang 1997; Proga 2007a), the conditions at the base of the wind may strongly differ from the ones downstream in the outflow. While each UV metal line contains information related to composition (Hamann & Ferland 1992), not all of the lines listed above can be used in practice.
For instance, the Nv and Siiiλ1263 lines are strongly affected by blending with Lyα; other lines such as Siiiλ1814 and Niv]λ1486 are usually weak and require high S/N to be properly measured. The choice of diagnostic ratios used for metallicity estimates will be a compromise between S/N, easiness of deblending, and straightforwardness of physical interpretation. In practice, apart from Lyα, only the strongest broad features will be considered as potential metallicity estimators in this work (Sect. 3). The ratio (Siiv+Oiv])/Civ has been widely used in past studies (Hamann & Ferland 1999, and references therein); this ratio is relatively easy to measure and seems to be the most stable ratio against the distribution of gas densities and ionization parameters in the BLR (Nagao et al. 2006b). The ratios involving Nvλ1240, like Nv/Civ, are apparently more sensitive to the ionization parameter and to the nitrogen abundance (e.g., Dietrich et al. 2003; Wang et al. 2012a). We will rediscuss the use of these ratios in the context of the xA quasar spectral properties (Sect. 5.7). Both physical conditions and chemical abundances vary along the quasar main sequence (see e.g., Sulentic et al. 2000b; Kuraszkiewicz et al. 2009; Shen & Ho 2014; Wildy et al. 2019; Panda et al. 2020b). Solar and even slightly subsolar values are possible toward the extreme Population B, where Feii emission is often undetectable above noise (e.g., Hamann et al. 2002; Punsly et al. 2018). At the other extreme, where Feii is most prominent, estimates suggest Z ≳ 10 Z⊙ (Panda et al. 2018). Baldwin et al. (2003) derived Z ≈ 15 Z⊙, although in the particular case of a "nitrogen-loud" quasar. Apart from the extremes, it is not obvious whether there is a continuous systematic trend along the sequence. Previous estimates consistently suggest super-solar metallicity up to Z ≳ 10 Z⊙ (Warner et al. 2004). Other landmark studies consistently found supersolar metallicity: Hamann & Ferland (1992) derived Z up to 15 Z⊙; Nagao et al. (2006b) found typical values Z ≈ 5 Z⊙, with Z ∼ 10 Z⊙ for the most luminous quasars from the (Siiv+Oiv])/Civ ratio. Sulentic et al. (2014) inferred a large dispersion, with the largest values in excess of 10 Z⊙. Similar results were reached by Shin et al. (2013), whose (Siiv+Oiv])/Civ ratio measurements suggested Z ≳ 10 Z⊙. Most interesting along the quasar main sequence are the high accretors. They are selected according to empirical criteria (e.g., Wang et al. 2013; Wang et al. 2014; Du et al. 2016a), and defined by having R_FeII > 1, that is, with the flux of the Feiiλ4570 blend on the blue side of Hβ (as defined by Boroson & Green 1992) exceeding the flux of Hβ. In the optical diagram of the quasar main sequence (Sulentic et al. 2000b; Shen & Ho 2014) they are at the extreme tip in terms of Feii prominence, and identified as extreme Population A (hereafter xA), following Sulentic et al. (2002). Depending on redshift, we look for high accretors using different criteria. In the case of z ≳ 1, it is expedient to use a criterion based on two UV line intensity ratios (see the sketch below):

• Aliii/Siiii] > 0.5

• Ciii]/Siiii] < 1.0.

These criteria are met by the sources identified as xA Population by Sulentic and collaborators. xA quasars are radiating at the highest luminosity per unit mass and, at low z, they are characterized by relatively low black hole masses for their luminosities and high Eddington ratios (Mathur 2000; Sulentic et al. 2000a). There is evidence that xA sources tend to have high metallicity (Shemmer et al. 2004; Martínez-Aldama et al. 2018).
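To make the selection operational, the two UV criteria quoted above can be encoded directly. The following is a minimal sketch (the function and variable names are ours, not from the paper), assuming the two intensity ratios have already been measured:

```python
# Minimal sketch of the UV-based xA selection criteria.
# r_al_si = Aliii λ1860 / Siiii] λ1892; r_c3_si = Ciii] λ1909 / Siiii] λ1892.

def is_extreme_pop_a(r_al_si, r_c3_si):
    """True if both UV intensity-ratio criteria for xA sources are met."""
    return r_al_si > 0.5 and r_c3_si < 1.0

# Example: a source with Aliii/Siiii] = 0.8 and Ciii]/Siiii] = 0.6 qualifies.
print(is_extreme_pop_a(0.8, 0.6))   # True
print(is_extreme_pop_a(0.4, 1.2))   # False
```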
Similar properties have been identified as characteristic of narrow-line Seyfert 1 galaxies (NLSy1s) with strong Feii emission. NLSy1s also have unusually high metallicities for their luminosities. Shemmer & Netzer (2002) have shown that NLSy1s deviate significantly from the nominal relationship between metallicity and luminosity in AGN. As several studies distinguish between NLSy1s and "broader-lined" AGN, we remark here that all Feii-strong NLSy1s meeting the selection criterion R_FeII > 1 are extreme Pop. A sources (imposing a fixed limit on line FWHM, although very convenient observationally, has no direct physical meaning, and its interpretation might be sample dependent; see Marziani et al. 2018 for a discussion of the issue). The aim of this work is to investigate the metallicity-sensitive diagnostic ratios of the UV spectral range for extreme Population A quasars, i.e., for highly accreting quasars. Section 2 defines the selection of our sample, and provides some basic information on the sample quasars. In Sect. 3 we define the diagnostic ratios, and describe the basic observational results. In Section 4 we compare the measured diagnostic ratios with the ones obtained from photoionization simulations. In Sect. 5 we discuss our results in terms of method caveats, metal enrichment, accretion parameters and their implications on the nature of xA sources. We show the UV spectra in Appendix A, along with the multicomponent fit analysis of the emission blends, and in Appendix B we show the trend of Z-sensitive ratios as a function of ionization parameter, density, and metallicity.

Sample definition

Qualitatively, extreme Pop. A objects show prominent Aliii and weak or absent Ciii] emission lines. In general, they show low emission line equivalent widths (≈ 1/2 of them meet W(Civ) ≲ 10 Å and qualify as weak-lined quasars following Diamond-Stanic et al. 2009; weak-lined quasars are mostly xA sources, judging from their location along the MS, Marziani et al. 2016a, and from the fact that the limit at W ≈ 10 Å separates the low-W side of a continuous distribution of the xA Civ equivalent width peaked right at around 10 Å, Martínez-Aldama et al. 2018), and a spectrum that is easily recognizable even by visual inspection, also because of the "trapezoidal" shape of the Civ profile and the intensity of the λ1400 blend, comparable to the one of Civ (Sulentic et al. 2000a). We focus on the spectral range from ≈ 1200 Å to 2100 Å, where (1) the UV lines used for xA identification are present, and (2) the strongest emission features helpful for metallicity diagnostics are also located. The Lyα + Nv blend is usually too heavily compromised by absorptions, which make it impossible to reconstruct the emission components, especially for Lyα. We will make some considerations on the mean strength of Nv with respect to Civ and Heiiλ1640 (Sect. 5.3), but will not consider Nv as a diagnostic. We selected SDSS DR12 spectra in the redshift range 2.15 < z < 2.40, relatively bright (r < 19) to ensure moderate-to-high S/N in the continua (in all cases S/N ≳ 5 in the continuum, and the wide majority with S/N ≳ 10), and of low declination δ < 10°. The redshift range was chosen to allow for the possibility of Hβ coverage in the H band by eventual near-IR spectroscopic observations. The DR12 sample selected with these criteria is ≈ 500 sources strong.
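The pre-selection just described amounts to three simple catalog cuts. A minimal sketch follows; the catalog and its column names ('z', 'rmag', 'dec') are placeholders of our own, not the actual SDSS schema:

```python
import pandas as pd

# Illustrative reproduction of the DR12 pre-selection cuts (redshift,
# r-band magnitude, declination) on a toy catalog.
catalog = pd.DataFrame({
    "name": ["QSO1", "QSO2", "QSO3", "QSO4"],
    "z":    [2.20, 2.35, 1.90, 2.25],
    "rmag": [18.5, 18.8, 18.2, 19.4],
    "dec":  [5.0, 2.0, 8.0, 30.0],
})

mask = (catalog["z"].between(2.15, 2.40)   # allows H-band Hβ follow-up
        & (catalog["rmag"] < 19.0)         # moderate-to-high continuum S/N
        & (catalog["dec"] < 10.0))         # low declination
print(catalog[mask]["name"].tolist())      # ['QSO1', 'QSO2']
```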
xA sources were selected out of this sample with an automated procedure, inspected to avoid broad absorption lines, and further vetted, obtaining a small pilot sample of ∼ 10 sources. A larger sample of xA sources will be considered in a subsequent work (Garnica et al., in preparation). The final selection includes 13 sources. With the adopted selection criteria in flux and redshift, we expect a small dispersion in the accretion parameters (especially luminosity; Sect. 5.2). Indeed, the selected sources are rather homogeneous in terms of spectral appearance, with a few sources included in our sample that however show borderline criteria. They will be considered in Sect. 4.1.1 in terms of their individual U, n_H. Table 1 provides basic information for the 13 sources of our sample: the SDSS name, the redshift from the SDSS, the difference between our redshift estimation using Aliii (described in Sect. 3.1) and the SDSS redshift, δz = z − z_SDSS, the g-band magnitude provided by Adelman-McCarthy et al. (2008a), the g − r color index, the specific continuum flux at 1700 Å and 1350 Å measured on the rest frame, and the S/N at 1450 Å, obtained by the splot task with a cursor script within the IRAF data reduction package. All other sources were covered by the FIRST survey (Becker et al. 1995), but undetected. Considering that the typical rms scatter of the FIRST radio maps is ≈ 0.15 mJy, and the typical fluxes of the sources in the g band, we have upper limits on the radio-to-optical ratio qualifying the sample sources as radio quiet. Distances were computed using the formula provided by Sulentic et al. (2006, their Eq. B.5), and ΛCDM cosmology (Ω_Λ = 0.7, Ω_M = 0.3, H_0 = 70 km s⁻¹ Mpc⁻¹). The bolometric luminosity is around ∼ 10⁴⁷ erg s⁻¹, assuming a bolometric correction B.C._1350 = 3.5 (Richards et al. 2006). The sample rms is just ≈ 0.2 dex: all sources are in a narrow range of distances and have observed fluxes within a factor ≈ 2 of their average. This is, in principle, an advantage for the estimation of physical parameters such as L/L_Edd, considering the large uncertainty and serious biases associated with the estimation of M_BH from UV high-ionization lines. Accretion parameters will be discussed in Sect. 5.2.

Redshift determination

The estimate of the quasar systemic redshift in the UV is not trivial, as there are no low-ionization narrow lines available in the spectral range (Vanden Berk et al. 2001). In practice, one can resort to the broad LILs. Negrete et al. (2014) and Martínez-Aldama et al. (2018) consider the Siiiλ1263 and Oiλ1302 lines to obtain a first estimate. A re-adjustment is then made from the wavelength of the Aliii doublet, which is found, in almost all cases, to have a consistent redshift. To determine the Aliii shift, those authors used multicomponent fits with all the lines in the region of the λ1900 blend included. The peak of Aliii is clearly visible in the spectra of our sample, since in high accretors the emission of Aliii is strong with respect to the other lines in the blend at λ1900 Å. We decided to use only this method for the redshift estimation (in Tab. 1), and to measure the peak we use a single Gaussian fit from the splot task of the Aliii doublet and/or of the Siiii] line, depending on which feature is sharper (a minimal numerical sketch of the peak-fit procedure is given below). The obtained values are usually ≥ z_SDSS (Table 1). This is not a surprise, as z_SDSS is based on lines that are mainly blueshifted in xA sources, and hence is a systematic underestimation of the unbiased redshift.
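A sketch of that single-Gaussian peak measurement is shown below, on synthetic data; 1860 Å is adopted here as the effective rest wavelength of the blended Aliii doublet, an assumption of this example rather than a value quoted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

# Redshift estimate from the Aliii peak: fit a single Gaussian to the
# (synthetic) doublet and convert the centroid to z.
ALIII_REST = 1860.0  # effective rest wavelength of the blend, Å (assumed)

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
wave = np.linspace(6150.0, 6250.0, 300)                       # observed frame, Å
flux = gaussian(wave, 1.0, 6200.0, 8.0) + rng.normal(0.0, 0.02, wave.size)

popt, _ = curve_fit(gaussian, wave, flux, p0=(1.0, 6200.0, 10.0))
z_aliii = popt[1] / ALIII_REST - 1.0
print(f"z(Aliii) = {z_aliii:.4f}")                            # ~2.33, within 2.15-2.40
```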
3.2. Diagnostic ratios sensitive to U, density, Z

Line ratios are sensitive to different parameters. In the UV range, three groups of diagnostic ratios are defined in the literature (e.g., Negrete et al. 2012; Martínez-Aldama et al. 2018):

• In principle, Civ/Heii and Siiv/Heii should be sensitive to the C and Si abundances, because the He abundance can be considered constant. The ionization potentials of C²⁺ and He⁺ are similar. The main difference is that the Heii line is a recombination line, equivalent to the Hi Hα, and the regions where they are formed are not coincident (see Fig. 4).

• Ratios involving Nv, Nv/Civ and Nv/Heii, have also been widely used in past work, after it was noted that the Nv line was stronger than expected in a photoionization scenario (e.g., Osmer & Smith 1976). A selective enhancement of nitrogen (Shields 1976) is expected from secondary production at high metallicity (e.g., Vila-Costas & Edmunds 1993; Izotov & Thuan 1999). This process might be especially important at the high metallicities inferred for the quasar BLR. Therefore, estimates based on Nv may differ in a systematic way from estimates based on other metal lines (e.g., Matsuoka et al. 2011). In the present sample of quasars, contamination by narrow and semi-broad absorption is severe, and even if we model the high-ionization lines precisely, it might be impossible to reconstruct the unabsorbed profile of the red wing of Lyα. In addition, the S/N is not sufficient to allow for a careful measurement of the Niv]λ1486 and Niii]λ1750 lines. We defer the systematic analysis of the nitrogen lines to a subsequent work, while discussing the consistency of the Nv measures in a high-Z scenario (Sect. 5.3).

• Siiii]/Siiv, Siiiλ1814/Siiii], and Siiiλ1814/Siiv are sensitive to the ionization parameter and insensitive to Z, as they involve different ionic species of the same element.

Other intensity ratios entail a dependence on metallicity Z, but also on the ionization parameter U and density n_H.

Line interpretation and diagnostic ratios

The comparison between LILs and HILs has provided insightful information over a broad range of redshift and luminosity (Corbin & Boroson 1996; Marziani et al. 1996, 2010; Sulentic et al. 2017; Bisogni et al. 2017; Shen 2016; Vietri et al. 2018). A LIL-BLR appears to remain basically virialized (Marziani et al. 2009; Sulentic et al. 2017), as the Hβ profile remains (almost) symmetric and unshifted with respect to the rest frame even if Civ blueshifts can reach several thousands of km s⁻¹. In Population A, the lines have been decomposed into two components:

• The broad component (BC), usually modeled by a symmetric profile centered at the rest frame (e.g., Sulentic et al. 2002; Zhou et al. 2006), which is believed to be associated with a virialized BLR subsystem.

• The blueshifted component (BLUE). A strong blue excess in Pop. A Civ profiles is obvious, as in some Civ profiles (like the one of the xA prototype I Zw 1, or in high-luminosity quasars) BLUE dominates the total emission line flux (Marziani et al. 1996; Leighly & Moore 2004; Sulentic et al. 2017). For BLUE, there is no evidence of a regular profile, and the fit attempts to empirically reproduce the observed excess emission. BLUE is detected in a LIL such as Hβ at a very low level, and does not strongly affect the FWHM measurements (Negrete et al. 2018).

Broad component

Diagnostic ratios are not equally well measurable for the BC and the BLUE. For the BC, the following constraints and caveats apply:

Civ, Siiv, Aliii over Heii - Heii is weak but measurable in most of the objects. Ratios such as Civ/Heiiλ1640, Siiv/Heiiλ1640, Aliii/Heiiλ1640 (U-dependent) offer Z indicators, although, as mentioned, we have to consider that the metal line formation zones are not always coincident with the one of Heiiλ1640 (see Figure 4).
Especially for the low-ionization conditions of the BC emitting gas, these ratios are well behaved (Sect. 3.5 and 3.6) and will form the basis of the Z estimates presented in this paper.

Siiv/Civ - There are problems in estimating the Siiv line intensity: an overestimation might be possible because of difficult continuum placement (see, for example, the case of SDSS J085856.00+015219.4 in Appendix A). The relative contribution of Siiv to the blend at λ1400 is unclear (Wills & Netzer 1979). A strong BC contribution of Oiv] is unlikely, as this line has a critical density n_c ∼ 10¹⁰ cm⁻³ (Zheng 1988; see also the isophotal contours of Siiv/Oiv] in Appendix B). Our measurements are nonetheless compared to the CLOUDY prediction for Siiv plus the total Oiv emission.

Aliii/Siiii] - This ratio is sensitive to density in the low-ionization BLR domain (Negrete et al. 2012). Values Aliii/Siiii] > 1 are possible if the density is higher than 10¹¹ cm⁻³, the critical density of Siiii]. We will not use this parameter as a metallicity estimator, although, in principle, for fixed physical conditions (setting n_H and U) the Aliii/Siiii] and Siiii]/Ciii] ratios may become dependent mainly on electron temperature and thus on metallicity (Sect. 3.5). The ratio of the total emission in the λ1900 blend (Aliii + Siiii] + Ciii]) over Civ has been used as a metallicity estimator. Considering the uncertain contribution of Feiii emission, and especially of the Feiiiλ1914 line in the xA spectra, we will not use the total intensity of the λ1900 blend as a diagnostic.

Civ/Aliii - Biases might be associated with the estimate of the Civλ1549 BC, especially when BLUE is so prominent that the Civλ1549 BC contributes only a minority fraction.

BLUE component

Civ/Heiiλ1640 - The Heiiλ1640 BLUE is well visible, merging smoothly with the red wing of Civ. The ratio Civ/Heiiλ1640 might be affected by the decomposition of the blend, leading to an overestimate of the Heii emission. This ratio is in principle sensitive to metallicity. However, the increase is not monotonic at high U (Fig. 2). The resulting effect is that the Civ/Heiiλ1640 ratio within the uncertainties leaves Z unconstrained between 0.1 and 100 solar.

Civ/(Oiv] + Siiv) - The blueshifted excess at λ1400 is ascribed to Oiv + Siiv emission. A significant contribution can be associated with Oiv], and several transitions of Oiv that are computed by CLOUDY (see e.g., Keenan et al. 2002) are especially relevant at high U values and moderately low n_H (∼ 10⁸ cm⁻³). The blue side of the line is relatively straightforward to measure for computing Civ/λ1400 with a multicomponent fit, although difficult continuum placement, narrow absorption lines, and blending on the blue side make it difficult to obtain a very precise measurement. A total λ1400 BLUE emission exceeding Civ is possible if, assuming log U ≈ 0 and log n_H ≈ 9 [cm⁻³], the metallicity value is very high, 20 ≲ Z ≲ 100 Z⊙ (Sect. 3.6).

Analysis via multicomponent fits

We analyze the 13 objects using the specfit task from IRAF (Kriss 1994). The χ² minimization is meant to provide a heuristic separation between the broad component (BC) and the blue component (BLUE) of the emission lines. After the redshift correction, following the method described in Sect. 3.1, for each source of our sample we perform a detailed modeling using the various components described below, including the computation of asymmetric errors (Sect. 3.4.1). As mentioned in Sect.
3.2, in our analysis we consider five diagnostic ratios for the BC: Civ/λ1400, Civ/Heiiλ1640, Aliii/Heiiλ1640, λ1400/Heiiλ1640, λ1400/Aliii, and three for the BLUE: Civ/λ1400, Civ/Heiiλ1640, λ1400/Heiiλ1640. The Civ/Heiiλ1640 is used with care, as it may yield poor constraints. In addition, it is important to stress that, of the five ratios measured on the BC, only three (the ones dividing by the intensity of the Heiiλ1640 BC) are independent. We compare the fit results with arrays of CLOUDY (Ferland et al. 2013) simulations for various metallicities and physical conditions (Sect. 3.5). For each source we perform the multicomponent fitting in the three ranges described below. The best fit is identified by the model with the lowest χ², i.e., with the minimized difference between the observed and the model spectrum. Following the data analysis by Negrete et al. (2012), we use the following components:

The continuum - was modeled as a power law, and we use the line-free windows around 1300 and 1700 Å (two small ranges where there are no strong emission lines) to scale it. If needed, we divide the continuum into three parts (corresponding to the three regions mentioned below). The assumed continua are shown in the Figures of Appendix A.

Feii emission - usually does not contribute significantly in the studied spectral ranges. We consider the Feii template based on the CLOUDY simulations of Brühweiler & Verner (2008) when necessary. In practice, the contamination by the blended Feii emission yielding a pseudo-continuum is negligible. Some Feii emission lines were detectable in only a few objects, around ≈ 1715 Å, at 1785 Å, and at 2020 Å. In these cases, we model them using single Gaussians.

Feiii emission - affects more the 1900 Å region and seems to be strong when Aliiiλ1860 is strong as well (Hartig & Baldwin 1986). To model these lines we use the template of Vestergaard & Wilkes (2001).

Region 1300-1450 Å - is dominated by the Siiv + Oiv] high-ionization blend with a strong blueshifted component. Fainter lines such as Siiiλ1306, Oiλ1304 and Ciiλ1335 are also detectable. For the broad and blueshifted components we use the same model as in the case of Civ and Heiiλ1640. This spectral range is often strongly affected by absorption.

Region 1450-1700 Å - is dominated by the Civ emission line, which we model as a Lorentzian profile fixed at the rest-frame wavelength, representing the BC, plus two blueshifted asymmetric Gaussian profiles whose parameters vary freely. The same model is used for Heiiλ1640.

Region 1700-2200 Å - is dominated by the Aliii, Siiii] and Feiii intermediate-ionization lines. We model Aliii and Siiii] using Lorentzian profiles, following Negrete et al. (2012). Ciii] emission is also included in the fit, although the dominant contribution around λ1900 is to be ascribed to Feiii (Martínez-Aldama et al. 2018, and references therein). We use the template of Vestergaard & Wilkes (2001) to model the Feiii emission. No BLUE is ascribed to these intermediate ionization lines.

Absorption lines - are modeled by Gaussians, and included whenever necessary to obtain a good fit. The fits to the observed spectral ranges are shown in the Figures of Appendix A (a toy sketch of this decomposition is given below).
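The following toy version of the λ1549 decomposition illustrates the model structure just described; all amplitudes, widths and shifts are made-up numbers, and in the paper the parameters are of course optimized by χ² minimization with specfit rather than set by hand:

```python
import numpy as np

# Toy Civ decomposition: power-law continuum + Lorentzian BC fixed at the
# rest wavelength + one blueshifted asymmetric Gaussian for BLUE.
CIV_REST = 1549.06  # Å

def power_law(w, f0, alpha, w_ref=1700.0):
    return f0 * (w / w_ref) ** alpha

def lorentzian(w, amp, center, fwhm):
    gamma = 0.5 * fwhm
    return amp * gamma**2 / ((w - center) ** 2 + gamma**2)

def asym_gaussian(w, amp, center, sig_blue, sig_red):
    sigma = np.where(w < center, sig_blue, sig_red)  # different blue/red widths
    return amp * np.exp(-0.5 * ((w - center) / sigma) ** 2)

wave = np.linspace(1450.0, 1700.0, 1000)             # rest frame, Å
model = (power_law(wave, 1.0, -1.5)
         + lorentzian(wave, 0.6, CIV_REST, 15.0)     # BC, at rest frame
         + asym_gaussian(wave, 0.9, CIV_REST - 20.0, 25.0, 12.0))  # BLUE
print(f"peak model flux: {model.max():.2f}")
```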
Error estimation on line fluxes

The choice of the continuum placement is the main source of uncertainty in the measurement of the emission line intensities. The fits in Appendix A show that, in the majority of cases, the FWHMs of the Aliii and Siiii] lines (assumed equal) satisfy the condition FWHM(Aliii) ∼ FWHM(Civ_BC) ∼ FWHM(Siiv_BC). Figure 1 shows the best fit, maximum and minimum placements of the continuum, which we choose empirically. With this approach the continua of Figure 1 should provide the continuum uncertainty at a ±3σ confidence level. The continuum placement strongly affects the measurement of extended features such as the Feiii blends and the Heiiλ1640 emission. Figure 1 makes it evident that the errors on the fluxes are asymmetric. The thick line shows the best-fit continuum, and the thinner lines the minimum and maximum plausible continua. Even if the minimum and maximum are displaced by the same difference in intensity with respect to the best-fit continuum, assuming the minimum continuum would yield an increase in line flux larger than the flux decrease obtained by assuming the maximum continuum level. In other words, a symmetric uncertainty in the continuum specific flux translates into an asymmetric uncertainty in the line fluxes. To manage asymmetric uncertainties, we assume that the distribution of errors follows the triangular distribution (D'Agostini 2003). This method assumes a linear decrease on either side of the maximum of the distribution (which is the best fit in our case) to the values obtained for the maximum and minimum contributions of the continuum. We adopt the triangular error distribution as a relatively easy analytical method to deal with asymmetric errors. For each line measurement we calculate the variance using the formula for the variance of the triangular distribution,

σ² = [(Δx₊)² + (Δx₋)² + Δx₊Δx₋] / 18,

where Δx₊ and Δx₋ are the differences between the measurement with the maximum and the best continuum, and between the best and the minimum continuum, respectively. To analyze the errors of the diagnostic ratios we propagate uncertainties using the standard formulas of error propagation.

Photoionization modeling

To interpret our fitting results we compare the line intensity ratios for BC and BLUE with the ones predicted by CLOUDY simulations (Ferland et al. 2013; the arrays were computed over several years with CLOUDY 13.05, in large part before CLOUDY 17 became available). An array of simulations is used as a reference for comparison with the observed line intensity ratios. It was computed under the assumptions that (1) the column density is N_c = 10²³ cm⁻²; (2) the continuum is represented by the model continuum of Mathews & Ferland (1987), which is believed to be appropriate for Population A quasars; and (3) microturbulence is negligible. The simulation arrays cover the hydrogen density range 7.00 ≤ log(n_H) ≤ 14.00 and the ionization parameter range −4.5 ≤ log(U) ≤ 1.00, in intervals of 0.25 dex. They are repeated for values of the metallicity in a range encompassing five orders of magnitude: 0.01, 0.1, 1, 2, 5, 10, 20, 50, 100, 200, 500 and 1000 Z⊙. An extremely high metallicity Z ≳ 100 Z⊙ is considered physically unrealistic (Z ≈ 100 Z⊙ implies that more than half of the gas mass is made up of metals!), unless the enrichment is provided in situ within the disk (Cantiello et al. 2020). In several cases, the simulations suffered convergence problems if Z ≳ 500 Z⊙. The behavior of the diagnostic line ratios as a function of U and n_H for selected values of Z is shown in Fig. 18 of Appendix B.

Basic Interpretation

The line emissivity ε_coll (erg cm⁻³ s⁻¹) of a collisionally excited line emitted from an element X in its i-th ionization stage has a strong temperature dependence. In the high-density limit the line is said to be "thermalized," as its strength depends only on the atomic level population and not on the transition strength (Hamann & Ferland 1999):

ε_coll ∝ n_{X_i} β A_{X_i,ul} hν_ul exp(−hν_ul/kT_e),
where β is the photon escape probability and A_{X_i,ul} is the spontaneous decay coefficient. At low densities we have

ε_coll ∝ n_e n_{X_i} T_e^{−1/2} exp(−hν_ul/kT_e) hν_ul.

The recombination lines considered in our analysis are Hβ and Heiiλ1640, for which the emissivity (with an approximate dependence of the radiative recombination coefficient α on the electron temperature, α ∝ T_e^{−1}; Osterbrock & Ferland 2006) becomes

ε_rec ∝ n_e n_{Y_j} α(T_e) hν ∝ n_e n_{Y_j} T_e^{−1} hν,

where n_{Y_j} is the number density of the parent ion. Under these simplifying, illustrative assumptions we can write

ε_coll/ε_rec ∝ (n_{X_i}/n_{Y_j}) T_e^{1/2} exp(−hν_ul/kT_e)

for the low-density case, and

ε_coll/ε_rec ∝ (n_{X_i}/n_{Y_j}) (β A_{X_i,ul}/n_e) T_e exp(−hν_ul/kT_e)

for the high-density case. Similarly, for the ratio of two collisionally excited lines at frequencies ν₀ and ν₁,

ε₀/ε₁ ∝ (n_{X_i}/n_{X'_{i'}}) (ν₀/ν₁)^κ exp[−h(ν₀ − ν₁)/kT_e],

where κ = 1, 2 in the high- and low-density case respectively. Connecting the relative chemical abundances to the line emissivity ratios in the previous equations requires the reconstruction of the ionic stage distribution for each element, i.e., the computation of the ionic equilibrium, as well as the consideration of the extension of the emitting region within the gas clouds (i.e., the fact that the line emission is not cospatial), and of possible differences in optical depth effects. This is achieved by the CLOUDY simulations. However, we can see that the main variable parameter for a given relative emissivity is T_e. In other words, the electron temperature is the main parameter connected to metallicity. This is especially true for fixed physical conditions (U, n_H, N_c = 10²³, SED given). This is most likely the case for xA sources: the spectral similarity implies that the scatter in physical properties is modest. We further investigate this issue in Sect. 4.3.

Explorative analysis of photoionization trends at fixed ionization parameter and density

One of the main results of previous investigations is the systematic difference in ionization between BLUE and BC (Negrete et al. 2012; Sulentic et al. 2017). Previous inferences suggest very low ionization (U ∼ 10⁻²·⁵), also because of the relatively low Civ/Hβ ratio for the BC emitting part of the BLR, and high density. A robust lower limit to the density, n_H ∼ 10¹¹·⁵ cm⁻³, has been obtained from the analysis of the Caii triplet emission (Matsuoka et al. 2007; Panda et al. 2020a). Less constrained are the physical conditions of the BLUE emission. Apart from Civ/Hβ ≫ 1, and Lyα/Hβ and Civ/Ciii] also ≫ 1, few constraints exist on density and column density. This result hardly comes as a surprise considering the difference in dynamical status associated with the two components. The BC is expected to be emitted in a region of high column density, log N_c ≳ 23 [cm⁻²], not least because radiation forces are proportional to the inverse of N_c (Netzer & Marziani 2010). Following Netzer & Marziani (2010), we wrote the equation of motion for a gas cloud under the combined effect of gravitational and radiation forces, and showed that the acceleration term due to radiation is inversely proportional to N_c (see also Ferland et al. 2009); this high-N_c region is expected to be relatively stable (at rest frame, with no sign of systematic, large shifts in Population A) and presumably devoid of low-density gas (considering the weakness of Ciii]; Negrete et al. 2012). The same cannot be assumed for BLUE. BLUE is associated with a high radial velocity outflow, probably with the outflowing streams creating BAL features when intercepted by the line of sight (e.g., Elvis 2000). Here we consider log U = −2.5, log n_H = 12 (−2.5, 12), and log U = 0, log n_H = 9 (0, 9) as representative of the low- and high-ionization emitting gas. Fig.
2 illustrates the behavior of Civ/Hβ, Heiiλ1640/Hβ and Civ/Heiiλ1640 in the high- and low-ionization cases as a function of metallicity. The Civ intensity with respect to Hβ has a steep drop around Z ≈ 1 Z⊙, after a steady increase for sub-solar Z. The Heiiλ1640/Hβ ratio decreases steadily, with a steepening around the solar value. Physically, this behavior is due to the high value of the ionization parameter (assumed constant), while the electron temperature decreases with metallicity, implying a much lower collisional excitation rate for the Civ production. The dominant effect for the Heiiλ1640 decrease is likely the "ionization competition" between the Civ and Heiiλ1640 parent ionic species (Hamann & Ferland 1999). As a consequence, the ratio Civ/Heiiλ1640 has a non-monotonic behavior, with a local maximum around solar metallicity. At low ionization and high density, the behavior is more regular, as the steady increase in Civ/Hβ is followed by a saturation to a maximum Civ/Hβ. The Heiiλ1640/Hβ ratio is constant up to solar, and steadily decreases above solar, where the ionization competition with triply ionized carbon sets in. The result is a smooth, steady increase of the Civ/Heiiλ1640 ratio. Fig. 3 shows the behavior of the other intensity ratios used as metallicity diagnostics, for BLUE and BC. (Siiv+Oiv])/Civ and (Siiv+Oiv])/Heiiλ1640 saturate above 100 Z⊙. Only around Z ∼ 10 Z⊙ are values Civ/(Siiv+Oiv]) ≲ 1 possible, but the behavior is not monotonic and the ratio rises again at Z ≳ 30 Z⊙, with the unpleasant consequence that a ratio Civ/(Siiv+Oiv]) ≈ 1.6 might imply 10 Z⊙ as well as 1000 Z⊙. The ratios usable for the BC also show a regular behavior. The Civ/Aliii ratio remains almost constant up to Z ∼ 0.1 Z⊙, and then starts a regular decrease with increasing Z, due to the decrease of T_e with Z (Civ is affected more strongly than Aliii). Interestingly, Aliii/Heiiλ1640 shows the opposite trend, due to the steady decrease of the Heiiλ1640 prominence with Z. Especially interesting is the behavior of the ratio Aliii/Heiiλ1640, which is monotonic and very close to linear in the log-log diagram. As for the high-ionization case, values (Siiv+Oiv])/Civ ≳ 1 are possible only at very high metallicity, although the non-monotonic behavior (around the minimum at Z ≈ 200 Z⊙) complicates the interpretation of the observed emission line ratios. The ionization structure within the slab remains self-similar over a wide metallicity range, with the same systematic differences between the high- and low-ionization case (Fig. 4), consistent with the assumption of a constant ionization parameter.

Figure 2. Computed intensity ratios involving Civ and Heiiλ1640 as a function of metallicity, for physical parameters U and n_H fixed: (log U, log n_H) = (−1, 9) (top) and (log U, log n_H) = (−2.5, 12) (bottom). Columns from left to right show Civ/Hβ, Heiiλ1640/Hβ, Civ/Heiiλ1640.

As expected, the electron temperature decreases with metallicity, and the transition between the fully and partially ionized zones (FIZ and PIZ) occurs at smaller depth. In addition, close to the illuminated side of the cloud the electron temperature remains almost constant; the gas starts becoming colder before the transition from the FIZ to the PIZ. The depth at which T_e starts decreasing is well defined, and its value becomes lower with increasing Z (Fig. 4). The effect is present for both the low- and high-ionization cases, although it is more pronounced for the high-ionization one. Fig. 5 shows that an increase in metallicity affects T_e in the line emitting cloud. Fig.
5 reports the behavior of T_e at the illuminated face of the cloud (τ ∼ 0) and at the maximum τ (corresponding to N_c = 10²³ cm⁻², the side facing the observer) for the high-ionization and low-ionization cases. T_e monotonically decreases as a function of metallicity. The difference between the two cloud faces is almost constant for the low-ionization case, with δ log T_e ≈ 0.5 dex, while it increases for the high-ionization case, reaching δ log T_e ≈ 0.75 dex at the highest Z value considered, 10³ Z⊙.

Immediate Results

The observational results of our analysis involve the measurements of the intensities of the line BC and BLUE components separately. The rest-frame spectra with the continuum placements, and the fits to the blends of the spectra, are shown in Appendix A. Table 2 reports the measurements for the λ1900 blend. The columns list the SDSS identification code, the FWHM (in units of km s⁻¹), equivalent width and flux of Aliii (the sum of the doublet lines, in units of Å and 10⁻¹⁴ erg s⁻¹ cm⁻², respectively), the FWHM and flux of Ciii], and the flux of Siiii] (its FWHM is assumed equal to the one of the single Aliii lines). Similarly, Table 3 reports the parameters of the Civ blend: the equivalent width, FWHM and flux of the Civ BC, the flux of the Civ blueshifted component, as well as the fluxes of the BC and BLUE of Heiiλ1640. FWHM values are reported, but values ≳ 5000 km s⁻¹ especially should be considered highly uncertain. There is the concrete possibility of an additional broadening (∼ 10% of the observed FWHM) associated with non-virial motions for the Aliii line (del Olmo et al., in preparation). The fluxes of the BC and of the BLUE of Siiv and Oiv] are reported in Table 4. Intensity ratios with uncertainties are reported in Table 5. The last row lists the median values of the ratios with their semi-interquartile ranges (SIQR).

Figure 3. Behavior of the intensity ratios employed in this work (with the exception of Civ/Heiiλ1640, shown in the previous Figure), as a function of metallicity, for physical parameters U and n_H fixed: (log U, log n_H) = (−1, 9) and (log U, log n_H) = (−2.5, 12). Top panels, from left to right: Civ/(Siiv+Oiv]), (Siiv+Oiv])/Heiiλ1640, Civ/Aliii. Bottom panels, from left to right: Aliii/Heiiλ1640, Civ/(Siiv+Oiv]), (Siiv+Oiv])/Heiiλ1640.
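The summary statistics used in Tables 2-8 (medians with SIQR) and the triangular-distribution variance of Sect. 3.4.1 can be sketched in a few lines; the numerical values below are placeholders, not measured sample ratios:

```python
import numpy as np

def siqr(values):
    """Semi-interquartile range, (Q3 - Q1) / 2."""
    q1, q3 = np.percentile(values, [25.0, 75.0])
    return 0.5 * (q3 - q1)

def triangular_variance(dx_plus, dx_minus):
    """Variance of a triangular pdf peaked at the best-fit value, with support
    [best - dx_minus, best + dx_plus] (D'Agostini 2003)."""
    return (dx_plus**2 + dx_minus**2 + dx_plus * dx_minus) / 18.0

ratios = np.array([4.1, 4.4, 3.9, 5.0, 4.6, 4.3, 4.8])
print(f"median = {np.median(ratios):.2f}, SIQR = {siqr(ratios):.2f}")
print(f"sigma  = {np.sqrt(triangular_variance(0.6, 0.3)):.2f}")  # asymmetric errors
```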
4.1.1. Identification of xA sources and of "intruders"

Figure 6 shows that the majority of the sources meet both UV selection criteria, and should be considered xA quasars. The median value of Aliii/Siiii] (last row of Table 5) implies that Aliii is strong relative to Siiii]. Also, Siiii] is stronger than Ciii]. Both selection criteria are satisfied by the median ratios. Only one source (SDSS J084525.84+072222.3) shows a Ciii]/Siiii] significantly larger than 1. This quasar is however confirmed as an xA by its very large Aliii/Siiii], by the blueshift of Civ, and by the prominent λ1400 blend, comparable to the Civ emission. The lines in the spectrum of SDSS J084525.84+072222.3 are broad, and any Ciii] emission is heavily blended with Feiii emission. The Ciii] value should be considered an upper limit. Three outlying/borderline data points (in orange) in Fig. 6 have a ratio Ciii]/Siiii] ∼ 1, and an Aliii/Siiii] consistent with the selection criteria within the uncertainties, but other criteria support their classification as xA. The borderline sources will be further discussed in Section 4.3. In conclusion, all the 13 sources of the present sample save one should be considered bona-fide xA sources. It is intriguing that the intensity ratios Ciii]/Siiii] and Aliii/Siiii] are apparently anti-correlated in Figure 6, if we exclude the two outlying points. Excluding the two outlying data points, the Spearman rank correlation coefficient is ρ ≈ 0.8, which implies a 4σ significance for a correlation, but the correlation coefficient between the two ratios for the full sample is much lower. Given the small number of sources, a larger sample is needed to confirm the trend.
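A Spearman rank test of this kind is a one-liner with scipy; the eleven value pairs below are placeholders for the outlier-cleaned sample, not the measured ratios:

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of the Spearman test for the Ciii]/Siiii] vs Aliii/Siiii] trend.
r_c3_si = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30])
r_al_si = np.array([0.55, 0.60, 0.58, 0.75, 0.90, 0.85, 1.00, 1.10, 1.05, 1.25, 1.30])

rho, pval = spearmanr(r_c3_si, r_al_si)
print(f"rho = {rho:.2f}, p = {pval:.2e}")   # strong anti-correlation in this toy case
```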
Figure 5. Electron temperature T_e as a function of metallicity Z, for physical parameters U and n_H fixed: (log U, log n_H) = (−1, 9) (representative of BLUE and of the high-ionization case; blue and cyan) and (log U, log n_H) = (−2.5, 12) (representative of the low-ionization BLR; red and orange). Blue and red refer to the first zone of the CLOUDY computation, i.e., to the illuminated surface of the clouds; cyan and orange, to the side of the cloud farther from the continuum source, i.e., facing the observer.

Fig. 7 shows the distribution of the diagnostic intensity ratios Civ/Heiiλ1640, Siiv/Heiiλ1640, and Aliii/Heiiλ1640 for the BC. The lower panels of Fig. 7 show the results for individual sources. The distribution of the data points is relatively well behaved, with the individual ratios showing a small scatter around their median values. In the histogram, we see a tail made up of 4-5 objects suggesting systematically higher values. In particular, at least two objects (SDSS J102606.67+011459.0 and SDSS J085856.00+015219.4) show systematically higher ratios, with Civ/Heiiλ1640 ≈ 10 and Aliii/Heiiλ1640 ≈ 4.

Figure 6. Relation between the intensity ratios Aliiiλ1860/Siiii]λ1892 and Ciii]λ1909/Siiii]λ1892. The gray area corresponds to the parameter space occupied by the xA sources. Borderline sources are in orange color.

Both objects show extreme Civ blueshifts, and SDSS J102606.67+011459.0 shows the highest Aliii/Siiii] ratio in the sample. Since the three ratios are, for fixed physical conditions, proportional to metallicity, we expect an overall consistency in their behavior, i.e., if one ratio is higher than the median for one object, the other intensity ratios should also be higher. The lower diagrams are helpful to identify sources for which only one intensity ratio deviates significantly from the rest of the sample. A case in point is SDSS J082936.30+080140.6, whose ratio Aliii/Heiiλ1640 ≈ 8 is one of the highest values, but whose Civ/Heiiλ1640 and Siiv/Heiiλ1640 are slightly below the median values. The fits of Appendix A show that this object is indeed extreme in Aliii emission. The Civ and λ1400 blends are dominated by the BLUE excess, and an estimate of the Civ and Siiv BC is very difficult, as it accounts for a small fraction of the line emission. The Heiiλ1640 emission is almost undetectable, especially in correspondence of the rest frame. Ideally, results on these objects should be discussed on a case-by-case basis.

BLUE intensity ratios

Similar considerations apply to the BLUE intensity ratios. We see systematic trends in Figure 8 that imply consistency of the ratios for most sources, although the uncertainties are larger, especially for Civ/Heiiλ1640. The Civ/(Siiv+Oiv]) values are systematically higher than for the BC, while Civ/Heiiλ1640 is slightly higher (median BLUE 5.8 vs. median BC 4.38). The ratio (Siiv+Oiv])/Heiiλ1640 is much lower than for the BC (median BLUE 2.09 vs. median BC 6.27). The difference might be in part explained by the difficulty of deblending Siiv from Oiv], and by the frequent occurrence of absorptions affecting the blue side of the blend. Both factors may conspire to depress BLUE. The lower diagrams of Fig. 8 are again helpful to identify sources for which the intensity ratios deviate significantly from the rest of the sample. SDSS J102606.67+011459.0 shows a strong enhancement of Civ/Heiiλ1640 and Siiv+Oiv], confirming the trend seen in its BC. Fig. 9 shows a matrix of correlation coefficients for all the diagnostic ratios considered in this work. The 2σ confidence level of significance for the Spearman rank correlation coefficient for 13 objects is achieved for ρ ≈ 0.54. The highest degree of correlation is found between the ratios Civ/Heiiλ1640 and Civ/Siiv (0.87), and between Civ/Heiiλ1640 and Siiv/Heiiλ1640 (0.81). A milder degree of correlation is found between Aliii/Heiiλ1640 and Civ/Heiiλ1640 (0.23) and Siiv/Heiiλ1640 (0.44). These results imply that Siiv and Civ are likely affected in a related way by a single parameter. The main parameter is expected to be T_e, and hence Z (Sect. 3.5.1). The Aliii line (normalized to the Heiiλ1640 flux) shows much lower values of the correlation coefficient. While Siiv and Civ are basically the same line, the Aliii formation may not be exclusively collisional, as shown by the results of the CLOUDY simulations. Therefore there could be a different response to individual U, n_H and optical depth variations. The prominence of Ciii] with respect to Siiii] decreases with Siiv/Heiiλ1640 BLUE, Civ/Heiiλ1640, and Siiv/Heiiλ1640 BLUE, and increases with Civ/(Siiv+Oiv]). Apparently the Ciii]/Siiii] ratio is strongly affected by an increase of the Z-sensitive ratios.

We propagated the diagnostic intensity ratios measured on the BC and BLUE components, with their lower and upper uncertainties, through the relations between the ratios and Z in Figures 2 and 3, for the fixed physical conditions assumed in the low- and high-ionization regions. The results are reported in Table 6 and Table 7 for the BC and for the blueshifted component, respectively. The last row reports the median values of the individual source Z estimates with the sample SIQR. The distributions are shown in Figs. 10 and 11, along with a graphical presentation of each source and its associated uncertainties. Table 6 and Table 7 make it possible to quantify the systematic differences that are apparent in Figs. 10 and 11. The agreement between the various estimators is good on average (the medians scatter around log Z ≈ 1 by less than 0.2 dex). However, there are systematic differences between the Z obtained from the various diagnostic ratios. Siiv and Aliii over Heiiλ1640 apparently overestimate Z by a factor ≈ 2 with respect to Civ/Heiiλ1640. The out-of-scale values of Civ/Siiv and Civ/Aliii may suggest that a metallicity scaling according to the solar proportions may not be strictly correct (Sect. 5.6).
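The propagation of a measured ratio (with asymmetric errors) into a Z estimate through a monotonic model relation can be sketched as a simple interpolation; the grid below is an invented stand-in for the CLOUDY ratio-vs-Z curve at the fixed (U, n_H), not actual simulation output:

```python
import numpy as np

# Invert a monotonic model ratio-Z relation by linear interpolation.
log_z_grid = np.array([-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0])       # log Z/Zsun
log_ratio_grid = np.array([-0.9, -0.6, -0.2, 0.1, 0.5, 0.8, 1.1])  # e.g. log Aliii/Heii

def log_z_from_ratio(log_ratio):
    return np.interp(log_ratio, log_ratio_grid, log_z_grid)

obs, err_lo, err_hi = 0.6, 0.15, 0.10       # placeholder measurement
zc = log_z_from_ratio(obs)
zlo = log_z_from_ratio(obs - err_lo)
zhi = log_z_from_ratio(obs + err_hi)
print(f"log Z/Zsun = {zc:.2f} (-{zc - zlo:.2f} / +{zhi - zc:.2f})")
```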
In the case of BLUE, several estimates from Civ/Heiiλ1640 strongly deviate from the ones obtained with the other ratios, due to the non-monotonic behavior of the relation between Z and Civ/Heiiλ1640, right in the range of metallicity that is expected.

Figure 9. Correlation between diagnostic ratios.

The median values of all three ratios consistently suggest a high metallicity, with a firm lower limit Z ≈ 10 Z⊙, and in the range 10 Z⊙ ≲ Z ≲ 100 Z⊙. There is apparently a systematic difference between BC and BLUE, in the sense that the Z derived from the BC is systematically higher than the Z from BLUE. The difference is small in the case of Civ/Heiiλ1640, but it is significant in the case of (Siiv+Oiv])/Heiiλ1640, where the Z values from BLUE are a factor of 10 lower. We have stressed earlier that there are often absorptions affecting the BLUE of (Siiv+Oiv])/Heiiλ1640. Absorptions and the blending with the Ciiλ1332 and Siiv BC lines make it difficult to properly define the continuum underlying the λ1400 blend at negative radial velocities. We think that the Siiv+Oiv] BLUE intensity estimate is more of a lower limit. Another explanation might be related to the assumption of a constant density and U for all sources. While there are observational constraints supporting this condition for the BC (Panda et al. 2018, 2020b), there are no strong clues to the BLUE properties, save a high ionization degree. Table 8 reports the Z estimates for the BC, BLUE, and a combination of BC and BLUE for each individual object. The values reported are the median values of the individual objects' estimates from the different ratios. For the BC, the three ratios of Table 6 were always used. The last column of Table 8 reports the number of ratios used from the BLUE component. Here the Z value for each object is computed by vetting the ratios according to concordance. If the discordance is not due to physical grounds, but rather to instrumental problems (for example, contamination by absorption lines, or the non-linear dependence of some ratios on Z), a proper strategy is to use estimators such as the median that eliminate discordant values even for small sample sizes (n ≥ 3). Measuring medians and SIQRs is an efficient way to deal with the measurements of large samples of objects. All estimates log Z ≲ 0 were excluded, as either the product of heavy absorptions ((Siiv+Oiv])/Heiiλ1640) or of difficulties in relating the ratio (Civ/Heiiλ1640) to Z; apart from J211651.48+044123.7, the upper uncertainty of the negative estimates is so large that Z is actually unconstrained. The difference between BLUE and BC is even more evident: the median (last row) indicates a factor ≈ 6 difference between BLUE and BC. The BC suggests a median Z ≈ 60 Z⊙, while the BLUE Z ≈ 10 Z⊙.

Z for individual sources for fixed U, n_H

The assumption that the wind and disk components have the same Z in each object is a reasonable one, with the caveats mentioned in Sect. 5.6. Therefore the two estimates, for BLUE and BC, could be considered two independent estimators of Z. If the two estimates are combined for each individual object, 10 Z⊙ ≲ Z ≲ 100 Z⊙, with a median value of Z ≈ 20 Z⊙.

Estimates of Z relaxing the constraints on U and n_H

The Z estimates for the BC are mainly based on the three ratios involving the Heiiλ1640 normalization. To gain a global, bird's-eye view of the Z dependence on the physical parameters, we assigned a score from 0 to 3 and considered the domain of the parameter space U, n_H, Z for which there is agreement with all the three diagnostic ratios. The left panels of Fig. 12 and of Fig.
13 show the 3D space U, n_H, Z, where each point in space corresponds to an element of the grid of CLOUDY simulations in the parameter space compatible with all three observed ratios within the uncertainties. The case shown in Fig. 12 and in Fig. 13 is the one with the median values of the sample objects. Similar considerations can be made if we consider the χ² behavior. We compute the χ² in the following form, to identify the value of the metallicity for the median values of the diagnostic ratios and for the diagnostic ratios of the individual sources:

χ²(U, n_H, Z) = Σ_ci w_ci [(R_ci,obs − R_ci,mod) / δR_ci]²,

where the summation is done over the available diagnostic ratios, and the χ² is computed with respect to the results of the CLOUDY simulations as a function of U, n_H, and Z (subscript "mod"). Weights w_ci = 1 were assigned to Civ/Heiiλ1640, Siiv/Heiiλ1640, and Aliii/Heiiλ1640; w_ci = 0 or 0.5 were assigned to Civ/Aliii and Civ/Siiv. For BLUE, the three diagnostic ratios were all assigned w_ci = 1. The distribution of the data points for the median ± SIQR values in Fig. 12 is constrained in a relatively narrow range of U, n_H, Z, at very high density, low ionization, and high metallicity. Within the limits in U, n_H, the distribution of Z is flat and thin, around Z ∼ 100 Z⊙. This implies that, for a change of U and n_H within the limits allowed by the data, the estimate of Z is stable and independent of U and n_H. Table 8 reports the individual Z estimates and the SIQR for the sources in the sample (the last row is the median). By far less constrained is the situation for BLUE. Fig. 13 shows the parameter spaces for the three ratios and the χ² distributions. The spread in ionization and density is very large, although the concentration of data points is higher in the case of low n_H (log n_H ∼ 8-9 [cm⁻³]) and high ionization (log U ∼ 0). At any rate, the spread of the data points indicates that solutions at low ionization and high density are possible. The results for individual sources tend to disfavor this scenario for the wide majority of the objects, but the properties of the gas emitting the BLUE are less constrained than for the BC. What is missing for BLUE is especially a firm diagnostic of density, which in the case of the BC is provided mainly by the ratio Aliii/Heiiλ1640. Results on Z are however as stable as for the BC, even if the dispersion is large, and suggest values in the range 10 ≲ Z ≲ 100 Z⊙. Summing up, all meaningful estimators converge toward high Z values, Z ≳ 10 Z⊙. Ratios Civ/Siiv significantly less than 1 are not predicted in the parameter space. Siiv/Heiiλ1640 seems to give the largest estimates of Z. Also the high Aliii/Civ requires extremely high values of Z. A conclusion has to be tentative, considering the possible systematic errors affecting the estimates of the Civ and Siiv intensities: for Civ, the BC in the most extreme cases is often buried under an overwhelming BLUE; a fit does not provide a reliable estimate of the BC (by far the fainter component) but provides a reliable BLUE intensity; for Siiv we may overestimate the intensity due to "cancellation" of the BLUE by absorptions. This said, the present data are consistent with the possibility of a selective enhancement of Al and Si, as already considered by Negrete et al. (2012). The issue will be briefly discussed in Sect. 5. At any rate, the absence of correlation between BLUE and BC parameters (Fig.
9), the difference in the diagnostic ratios and the differences in inferred Z, as well as the results for individual sources described below, justify the approach followed in the paper to maintain a separation between BLUE and BC. The meaning of possible systematic differences between the BC and BLUE is further discussed in §5.

Individual sources

The best n_H, U, and Z for each object have been obtained by minimizing the χ² as defined in Eq. 8, and they are reported in Table 9. The last two rows list the minimum χ² values for the median (with the SIQR of the sample) and for the median of the medians reported for individual sources. In other words, the choice of the best physical conditions was obtained by minimizing the sum of the deviations between the model predictions and the observed diagnostic ratios. The obtained values of Z cover the range 20 ≲ Z ≲ 500 Z⊙, with 10 out of 13 sources with 50 ≲ Z ≲ 200 Z⊙, and the medians of the intensity ratios yielding Z ∼ 100 Z⊙. There is some spread in the ionization parameter values, −3.75 ≲ log U ≲ −1.75, but in all cases indicating a low or very low ionization level. The hydrogen density is very high: in only one case is log n_H ≈ 11.75, and in several cases n_H reaches 10¹⁴ cm⁻³. The median values are Z = 100 Z⊙, log U = −2.5, log n_H = 13, therefore validating the original assumption of log U = −2.5, log n_H = 12 for fixed physical conditions. The results for individual sources confirm the scenario of Fig. 12 for the wide majority of the sample sources. The higher n_H values are consistent with recent inferences for the low-ionization BLR derived by Temple et al. (2020), based on the Feiii UV emission, which is especially prominent in the UV spectra of xA quasars (Martínez-Aldama et al. 2018). It is interesting to note that borderline objects (Aliii/Siiii] ≈ 0.5, Ciii]/Siiii] ≈ 1) show higher values of the ionization parameter (log U ≈ −1.75), and also the lowest values of Z for the BC (≈ 20 Z⊙). The inferences are less clear from BLUE (Table 10). In some cases, the permitted volume in the 3D parameter space for individual sources covers a broad range in U and n_H, as for the median. In some others the volume is more limited, but n_H can be very high, log n_H ∼ 14. Most sources show a high degree of ionization, −1 ≲ log U ≲ 0, and only in one case (SDSS J102606.67+011459.0) is there apparently a low-ionization solution with U comparable to that of the low-ionization BLR. The median suggests log n_H ∼ 8.25, log U ∼ −0.5, close to the values that we assumed for the fixed (U, n_H) approach. The results on metallicity suggest in most cases Z ≳ 20 Z⊙, even if the Z for the median is Z = 10 Z⊙. However, within 3σ from the minimum χ², Z values up to 100 Z⊙ are also possible. In summary, the low-ionization BLR of xA sources seems to be consistently characterized by extremely low ionization, high density and very high metallicity, under the assumption that Z scales with the solar chemical composition. Diagnostics on BLUE are less constraining, and the measurements are more difficult. The zeroth-order results are however again consistent with high metallicity, Z ≳ 10 Z⊙. Inferences on Z are in agreement with the case for fixed physical conditions, as the Z determinations are weakly dependent on U, n_H.

Figure 12. The parameter space n_H, U, Z. Left: data points in 3D space are elements in the grid of the parameter space that are in agreement with the three main diagnostic ratios used for the BC, within the SIQR of the median estimate from Table 5. The individual contour was smoothed with a Gaussian kernel. Right: data points in the parameter space selected for not being different from χ²_min within a confidence level of 3σ.

Figure 13. The parameter space n_H, U, Z.
Left: data points in 3D space are elements in the grid of the parameter space that are in agreement with the three main diagnostic ratios used for BLUE, within the SIQR of the median estimate from Table 5. The individual contour was smoothed with a Gaussian kernel. Right: data points in the parameter space selected for not being different from χ²_min within a confidence level of 3σ.

3. A first estimate of metallicity can be obtained from the assumption that the low-ionization BLR associated with the BC and the wind/outflow component associated with BLUE can be described by similar physical conditions in different objects. Several diagnostic ratios can be associated with the intensity ratios predicted by an array of photoionization simulations, namely

• for the BC: Aliii/Heiiλ1640, Civ/Heiiλ1640, Siiv/Heiiλ1640, assuming (log U, log n_H) = (−2.5, 12) or (log U, log n_H) = (−2.5, 13);

• for the BLUE: Civ/Heiiλ1640, (Siiv+Oiv])/Heiiλ1640, Civ/(Siiv+Oiv]), assuming (log U, log n_H) = (0, 9).

4. Estimates can be refined for individual sources by relaxing the constant (log U, log n_H) assumptions. Tight constraints can be obtained for the BC. The BLUE is more problematic, because of both observational difficulties and the absence of unambiguous diagnostics.

Our method relies on ratios involving Heiiλ1640 that have not been much considered in the previous literature. In addition, we have kept the SED, the turbulence (equal to 0), and the column density (N_c = 10²³) fixed in the simulations. The role of turbulence is further discussed in Sect. 5.5, and is found not to be relevant, unlike in the case of Feii emission in the optical spectral range, where the effects of self- and Lyα-fluorescence are important (e.g., Verner et al. 1999; Panda et al. 2018), while the N_c effect is most likely negligible. Extension of the method to the full Population A is a likely possibility, since we do not expect a very strong effect of the SED on the metallicity estimate, as long as the SED has a prominent big blue bump, as seems to be the case for Population A. The role of the SED is likely important if the method has to be extended to sources of Pop. B along the main sequence. Intensity ratios involving Heiiλ1640 are difficult to measure in the xA spectra, but may be more accessible for Population B spectra. Ferland et al. (2020) have shown significant differences in the SED as a function of L/L_Edd, with a much flatter SED at low L/L_Edd. The extension to Pop. B would therefore require a new dedicated array of simulations.

Accretion parameters of sample sources

The bolometric luminosity has been computed assuming a flat ΛCDM cosmological model with Ω_Λ = 0.7, Ω_m = 0.3, and H_0 = 70 km s⁻¹ Mpc⁻¹. We decided to use Aliii as a virial broadening estimator for computing M_BH. Our estimates adopt two different scaling laws: (1) the scaling law of Vestergaard & Peterson (2006) for Civ, and (2) a second, unpublished one based on Aliii (del Olmo et al., in preparation). Eddington ratios have been obtained using the Eddington luminosity L_Edd ≈ 1.3 × 10³⁸ (M_BH/M⊙) erg s⁻¹ (a minimal numerical sketch of these estimates is given below). The luminosity range of the sample is very limited, less than a factor 3, 46.8 ≲ log L ≲ 47.3, in line with the requirement of similar redshift and high flux values.
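The sketch below shows such an estimate for one illustrative source; the input flux and FWHM are placeholders, and of the two scaling laws quoted above only the published Vestergaard & Peterson (2006) Civ relation is used, since the Aliii-based one is unpublished (astropy is used here for the luminosity distance, whereas the paper quotes the Sulentic et al. 2006 formula):

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

z = 2.3
f_1350 = 3.0e-16          # specific flux at 1350 Å, erg s^-1 cm^-2 Å^-1 (placeholder)
fwhm_civ = 4000.0         # km s^-1 (placeholder)

d_l = cosmo.luminosity_distance(z).to("cm").value
lam_L_1350 = 4.0 * np.pi * d_l**2 * f_1350 * 1350.0      # λLλ(1350), erg s^-1
L_bol = 3.5 * lam_L_1350                                 # B.C. = 3.5 (Richards et al. 2006)

log_mbh = 6.66 + 2.0 * np.log10(fwhm_civ / 1000.0) \
          + 0.53 * np.log10(lam_L_1350 / 1e44)           # VP06 Civ scaling law
L_edd = 1.3e38 * 10**log_mbh
print(f"log L_bol = {np.log10(L_bol):.2f}, log M_BH = {log_mbh:.2f}, "
      f"L/L_Edd = {L_bol / L_edd:.2f}")
```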
Correspondingly, the M_BH and the Eddington ratio are constrained in the ranges 8.8 ≲ log M_BH ≲ 9.5 and −0.55 ≲ log L/L_Edd ≲ 0.18, respectively. The M_BH sample dispersion is relatively small, with log M_BH ∼ 9.4 ± 0.2 [M⊙]. The scatter in M_BH and L/L_Edd is reduced to ≈ 0.1 dex if we exclude one object with the lowest M_BH and highest L/L_Edd. Applying a small correction (10%) to the FWHM to account for an excess broadening in Aliii due to non-virial motions would decrease the M_BH by 0.1 dex (as found by Negrete et al. 2018 for Hβ), and increase L/L_Edd correspondingly. If this correction is applied, the median L/L_Edd is ≈ 0.6. Using the Civ BC FWHM as a virial broadening estimator also decreases the M_BH median estimate by 0.1 dex. The accretion parameters are consistent with extreme quasars of Population A at high mass and luminosity; they are mainly at the low L/L_Edd end of Sample 3 (based on M_BH estimates from Aliii). The small dispersion in the physical properties of the present sample (0.2 dex) focuses the analysis on properties that may differ for fixed accretion parameters, and for a fixed ratio of radiation and gravitational forces.

Correlation between diagnostic ratios and AGN physical properties

Considering the small dispersion in M_BH, L/L_Edd, and bolometric luminosity, it is hardly surprising that none of the ratios utilized in this paper is significantly correlated with the accretion parameters. The highest degree of correlation is seen between L/L_Edd and Civ/Aliii, but still below the minimum ρ needed for a statistically significant correlation. In Figure 14 we present the correlation between metallicity and diagnostic ratios, along with the log of the bolometric luminosity, the log of the black hole mass, and the log of the Eddington ratio, for BC and BLUE. The strongest correlations between Z_BC and intensity ratios are with Siiv/Heiiλ1640 (0.81) and Aliii/Heiiλ1640 (0.76). For Z_BLUE, Siiv/Heiiλ1640 (BLUE components) correlates strongly (0.74). Z_BLUE correlates with the physical parameters, whereas Z_BC rather anti-correlates with them, but not at a statistically significant level. Considering the limited range in luminosity and M_BH, and the small sample size, these trends remain to be confirmed.

The metallicity values we derive are very high among quasars analyzed with similar techniques (e.g., Nagao et al. 2006b; Shin et al. 2013; Sulentic et al. 2014): as mentioned, typical values for high-z quasars are around 5 Z⊙. This value could be taken as a reference over a broad range of redshift, and also for the sample considered in the present paper, as there is no evidence of metallicity evolution in the BLR up to z ≈ 7.5 (e.g., Nagao et al. 2006b; Juarez et al. 2009; Xu et al. 2012; Onoue et al. 2020). This is in line with the results of Negrete et al. (2012), who found very similar intensity ratios for the prototypical NLSy1 and xA source I Zw 1, of relatively low luminosity at low z, and a luminous xA object at redshift z ≈ 3.23. Even if these authors did not derive Z from their data, the I Zw 1 intensity ratios reported in their paper indicate a very high metallicity, consistent with the values derived for the present sample. More than inferences on the global enhancement of Z in the host galaxies, the absence of evolution points toward a circumnuclear source of metal enrichment, ultimately associated with a starburst (e.g., Collin & Zahn 1999a; Xu et al. 2012). A detailed comparison with previous work on the dependence of Z on accretion parameters is hampered by two difficulties.
(1) Before comparing the intensity ratios of this paper with previous work, we should consider that other authors do not distinguish between BLUE and BC when computing the ratios. This has the unfortunate implication that in some cases, such as Aliii/Civ, the ratio is taken between lines emitted predominantly in different regions (virialized and wind), presumably in very different physical conditions. Not distinguishing between BC and BLUE yields Civ/Aliii ∼ 10 ≫ 1. (2) Methods of M_BH estimation differ. For example, Matsuoka et al. (2011) use the Vestergaard & Peterson (2006) scaling laws without any correction to the line width of Civ. This might easily imply overestimates of the M_BH by a factor of 5-10 (Sulentic et al. 2007). More properly, the analysis by Shemmer et al. (2004) used Hβ from optical and IR observations to compute M_BH and to examine the dependence of metallicity on accretion parameters. These authors found the strongest dependence on the Eddington ratio (with respect to luminosity and mass) over 6 orders of magnitude in luminosity, suggesting that luminosity and black hole mass are far less relevant (as also found, for example, by Shin et al. 2013).

A posteriori analysis of Nv strength

As stressed in several works (e.g., Wang et al. 2012a; Sulentic et al. 2014), the intensity of the Nv line is difficult to estimate due to blending with Lyα, and it is strongly affected by absorption. We model Lyα and Nv using the same criteria as in the Siiv modelling. However, in this work we give only a qualitative judgement of the Nv strength for our sample, because of large uncertainties due to the effects mentioned above. For sources in the highest metallicity range obtained from the BC ratios, the Nv broad-component intensity is slightly higher than, or comparable to, that of the Lyα broad component. Blue components dominate both lines. We also notice a significantly higher intensity of the blue component in comparison to the broad one in the Siiv and Civ blends. An example of a source of this type is shown in the upper half of Fig. 15. On the contrary, sources with the lowest metallicities obtained from the BC intensity ratios show a Lyα BC intensity higher than in Nv, and the BC is stronger than BLUE. We see the same behaviour of a strong broad component in the Siiv and Civ ranges. An example of sources of this type is shown in the lower panels of Fig. 15. Shin et al. (2013) compared Siiv+Oiv] and Nv fluxes and found a strong, significant correlation between them (ρ = 0.75). The Nv over Heiiλ1640 or Hβ ratio should be a strong tracer of Z, as it is sensitive to secondary Z production and hence proportional to Z² (Hamann & Ferland 1999). Therefore, we conclude that the Nv emission is extremely strong, and consistent with a very high metal content. A much more thorough investigation of the quasar absorption/emission system is needed to include Nv as a Z estimator. This is deferred to further work.

Role of column density

The column density assumed in the present paper is log N_c = 23 [cm^−2]. With this value the emitting clouds in the low-ionization conditions remain optically thick to the Lyman continuum for most of the geometrical depth of the cloud. Even if the value log N_c = 23 may appear as a lower limit for the low-ionization BLR, as higher values are required to explain low-ionization emission such as Caii and Feii (Panda 2020; Panda et al. 2020a), the emission of the intermediate- and high-ionization region is confined within the fully ionized part of the line-emitting gas, whose extension is already much less than the geometrical depth of the gas slab for log N_c = 23.
Therefore, we expect no or negligible effect from an increase in the column density for the low-ionization part of the BLR. For BLUE, the situation is radically different, and we have no actual strong constraints on the column density. Most emission may come from a clumpy outflow (Matthews 2016, and references therein), and therefore assuming a constant N_c may not be appropriate. Considering the poor constraints that we are able to obtain, we leave the issue to a future investigation.

Role of turbulence

The results presented in this work refer to the case in which no significant microturbulence is included in the CLOUDY computations. Fig. 16 shows that at low ionization the effect is relatively modest, and that in the high-ionization case appropriate for BLUE the effect is very modest. Less obvious is the behavior at low ionization for R_FeII: it shows an increase for a turbulence of 10 km s^−1, but then a surprising drop at larger values of the microturbulence. While the increase can be explained by an increase in the number of transitions for which fluorescence is possible, the decrease is not of obvious interpretation. It has, however, been confirmed by the independent set of simulations of Panda et al. (2018, 2019), who used the more recent version of CLOUDY, C17.01 (Ferland et al. 2017).

Figure 16. Effects of turbulence on diagnostic line ratios, for Civ/Heiiλ1640 (blue) and Siiv/Heiiλ1640 (magenta), considering 5 Z⊙ (circles) and 20 Z⊙ (squares). The top panel assumes the low-ionization conditions appropriate for BC emission, the bottom one those for BLUE. In the top panel the green lines trace the same trends for the FeII blend at λ4570 over Hβ, i.e., R_FeII.

Metal segregation?

Metals are expected to be preferentially accelerated by resonance scattering (e.g., Proga 2007b; Risaliti & Elvis 2010). In principle, for a sufficiently large photon flux, the acceleration of metals by radiation pressure might become larger than the Coulomb friction, therefore causing a decoupling of the metals with respect to their parent plasma (Baskin & Laor 2012). This possibility has been explored in the context of the BALs, and broad absorption and emission components are expected to be related (Elvis 2000; Xu et al. 2020). The ionization parameter values there are, however, several orders of magnitude higher than the ones derived for the BLUE emission component. In addition, our Z estimates for the BLUE suggest, if anything, values lower than or equal to those for the BC, whose Z might be related more to the original chemical composition of the gas in the accretion material. However, we ascribe the systematic differences between BC and BLUE to uncertainties in the method and measurement, so that Z from BLUE and BC should be considered intrinsically equal. Considering that the most metal-rich stars, galaxies, and molecular clouds in the Universe do not exceed Z ≈ 5 Z⊙ (Maiolino & Mannucci 2019), circumnuclear star formation is needed for the chemical enrichment of the BLR gas (e.g., Collin & Zahn 1999a,b; Wang et al. 2011, 2012b). Star formation may occur in the self-gravitating, outer part of the disk. An alternative possibility is that a massive star could be formed inside the disk by accretion of disk gas (Cantiello et al. 2020).

Abundance pollution?

An implication of the scenarios involving circumnuclear or even nuclear star formation is that there could be an alteration of the relative abundances of elements with respect to the standard solar composition (Anders & Grevesse 1989; Grevesse & Sauval 1998). Some support is provided by the extreme Civ/Siiv and Civ/Aliii ratios
that may hint at a selective enhancement of Al with respect to C. As suggested by Negrete et al. (2012), core-collapse supernovae with very massive progenitors could be at the origin of a selective enhancement. Supernovae with progenitors of masses between 15 and 40 M⊙ show selective enhancements in their yields of Al and Si by factors of ≈ 100 and 10 relative to hydrogen with respect to solar (Chieffi & Limongi 2013). Since carbon is also increased by a factor of ∼10 with respect to solar, the [Al/C] is expected to be a factor of ∼10 above the solar value in supernova ejecta. The case for silicon is less clear, as the enhancement is of the same order of magnitude as the one expected for carbon. Pollution of the gas by supernovae may therefore lead to an estimate of Z higher than the actual one, if solar relative abundances are assumed. This possibility will be explored in future work (Garnica et al., in preparation).

Implications for quasar structure evolution

Metallicity and the outflow prominence of quasars were found to be highly correlated (Wang et al. 2012a; Shin et al. 2017). The implication of these results is that xA sources, which show the highest blueshifts (Sulentic et al. 2017; Vietri et al. 2018; Martinez-Aldama et al. 2018; Martínez-Aldama et al. 2018), should also be the most metal-rich. The xA sources should be at the top of the Z-outflow parameter correlation of Wang et al. (2012a), if Z ≳ 10 Z⊙. There is evidence of a metallicity correlation between the BLR and the NLR, as expected if the outflows on spatial scales of kpc originate in a disk wind. Zamanov et al. (2002) derived very small spatial scales at low luminosity. This provides additional support to the idea that xA sources, which at low z phenomenologically appear as Feii-strong NLSy1s, are relatively young sources. Their low [Oiii]λ5007 equivalent width implies a young age rather than orientation effects (Risaliti et al. 2011; Bisogni et al. 2017). The z ≈ 2 quasars of the present sample are radiating at relatively high L/L_Edd, although there are no examples of the extremes of xA sources showing blueshifted emission in Aliii as prominent as that of Civ (e.g., Martínez-Aldama et al. 2017). There is no evidence of heavy obscuration. They are certainly out of the obscured early evolution stage in which the accreting black hole is enveloped by gas and dust (see the sketch in D'Onofrio & Marziani 2018). The W(Civ) distribution covers the upper half of the one of Martínez-Aldama et al. (2018). There are no weak-lined quasars, following the definition of Diamond-Stanic et al. (2009). The xA sources of the present sample may have reached a sort of stable equilibrium between gravitational and radiation forces, made perhaps possible by the development of an optically thick, geometrically thick accretion disk and by its anisotropic radiation properties (e.g., Abramowicz et al. 1988; Szuszkiewicz et al. 1996; Sadowski et al. 2014). The median value of the peak displacement of the BLUE component is around ≈ 3500 km s^−1, and the centroid at half maximum is shifted by 5000 km s^−1. The extreme blueshifts in the metal lines imply outflows that may not remain bound to the potential well of the black hole and of the inner bulge of the host galaxies (e.g., Marziani et al. 2016b, and references therein).
The high metal content of the outflows, estimated by the present work to be in the range 10-100 Z⊙, implies that these sources are likely to be a major source of metal enrichment of the interstellar gas of the host galaxy and of the intergalactic medium. Using a standard estimate for the mass outflow rate Ṁ (Marziani et al. 2016b), Ṁ ≈ 15 L_CIV,45 v_5000 r_1pc^−1 n_9^−1 M⊙ yr^−1, we obtain an outflow rate of Ṁ ≈ 20 M⊙ yr^−1, assuming median values for the sources of our sample: a median outflow velocity from the peak of BLUE ≈ −3500 km s^−1, a median luminosity of the Civ BLUE (corrected for Galactic extinction) of 4.2 × 10^44 erg s^−1, a median radius of 5.9 × 10^17 cm from the Kaspi et al. (2007) radius-luminosity correlation for Civ, and n_9 = 1. For a duty cycle of ∼10^8 yr, the expelled mass of heavily enriched gas could be ∼10^9 M⊙.

CONCLUSION

The sources at the extreme end of Population A along the main sequence are defined by the prominence of their Feii emission and, precisely, by the selection criterion R_FeII ≳ 1 (Du et al. 2016b). Their properties as a class are scarcely known. Even if there has been a long history of studies focused on Feii-strong sources since Lipari et al. (1993) and Graham et al. (1996), their relevance to galactic and large-scale structure evolution is being reconsidered anew with the help of the quasar main sequence. This paper adds, to other aspects that were considered by previous investigations (for example, the very powerful outflows and the disjoint low- and high-ionization emitting regions, first suggested in earlier work), a quantitative analysis of the chemical composition of xA sources. The main aspects of the present investigation can be summarized as follows:

• We distinguish between two emission-line components most likely originating from regions in widely different physical conditions: a virialized low-ionization BLR, and a high-ionization region associated with a very strong blueshifted excess in the Civ emission line. This is the conditio sine qua non for meaningful Z estimates.

• The physical conditions in the low- and high-ionization regions were confirmed to be very different, with the low ionization at (log U, log n_H) ≈ (−2.75, 13) and the high ionization at (log U, log n_H) ≈ (−0.5, 8-9). The high-ionization region parameters are, however, poorly constrained.

• Using intensity ratios between the strongest metal lines and the Heiiλ1640 emission, we derive metallicity values in the range 10 ≲ Z ≲ 100 Z⊙, with the most likely values around several tens of times the solar metallicity.

• We find evidence of an overabundance of Al with respect to C. This result points toward possible pollution of the chemical composition of the broad-line emitting gas by supernova ejecta.

xA quasars are perhaps the only quasars whose ejecta are able to overcome the potential well of the black hole and of the host galaxy. Applying the method to large samples of quasars would make it possible to constrain the metal enrichment processes on a galactic scale. The present analysis relied heavily on the Heiiλ1640 line, which is of low equivalent width and has a flat, very broad profile (incidentally, we note that the low equivalent width is consistent with the high Z of the emitting regions). Therefore, a more precise analysis would require spectra of moderate dispersion but higher S/N ratio. A large part of the scatter and/or systematic difference for the various Z estimators is related to the difficulty of isolating faint and broad emission in relatively noisy spectra.
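As a numerical cross-check of the outflow-rate estimate given in the previous section, the following minimal sketch evaluates the quoted scaling with the stated median values; the variable names are ours, not the paper's.

```python
# Outflow-rate estimate from the scaling relation quoted above
# (Marziani et al. 2016b): Mdot ~ 15 L_CIV,45 v_5000 r_1pc^-1 n_9^-1 Msun/yr.
PC_CM = 3.086e18          # one parsec in cm

L_civ = 4.2e44            # median CIV BLUE luminosity [erg/s]
v_out = 3500.0            # median outflow velocity modulus [km/s]
r_blr = 5.9e17            # median radius from the CIV radius-luminosity relation [cm]
n_9 = 1.0                 # density in units of 1e9 cm^-3

mdot = 15.0 * (L_civ / 1e45) * (v_out / 5000.0) / (r_blr / PC_CM) / n_9
print(f"Mdot ~ {mdot:.0f} Msun/yr")   # ~23 Msun/yr, i.e. the ~20 quoted in the text

duty_cycle_yr = 1e8
print(f"Expelled mass ~ {mdot * duty_cycle_yr:.1e} Msun")  # ~2e9 Msun, i.e. ~1e9
```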
The results of the arrays of simulations as a function of n_H, U, and Z are shown below, for N_c = 10^23 cm^−2. The SED shape is the same for all simulations (table agn), which corresponds to the SED of Mathews & Ferland (1987). No turbulence was assumed.
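For readers wishing to reproduce a single grid point of such an array, a minimal sketch of a CLOUDY input deck matching the setup just described is given below. This is not the authors' actual deck: the exact command syntax varies between CLOUDY versions (the paper cites C17.01), and the metallicity-scaling and output commands in particular are assumptions.

```
# Illustrative CLOUDY deck for one (n_H, U, Z) grid point; treat the exact
# syntax as an assumption rather than the paper's actual input.
table agn                      # Mathews & Ferland (1987) AGN continuum
hden 12                        # log n_H [cm^-3], BC-like conditions
ionization parameter -2.5      # log U, BC-like conditions
metals 100 linear              # scale metal abundances to 100 x solar (assumed syntax)
stop column density 23         # log N_c [cm^-2], as stated in the text
iterate to convergence
save line list "ratios.txt" "LineList.dat"   # assumed output command and file names
```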
Time Out: The Impact of Physician Burnout on Patient Care Quality and Safety in Perioperative Medicine

Perioperative care delivery is a patient-centered, multidisciplinary process. It relies heavily on synchronized teamwork from a well-coordinated team. Perioperative physicians (surgeons and anesthesiologists) face enormous challenges in surgical care delivery due to changing work environments, post-COVID consequences, shift work disorder, value conflict, escalating demands, regulatory complexity, and financial uncertainties. Physician burnout in this working environment has become increasingly prevalent. It is not only harmful to physicians' health and well-being, but it also affects the quality and safety of patient care. Additionally, the economic costs associated with physician burnout are untenable due to the high turnover rate, high recruitment expenses, and potential early permanent exit from medical practice. In this deteriorating environment of unbalanced physician supply/demand, recognizing, managing, and preventing physician burnout may help preserve the system's most valuable asset and contribute to higher quality and safety of patient care. Leaders in government agencies, health care systems, and organizations must work together to re-engineer the health care system for better physicians and patient care.

Background

Emerging from the COVID pandemic, the US health care system faces new challenges in patient care quality, access, equality, and financial sustainability because of the accelerating rate of physician burnout and exit. Perioperative care is a significant portion of the health care system, accounting for 60% of hospital costs and 10% of hospital margins. 1 As such, the consequences of perioperative physician burnout can be severe and pervasive and may have avalanching effects. Today, burnout among perioperative physicians (ie, surgeons, anesthesiologists, intensivists, and hospitalists) is ever-increasing due to evolving workplace stressors and occupational environments. Burnout is a prolonged and unmitigated stress response to chronic workplace stressors encompassing the physical, psychological, and social domains of human life. 2 Burnout is characterized by exhaustion, cynicism, and a personal feeling of inefficacy. 3 The estimated burnout rate before the pandemic ranged from 30-40% for surgeons 4 and 10-50% for anesthesiologists 5; the COVID pandemic likely accelerated this. 6 Compared to pre-pandemic levels, emotional exhaustion and depersonalization increased to 39% and 61%, respectively, among all physicians. 7 For anesthesiologists, the prevalence of burnout syndrome increased by more than 64%, from 37.5% in 2020 to 61.7% in 2021. The high burnout rate can also be seen in general surgeons, who suffer a burnout rate of 58.6% and whose career satisfaction has decreased from 42.6% to 23.8%.
8 Not only does this lead to lower efficiency and engagement, but it is also associated with lower-quality patient care and poor outcomes. Physicians who experience burnout syndrome often admit to a greater likelihood, up to 2.2-fold, of making medical errors. 9,10 Perioperative physician burnout is unique in some ways, developing over time, and may even start as early as medical school. The personal sacrifices made throughout training and clinical practice accumulate over the years. The chronic stress from unpredictable care demands, sleep deprivation, and workplace challenges gradually decreases the sense of accomplishment and the value of personal sacrifice. When unpredictable care demands arise in conjunction with expanding regulatory, legal, and organizational mandates, the lack of support, social reward, and autonomy can exacerbate the mismatch between personal and organizational objectives. 11 Consequently, although the daily physical demand is high for perioperative physicians, the psychological stress can be even higher. 12 This review focuses on the relationship between perioperative physician burnout and the quality of surgical patient outcomes. Our primary objective is to define the perioperative characteristics that contribute to physician burnout, with a secondary objective to illustrate methods to mitigate it.

The Development of Perioperative Physician Burnout

Despite the widespread media attention and renewed interest, "burnout" is not a medical condition or illness. According to the World Health Organization (WHO), burnout is an occupational phenomenon. 13 It was first described in 1975 by Dr Herbert J Freudenberger, who noted burnout symptoms in child psychotherapists working in New York, 14 and Dr Christina Maslach and her colleagues later studied the burnout phenomenon in detail. The Maslach Burnout Inventory (MBI), a 22-item self-administered questionnaire, is widely accepted today in the scientific community and by the WHO. 2 As per ICD-11, the syndrome of burnout has 3 categorical symptoms 15: (1) physical exhaustion and energy depletion; (2) increasing mental distance, negativity, or cynicism toward one's job; and (3) reduced professional efficacy. However, perioperative physician burnout is unique, as it entails a distinctive working environment and interdepartmental collaboration. 16 Multiple factors affect perioperative physician burnout, which can be divided into organizational, cultural, and individual factors. Perioperative working environments are unique among all health care delivery facilities, as they are specific to operating room demands and multidepartmental interplay.

Individual Factors

Physicians often exhibit higher burnout resilience when compared to the general working population. 17,18 However, when compounding factors from medical training to practice have accumulated over a lifetime, 19 resilience wanes and burnout sets in. 20 Although necessary for their professional success, the intrinsic personality and characteristics of surgeons or anesthesiologists can also increase the risk for burnout in and of themselves. These include strong commitment, discipline, life-long devotion, and time commitment. 21 These, unfortunately, further postpone the already delayed fulfillment of life goals and livelihood earnings, leading to a chronic aggregation of stress and burnout. Additional individual factors that contribute to burnout are the following 22,23: (1) workload; 24 (2) occupational and interpersonal relationships; 25 (3) delay in beginning families and unbalanced work life; 26 (4)
personal debt and value; 24 (5) self-motivation vs resilience and pandemic post-traumatic stress disorder; 27 and (6) the empowerment/enslavement paradox. 28 Although stress and challenge in the right amount may boost the confidence, productivity, and accomplishment of many perioperative physicians, 29 persistent and exaggerated work-related stress can lead to symptoms of burnout. 7

Organizational Factors

It is well known that the dysfunctions of the health care system in the United States contribute to physician burnout. 2 When the surgical workload is high, resources are scarce, and capacity is constrained, occupational conditions get worse. Occupational environments have many elements, including the external working environment, internal culture, community, and organizational values, all of which have slowly deteriorated over the years. 30 The operating room shutdowns during the COVID pandemic, the subsequent exodus of perioperative health care workers, and economic instability have further exacerbated the chronic physician supply-demand imbalance, work-life imbalance, and value conflict. 31,32 Some factors unique to perioperative operational burnout are as follows: 1) surgical logistics interplaying with the health care system, teams, patients, and patients' families; 33 2) complex and changing regulatory and legal mandates on practice methods, procedures, pharmaceuticals, and equipment use; 23 3) multidepartmental initiatives that lack clarity for appropriate coordination; 34 and 4) clinical pathways and workflow implementations from nonsurgical departments that override physician judgment and autonomy despite their potential to reduce burnout. 35,36 The COVID-19 pandemic highlighted these factors through the complex need for "fast-tracking" policy-to-action and knowledge-to-action changes, which led to stress as the necessity of required procedures and patient care remained. 37,38 These often serve as system disruptors. For example, the need to rapidly implement infection prevention protocols with limited evidence and organizational support, in a system with depleted resources, can result in a deteriorating working environment. 39 Despite these efforts, as the pandemic eased, there were large surgical backlogs, heavy staff losses, and unpredictable supply chain disruptions. This further increased physicians' workloads and the complexity that additional personal sacrifices could not easily overcome.

Culture and Community

Modified in 2017, the World Medical Association (WMA) Declaration of Geneva stated that physicians should "attend to [their] own health, well-being, and abilities in order to provide care of the highest standard." 40 The WMA declaration fundamentally recognizes the importance of physician well-being as a constituent of the health care system, occupational environment, professional culture, and organizational values. Unfortunately, the implementation of the WMA declaration, a much-needed cultural overhaul in medicine, was further delayed due to the COVID pandemic. This effort to rejuvenate the workplace through implementing the WMA Declaration of Geneva and "happiness in medicine" may positively change physician burnout. 41 A perioperative environment consists of distinctive occupational cultures and organizational traditions. Perioperative physicians are often under tremendous pressure to consistently perform at the highest level regardless of the time of day or operational constraints.
Culturally, it is the patient's and the hospital's understanding that delays will produce consequences for one's health, the financial bottom line, or both. However, this is demanding, as it always requires the physician to be in top physical and psychological condition, especially at night with dysregulated sleep and diet schedules. This dilemma continues with the associated social media nuances, consumer reviews, marketing tactics, 42 and changing patient expectations in a world of immediate access. 23,25 A paradigm shift focusing on greater equity, inclusion, and diversity (EID) has brought renewed awareness to how gender, race, ethnicity, sexual orientation, and gender identity interplay within social structures and organizations; this can also contribute to burnout risk. Women surgeons and anesthesiologists comprise a significant portion of the perioperative workforce and face unique challenges related to their gender. These challenges include microaggression, sexual harassment, compensation inequity, and lack of representation in leadership positions. In fact, 94% of women surgeons and anesthesiologists have experienced sexist microaggressions, and 81% of minority perioperative physicians have experienced racist/ethnic microaggressions. 43 The findings are particularly worrisome because there are more women in medicine, and more women physicians are choosing to leave their medical careers early at an accelerating rate. 44 There continues to be an environment of unpredictable surgical care delivery due to the complexity of disease and unknown human factors. 45 In this dynamic environment, patient-centered and safe surgical care commands a high physical and psychological investment from perioperative physicians. Despite the priority on perioperative quality and safety, errors can still occur. 23 Unfortunately, those devastating errors can lead to "second victim syndrome." 46,47 Additionally, the authors have observed that industrial-like metrics for productivity, 48 efficiency, 49 long hours, extensive shifts, 50 and cost cutting with increased perioperative throughput add another dimension of complexity and stress for perioperative physicians, increasing their likelihood of burnout.

The Pathophysiology of Physician Stress and Burnout

Burnout due to occupational stress has become widespread in the perioperative workspace. Even though burnout is not classified as a disease, the physiological reactions to burnout resemble many disease processes, such as derangement of the hypothalamic-pituitary-adrenal (HPA) axis and the hypothalamic-pituitary-thyroid (HPT) axis, at the least. In fact, burnout syndrome bears all the hallmarks of chronic psychological stress disorders and shift work disorder. Perioperative physicians work shifts ranging from 8 to 24 hours or longer. Shift work disorder is defined as a misalignment between sleep patterns and those in line with societal norms. It is well illustrated that working outside 6 am to 7 pm or more than 40 hours a week is a risk factor for emotional exhaustion and reduced quality of care, the specific symptoms of burnout and shift work disorders. 51 Not isolated from other life events, burnout generated by the working environment can interplay with other life events throughout one's lifespan, both physically and psychologically. As a result of occupational stress, burnout contributes to the cumulative burden of life, which is associated with a cascade of immune and neuroendocrine derailments called allostasis accommodation.
These responses are known to deteriorate overall health, cause cognitive dysfunction, and accelerate aging through epigenetic means. 52 They are also linked to the development of chronic diseases, such as hypertension, metabolic syndromes, diabetes, and cancers. Chronic psychological stress of any form is associated with a 22% higher mortality rate for all causes and a 31% higher mortality rate for cardiovascular events. 53

Impact of Physician Burnout on Surgical Patient Care

Despite extensive burnout research, the impact of professionals' burnout on others is poorly understood, especially in the health care industry, where the consequences may be related to patient morbidity and mortality. This issue is especially relevant in the perioperative space, where high-quality surgical and professional activities determine immediate and long-term surgical outcomes. 54 Globally, perioperative death is the third most common cause of death, with approximately 4.2 million deaths per year. 55 Despite relentless efforts over the last few decades at both organizational and individual levels, there has been only slight improvement. It is widely suspected that there may be a reciprocal relationship between physician burnout and surgical outcomes. 56 This has also inspired the assumption that inadequately addressing physician well-being and burnout may be partially responsible for many failed improvement initiatives in the past. 57 A negative correlation between physician burnout and patient safety has been observed. Physician burnout can double the risk of patient safety incidents 58 and contribute to up to 7% to 10.6% of serious medical mistakes. 59 Using medical malpractice as an indicator, 60 although not an accurate quality indicator, higher surgeon burnout has been linked to more frequent medical malpractice suits, with an average cost of $371,054. 61 It is of note that burnout at any point during a physician's career may increase the chance of medical error, and a medical error at any point in a physician's life may increase the frequency and intensity of burnout. 62 Burnout affects the interpersonal team dynamics essential for a highly reliable surgical team. Although studied more in critical care settings than perioperative settings, it has been shown that emotionally exhausted physicians can engage less in teamwork. Indeed, physician burnout can corrupt team spirit and culture and result in low-quality work and a high mortality rate, in the Intensive Care Unit setting at least. 63 Patient satisfaction is another indicator of quality health care. Many studies have demonstrated a strong association between physician burnout and patients' care experience and satisfaction, where physicians experiencing a high degree of burnout have significantly lower patient satisfaction scores. 64

Costs Associated With Perioperative Physician Burnout

Physician burnout is becoming an emerging threat to the health care system and public health. Today, due to burnout, more physicians are leaving than entering the field, creating an alarming shortfall of approximately 45,000 to 90,000 physicians across all specialties. 65 This is especially concerning as our aging population grows and health care demand rises. The high rate of physician burnout imposes a high direct and indirect cost on health care systems, at a staggering $4.6 billion/year. 66 There are many other hidden costs as well.
Productivity and Efficiency

Physician productivity is hard to measure and varies from specialty to specialty, and from institution to institution, depending on their business models and operational systems. Because such metrics are often used to evaluate business and operational fundamentals, perioperative physicians are held accountable for perioperative productivity and efficiency. 67

Turnover/Early Retirement

The estimated costs of physician turnover range between $268,000 and $957,000 per physician because of recruitment and start-up efforts. 68 When combined with a high turnover rate of more than 26%, 69 the financial burden significantly impacts health systems and organizations. 70 Burnout is strongly associated with a physician's plan to reduce workload and scope of practice or to leave the workforce early or entirely. This problem inevitably causes a drop in health care capacity and, conversely, may further increase the workload for the remaining physicians and potentiate their burnout rate. This vicious cycle may subject health care organizations to elevated financial burdens and regulatory and operational risks.

Human Cost

Unfortunately, medicine is one of the few professions with a higher-than-average suicide and suicide attempt rate. 71 In the service of others, physicians often neglect themselves physically and mentally. Physician suicides are intimately linked to depression and other mental disorders, often consequential to physician burnout. Shockingly, there are an average of 300-400 physician suicides per year, equivalent to a large medical school class size. 72 Despite growing concern and multiple mitigation initiatives, such as hotlines, programs, access, culture changes, and preventive programs, the risk cannot be ignored. Furthermore, despite high resilience at the beginning of a medical career, physicians suffer from a higher rate of suicide as compared to the general population. Societies and training programs for most surgical and anesthesia specialties continue to monitor this figure. Studies demonstrate an increase in the suicide rate year over year, with some of the highest rates in surgical specialties, which may, unfortunately, worsen in the current state of burnout. 73

Strategies to Mitigate Perioperative Physician Burnout

Physician burnout is an occupational phenomenon that also negatively affects patients and families. Produced primarily through the chronic accumulation of work-related stressors, physician burnout is a moral imperative that the health care industry and government agencies must address across the continuum of medical education, residency training, physician practice, and practice environments. Decreasing physician burnout is not only a good business practice that can yield positive dividends to the health care industry but also a moral obligation toward our physicians and patients. Even though physician burnout is multifactorial with no simple or quick solution, the paradigm shift toward improvement must start now at the individual and organizational levels. 74

Individual

For many years, physician burnout was mislabeled as a personal matter within the medical community, health care industry, and public domains. 75 This conceptual error delayed the understanding of this critical issue and resulted in many missed opportunities for effective prevention and management.
Although physician demographic factors, such as age, gender, surgical subspecialty, and marital status, play a role in burnout, 76 it is the complex interactions between an individual's susceptibility and the working environment that determine the occurrence and outcomes of burnout. Protective modalities such as mindfulness, resilience reinforcement, and relationship building are helpful. 77 However, special attention should be paid to the whole spectrum of human well-being, or the PERMA elements of positive psychology: Positive emotions (of which happiness and life satisfaction are both aspects), Engagement, Relationships, Meaning, and Achievement. It is essential to know that no one element is more important than the others but, rather, that each contributes to the whole. 78 Many tools can be leveraged for the PERMA purpose, such as didactic education, workshops, and personal coaching.

Organization

Increasing workloads without adequate staff and resource support are often cited as essential contributors to physician burnout. 79 The burnout rate can increase greatly when the workload is heavy and disproportionate to a physician's compensation. Major practice changes, such as the adoption of the electronic medical record (EMR) and the emergence of physicians' email "in-basket" management, have been recognized among the key drivers of physician burnout in recent years. Physicians spend about 2 hours on the EMR for every 1 hour of direct patient care. 80 The EMR contributes to physician burnout in many ways, including its functionality, usability, time demands, and frequency of use, as well as the availability and effectiveness of organizational support. Organizations should pay special attention to technology that can reduce the burden of the EMR, digital messages, and nuisance alerts.

Culture and Community

Additionally, both macroaggressions and microaggressions, whether from patients or colleagues, are widespread in the perioperative environment. They create psychologically unsafe working environments and are linked to increased physician burnout. 81 A psychologically safe culture and practice environment, with the EID principle in mind, should be the top priority for any organization and its leaders, especially in the areas of communication and signage. Special committees, such as "joy in medicine" committees, 41 and workflows must be created with allocated time in order to quickly implement regulations without passing the strain of interpretation and the fear of regulation on to frontline health care practitioners. 36 Methods to mitigate and help perioperative physicians with social media nuances, systems to assist with messaging demands, and integrated technologies with training must become priorities. 82 These cannot be passed down to individuals, as they affect the entire workplace culture and are not limited to a single person. Awareness of, and dialog about, these culturally related factors that lead to burnout should be undertaken as a priority by a system and its leadership. It is incumbent on us to understand that optimal and safe patient care requires foremost the optimization of the health and well-being of physicians. For the long-term benefit of the health care industry and its customers, a collective effort to measure, monitor, and mitigate physician burnout risk as outlined should be made systemically and proactively.
Plant Metabolomics: An Indispensable System Biology Tool for Plant Science

As the genomes of many plant species have been sequenced, the demand for functional genomics has dramatically accelerated the improvement of other omics, including metabolomics. Despite a large number of metabolites still remaining to be identified, metabolomics has contributed significantly not only to the understanding of plant physiology and biology from the viewpoint of small chemical molecules that reflect the end point of biological activities, but also, in past decades, to attempts to improve plant behavior under both normal and stressed conditions. Here, we summarize the current knowledge on the genetic and biochemical mechanisms underlying plant growth, development, and stress responses, focusing further on the contributions of metabolomics to practical applications in crop quality improvement and food safety assessment, as well as plant metabolic engineering. We also highlight the current challenges and future perspectives in this inspiring area, with the aim of stimulating further studies leading to better crop improvement of yield and quality.

Introduction

Plants produce large numbers of metabolites of diversified structures and abundances that play important roles in plant growth, development, and response to environments. These diverse small-molecular-weight metabolites, the chemical basis of crop yield and quality, are also valuable nutrition and energy sources for human beings and livestock [1]. Generally, these metabolites are classified into primary and secondary metabolites. The former are indispensable for the growth and development of a plant, while the latter are not essential but are crucial for a plant to survive under stress conditions by maintaining a delicate balance with the environment. In addition, primary metabolites are highly conserved in their structures and abundances, while those of secondary metabolites differ widely across the plant kingdom [2]. The diversity of plant metabolites and the likely complicated regulatory mechanisms highlight the necessity to explore the underlying biochemical nature [1]. The output of plant metabolomics depends largely on its methodologies and instrumentation to comprehensively identify, quantify, and localize every metabolite. This is actually very challenging because of the complexity of the diverse metabolic characteristics and abundances of molecules. Fortunately, despite the fact that an accurate and exhaustive analysis of the whole metabolome of a biological sample currently seems impossible, the methodologies and instrumentation of plant metabolomics have been developing rapidly [3]. At present, large-scale analyses of highly complex mixtures are enabled by a series of integrated technologies and methodologies, such as non-destructive NMR (nuclear magnetic resonance spectroscopy), mass spectrometry (MS) based methods including GC-MS (gas chromatography-MS), LC-MS (liquid chromatography-MS) and CE-MS (capillary electrophoresis-MS), and FT-ICR-MS (Fourier transform ion cyclotron resonance-MS) [4,5]. Assisted by other sampling technologies, metabolomics can be performed at the subcellular level and even in a single cell [6-9]. These analytical approaches have shown their potential power in plant metabolomic studies in many common plant species, including staple food crops such as tomato, rice, wheat, and maize, for various purposes [10-13].
However, because of the intrinsic limitations of each analytical platform, combined approaches are increasingly used in metabolomics analysis. Although metabolomics is downstream of the other functional genomics layers (transcriptomics and proteomics), the practical size of the metabolome of a species, unlike the transcriptome or proteome, cannot be deduced directly from known genomic information via the central dogma. Therefore, metabolomics is used to obtain a large amount of valuable information for the discovery of genes and pathways through accurate and high-throughput peak annotation, by snapshotting the plant metabolome [14]. There seems to be a complicated regulatory network among these small molecules in plants, and by detecting the interactions among these metabolites, metabolomic analysis contributes significantly to the understanding of the relation between genotype and metabolic outputs by tackling key network components [15]. Such metabolomic analyses, integrated with transcriptomic analysis, have been successfully applied to investigate the coordinated rules of metabolic fluxes and metabolite concentrations in plants [15,16]. Recently, high-throughput and low-cost approaches have been used to achieve huge omics data outputs in a short time, and further to reconstruct metabolic models in microbial organisms [17]. However, the integration of sequential multiple omics data to understand plant development remains challenging, since the relationship between each of the omics layers is complex and not always linear. Nevertheless, plant metabolomics has become a powerful tool to explore various aspects of plant physiology and biology, which broadens significantly our knowledge of the metabolic and molecular regulatory mechanisms regulating plant growth, development, and stress responses, and the improvement of crop productivity and quality. In this review, we summarize our current understanding of plant physiology and biology in the context of metabolites and metabolic networks. The important roles of inherent genetic factors governing the natural metabolic variation among plants are highlighted, and the application of plant metabolomics in crop improvement and its future prospects are also discussed.

Using Plant Metabolic Phenotypes to Reveal the Function of Genes in the Plant Genome

With the advance of sequencing technology, dozens of plant species have been sequenced. To comprehensively understand functional genomics with regard to plant development, the importance of advanced tools of metabolomics, together with QTL (quantitative trait locus) analysis, GWAS (genome-wide association study), and knock-out/down technology, has been increasingly recognized within the plant science community.

From mQTL to mGWAS: Hunting for Candidate Genes Correlated to Metabolic Phenotypes in Genetic Variation

In plants, it is well known that QTLs are distributed in many regions of the chromosomes and that large numbers of alleles arise in the process of domestication. Molecular breeding benefits from chromosome fragments carrying advantageous genes that lead to high productivity or quality. Compared with human genetics, with its limited numbers of participants and infeasible designed crosses, plants are more suitable for linkage analysis. However, one of the limitations of complex QTL mapping is the acquisition of precise phenotype data.
Although high-throughput plant phenotyping platforms and the corresponding plant phenomics have offered and integrated a set of novel technologies, more details of complex plant phenotypes still need to be mined. More recently, specific traits such as metabolic variants in large-scale omics data have been brought into the analysis in human disease and mouse studies [18,19]; this approach shows more advantages than the classic macroscopic phenome in disease and pharmaceutical studies because it provides much more information [20]. Therefore, using the metabolic phenotype to study genetic variation may deepen our understanding of plant biology from a metabolomic viewpoint. Table 1 summarizes the research conducted to date. In Arabidopsis, the analysis of 369 recombinant inbred lines and 41 introgression lines indicated that metabolite heterosis is primarily contributed by epistasis [21]. In tomato, metabolite profiling in seeds of 76 introgression lines in two consecutive harvest seasons revealed the presence of 30 metabolite quantitative trait loci (mQTLs) and partially dissected the mechanisms underlying the variable contents of the main primary metabolites [22]. Similar mQTL analyses have been performed in other plant species, such as wheat, rice, and rape [23-25]; however, the genetic bases of metabolomic diversity in plants remain to be further uncovered. In QTL analysis, two important parameters are usually evaluated: the heritability of a given phenotype (broad-sense heritability) and the r² of an individual locus linked to a given phenotype (the effect size of the locus); a small numerical sketch of both parameters is given at the end of this section. In metabolome-based QTL studies, primary metabolites often have high heritability, and secondary metabolite loci have a higher r² for a metabolic phenotype than those of primary metabolites [26]. Besides linkage analysis, QTLs can also be identified through association analysis [52]. Compared with most artificial mapping populations, which are constructed by crosses between two parental accessions, association analysis populations are composed of large numbers of natural accessions containing more genetic variants, as well as potential for the identification of unknown phenotype-associated loci in the plant genome. Benefiting from the development of next-generation sequencing technologies, metabolome-based GWAS (mGWAS) has been used to understand the genetic mechanisms underlying metabolic diversity and its associations with complex traits in plants. Riedelsheimer et al. designed an mGWAS analysis using a set of 289 different maize inbred lines with 118 biochemical compounds, were able to identify 26 distinct metabolites that are strongly associated with single nucleotide polymorphisms (SNPs) in maize, and pinpointed the key role of a chromosome 9 localized cinnamoyl-CoA reductase in improving the quality of lignocellulosic biomass [45]. In rice, a GWAS analysis using metabolomic data obtained from 175 rice accessions successfully identified 323 associations between 143 SNPs and 89 secondary metabolites, which revealed two sorts of genetic machinery determining the natural variations in rice secondary metabolite compositions [48]. Another matrix of 840 metabolite features obtained from a worldwide collection of 524 rice accessions indicated that a few loci with large effects control the levels of secondary metabolites, while several loci with small effects control the natural variation of primary metabolites [49,53]. Nevertheless, although mGWAS identifies large-scale metabolite-related QTLs, which may be widely used in plants in the future, several drawbacks are currently unavoidable. Firstly, limited by present statistical algorithms, it is difficult to exactly identify epistatic or gene-environment interaction (G×E) QTLs. Secondly, limited by the mapping precision, especially in chromosome regions with slow decay of linkage disequilibrium, and by the labor- and time-consuming procedures, it is unrealistic for all of the hundreds of potential genes from one single analysis to be verified by transgenic analysis. Fortunately, as with other traits such as seed quality, as long as the regions of the QTLs of interest are determined, these QTLs can be further utilized for marker-assisted selection breeding without the necessity of finding the underlying gene(s) [54].
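As promised above, here is a minimal sketch (simulated data, not from any of the cited studies) of how broad-sense heritability H² = σ²_G/(σ²_G + σ²_E) and the per-locus r² might be estimated for a metabolite measured on replicated lines; the one-way ANOVA estimator used for σ²_G is one common choice among several.

```python
import numpy as np

# Sketch of the two QTL-mapping parameters discussed above: broad-sense
# heritability H^2 and per-locus r^2. All data are simulated.
rng = np.random.default_rng(1)
n_lines, n_reps = 200, 3

# Simulate a biallelic locus (0/1/2 allele dosage) and a metabolite abundance
# with a genetic effect plus line-level and residual (replicate) noise.
genotype = rng.integers(0, 3, size=n_lines)
line_effect = 0.8 * genotype + rng.normal(0, 1.0, size=n_lines)  # genotypic value
pheno = line_effect[:, None] + rng.normal(0, 0.7, size=(n_lines, n_reps))

# Broad-sense heritability from variance components (one-way ANOVA estimator):
# sigma2_G = (MS_between - MS_within) / n_reps, H^2 = sigma2_G/(sigma2_G + sigma2_E)
line_means = pheno.mean(axis=1)
ms_between = n_reps * line_means.var(ddof=1)
ms_within = pheno.var(axis=1, ddof=1).mean()
sigma2_g = max((ms_between - ms_within) / n_reps, 0.0)
h2 = sigma2_g / (sigma2_g + ms_within)
print(f"H^2 ~ {h2:.2f}")

# Effect size of the locus: r^2 of a simple linear regression of the line
# means on the genotype dosage (the single-marker association model).
r = np.corrcoef(genotype, line_means)[0, 1]
print(f"locus r^2 ~ {r**2:.2f}")
```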
Nevertheless, although mGWAS identifies large-scale metabolite-related QTL, which maybe widely used in future in plants, several drawbacks are also inescapable at present. Firstly, limited to the present statistical algorithm, it is difficult to exactly identify the epistasis or gene-environment interaction (GˆE) QTL. Secondly, limited to the precision especially in some region of the chromosome with slow decay of linkage disequilibrium, and the labor and time-consuming procedure, it is unrealistic for all of the hundreds of potential genes from one single analysis to be verified by transgenic analysis. Fortunately, the same as with other traits like seed quality, as long as the regions of interesting QTL are determined, these QTL could be further utilized for marker-assisted selection breeding without the necessarily to find out the underlying gene(s) [54]. Reverse Genetic Approaches for Exploring the Function of Enzyme in Certain Metabolic Pathways Plant metabolic pathways are usually under multiple levels of regulation. Currently, our understanding of plant metabolomes results mainly from studies in a few model plants, therefore, pathways absent in those model plants are scarcely known. During the last decade, metabolomic approaches combined with reverse genetic tools (such as RNAi and gene knockout) expanded tremendously our understanding of biochemical reactions and metabolic pathways not reported in those model plants [55]. Direct measurement of the alteration in the metabolome or specific metabolic compositions of mutants can facilitate the functional annotation of the causing genes. In Arabidopsis, phenylalanine ammonia-lyase (PAL) is encoded by four genes involved in the phenylpropanoid pathway. Double mutant pal1pal2 that lacks three major flavonol glycosides showed over accumulation of phenylalanine, perturbed metabolisms in other nonaromatic amino acids, as well as reduction in lignin contents [56]. Exposing the gdh (glutamate dehydrogenase) triple mutant to continuous darkness demonstrated that providing 2-oxoglutarate for the tricarboxylic acid cycle is the main physiological function of NADH-GDH (NADH-dependent glutamate dehydrogenase), and that NADH-GDH impacts remarkably on amino acid accumulation in both roots and leaves [57]. Fukushima et al. established a database called Metabolite Profiling Database for Knock-Out Mutants in Arabidopsis (MeKO) based on the metabolomic analysis on 50 Arabidopsis mutants, which includes images of mutants, accumulation patterns of different metabolites, as well as their statistical results [58], facilitating significantly the related studies in Arabidopsis. Metabolomic analyses with mutants rather than silent mutation, such as transgenic or overexpression lines, can also achieve the same outcome. In rice, constitutively overexpression of the Arabidopsis chloroplast NADK gene enhanced NADK activity, accumulated the NADP(H) pool, increased electron transport and rates of CO 2 assimilation, and verified the critical role of NADP content in the photosynthetic electron transport rate in rice [59]. With the advancement of genome editing techniques, such as CRISPR/Cas9 [60], our understanding of a specific enzyme in plant metabolism will be significantly promoted. 
In addition, because genome editing is convenient and highly effective for generating multiple gene mutations simultaneously in plants [61], the interaction between two or more genes in a certain metabolic pathway can be readily explored by analyzing the metabolic profiles of multiple-gene mutants. In the future, mGWAS or mQTL analysis, combined with reverse functional genomic strategies, will more effectively uncover in depth the genetic and biochemical mechanisms governing metabolic pathways in plants.

Metabolomics and Plant Development under Normal and Stress Conditions

Successful molecular breeding largely depends on a detailed understanding of the molecular mechanisms underlying plant development, obtained via systems biology approaches, including metabolomics, under normal or stress conditions. Detection of metabolic changes at different developmental stages contributes to finding characteristic metabolites (metabolic markers) for specific developmental stages. Similarly, plant metabolomics can help plant breeders to identify resistance biomarker metabolites that integrate the genetic background with the influence of the environment under stress conditions, and a selected biomarker may be used as a diagnostic metabolite for plant stress [62].

Spatial-Temporal Metabolic Profiling during Plant Development

The potential yield of a crop is controlled mainly by two factors: the rate of biomass accumulation and the duration of growth. Exploring the dynamic metabolic changes occurring during plant growth and development may provide new insight into the mechanisms of biomass accumulation at the metabolic level. Previous functional genomics studies have focused mainly on the kinetics of transcripts and proteins, and much less on the synchronously variable patterns of metabolites. Functional genomic analysis provides information on the spatial-temporal expression patterns of genes and proteins, while metabolic profiling analysis adds informative metabolic data to functional genomic data to comprehend the whole picture of plant development. Therefore, both targeted and non-targeted metabolomic strategies have been applied in the spatial-temporal metabolic profiling of developing plants. In rice, metabolomics analysis revealed substantial variation in the abundance of phenolamides, which display developmentally controlled accumulation patterns [50]. In Arabidopsis, the change in the patterns of temporal-spatial distribution of the Krebs cycle intermediates occurs obviously in pre-senescent leaves, and the accumulation of glucosinolates, raffinose, and galactinol occurs in the base region of leaves prior to senescence [63]. As a major part of reproductive development, seed development initiates from embryogenesis, which is followed by a metabolically active period in which a massive synthesis of reserve compounds occurs in the developing seeds, whose relative proportions vary among different crop seeds [33,64]. During seed development, the patterns of metabolic change are similar at the accumulation stage but different at the seed desiccation stage in both monocots (rice) and dicots (Arabidopsis and tomato), showing both conserved and divergent metabolic adaptation during plant evolution [65]. Analysis of the spatio-temporal metabolic signatures of plant development is also capable of identifying potential biomarkers that capture intrinsic genetic and developmental characteristics.
Such an approach has been successfully applied to study rice tillering (branching), in which 21 metabolites captured almost 83% of the metabolic variation [66], and the soybean developmental phase transition from the vegetative to the reproductive stage, in which eight kaempferol flavonoid glycosides were identified as potential growth markers [67]. Plant phenotype depends on the synthesis and accumulation of a series of metabolites in specific organs, at specific developmental stages, and in response to varying environmental signals [68]; therefore, various kinds of metabolites in the plant have organ/tissue-specific characteristics [50]. For example, sphingolipids, a class of lipids critical for male reproductive development, differ significantly between pollen and leaf tissues in Arabidopsis [69,70]. In young tomato seedlings, anthocyanins accumulate in hypocotyls, several flavonols and phenolic compounds accumulate in cotyledons, and some alkaloidal compounds accumulate in radicles/roots [68]. Since numerous biochemical components vary among cell types or even at the subcellular level (there are approximately 40 different cell types in plants, and even a single organ such as a leaf may include about 15 of them), and metabolic processes are regulated by the asymmetric distribution of regulatory elements (enzymes and mRNAs), high-resolution, spatially resolved plant metabolomic technology is increasingly required for such studies [71,72]. With these technologies, metabolites can be traced at high spatial resolution and used to demonstrate their regulation directly [73]. The application of spatially resolved metabolomic technology in plant development will be a great complement to the conventional technologies based on chromatography coupled with mass spectrometry or NMR. Metabolic Responses of Plants to Stress Plants frequently encounter various environmental stresses during their development, and they have evolved a series of adaptive changes at both the transcriptional and post-transcriptional levels, leading to the reconfiguration of regulatory networks to maintain homeostasis [74]. Generally, environmental stresses are classified into two types: abiotic and biotic. Abiotic stresses result from inappropriate levels of environmental factors, such as drought, flooding, extreme temperature, severe radiation, metal ion stress, nutrient limitation, and oxidative stress, while biotic stresses come from pathogens and pests. Once plant receptors are stimulated by stress signals, the expression of stress-responsive genes is activated and specialized metabolites (especially secondary metabolites) are subsequently biosynthesized to adapt to the environmental stresses [75]. Rapid qualitative and quantitative analyses of the metabolic responses of plants to environmental perturbations will help us not only to characterize phenotypic responses to abiotic and biotic stresses and to screen for stress-tolerant individuals, but also to reveal the genetic and biochemical mechanisms underlying the plant's responses to stress, and to better understand plant plasticity for the future genetic engineering of stress-resistant/tolerant plants. Abiotic Stress In nature, adverse environmental conditions usually consist of several different factors, and one stress is usually accompanied with or followed by another [76].
To clarify the contribution of individual stresses, a controlled-variable approach was introduced in which plants are subjected to a single primary stress factor to simplify the system [77]. The symptoms and main metabolic changes observed under single abiotic stresses have been reviewed previously [78][79][80]. In nature, however, plants rarely encounter only a single stress; once one stress occurs, it is often accompanied or followed by others. For example, salinity stress frequently causes osmotic stress, and flooding often leads to low-oxygen stress [79]. Here we summarize the effects of multiple combinatorial stresses on plants, which are more similar to the natural environment. To better dissect plant metabolic regulatory networks and their functions in the responses to complex abiotic stresses, integrated multi-omics analysis is required [81][82][83][84]. When maize plants are subjected to water stress and salinity stress separately or concurrently, the levels of six metabolites (citrate, fumarate, phenylalanine, valine, leucine, isoleucine) in leaves change significantly only under the combined stresses, indicating a crosstalk effect between multiple stresses, although the potential of using these six metabolites as stress markers has not been established [85]. As global warming intensifies, heat and drought stresses pose major challenges to sustaining grain yields. Recent work on rice floral organ development provided a mechanistic understanding of the responses of rice floral organs to combined stresses, in which integrative analyses of the metabolomic and transcriptomic features of floral organs revealed that sugar starvation is the determinant of the failure of reproductive success under heat and drought stress in rice [86]. Anthers of the heat-sensitive cultivar Moroberekan have lower levels of sucrose and myo-inositol but higher levels of galactinol and raffinose, while anthers of the heat-tolerant cultivar N22 have lower abundances of glucose-6-P and fructose-6-P [86]. Consistent with these metabolomic changes in anthers, Moroberekan rice shows significantly up-regulated expression of the intercellular sugar transport regulation gene Carbon Starved Anthers (CSA) [87], while N22 rice shows enhanced expression of MST8, a sugar transporter gene, and INV4, a cell wall invertase gene [86,87]. In Arabidopsis, GC-MS profiling combined with transcriptomic analysis of leaves revealed a synergistic stress response to the joint treatment of darkness and high temperature, which is attenuated by low temperature. Because protein degradation occurs rapidly, amino acid catabolism becomes the main cellular energy supply in the absence of photosynthesis, as evidenced by the conditional connections between amino acid metabolism and the Krebs cycle [82]. Combined cold and dehydration stresses in rice cause the up-regulation of carbohydrate metabolism-associated genes, which is consistent with the buildup of glucose, fructose, and sucrose in the aerial parts of the plant [83]. Several sugars such as sucrose, raffinose, maltose, and glucose frequently accumulate in plant cells suffering combinatorial stresses, perhaps protecting plants, via osmotic adjustment, from the oxidative damage that usually follows most stress conditions [88,89]. Combined stresses normally result in a more extreme condition than each individual stress alone, and therefore have profound effects on central metabolism, including sugars, their phosphates, and sulfur-containing compounds [82,88,89].
Interestingly, combined elevated CO2 and salinity stress exerts a milder effect on the metabolic physiology of plants than salinity stress alone [84], indicating a complicated crosstalk between different stresses that merits further investigation. In addition, a recent study revealed that secondary metabolism is also involved in plant tolerance to combined drought and salinity stresses: the tolerant Tibetan wild barley accessions (XZ25 and XZ16) display altered transcript levels of secondary metabolism pathway genes and lower DNA damage, together with an increase in flavonoids and phenols, as compared with the control barley cv. CM72 [90]. Biotic Stress To combat attacks from pathogens and pests, plants use complex chemical machinery as a major defense. Similar to the distinctive responses to diverse abiotic stresses, the metabolic responses of plants to biotic stresses also depend highly on the tissue, the species, and the specific plant-pathogen or plant-pest interaction. Consequently, the compounds identified in biotically stressed plants help in the search for novel defense compounds and meanwhile serve as important markers of the plant defensive state [91]. Increasing numbers of metabolites have been identified and regarded as metabolic biomarkers of biotic stress tolerance or sensitivity in diverse plant species. For example, 16 fatty acids (such as unsaturated linoleic acid) together with two amino acids (glutamine and phenylalanine) were identified as the major components of the resistance features of gall midge-resistant rice varieties [92]. When subjected to bacterial leaf blight (BLB) caused by Xanthomonas oryzae pv. oryzae (Xoo), sensitive and tolerant rice cultivars display contrasting changes in several specific metabolites, such as acetophenone, xanthophylls, alkaloids, carbohydrates, and lipids [93]. The agent of rice blast disease, Magnaporthe oryzae (formerly M. grisea), another devastating pathogen of rice, can also infect other important crops, including wheat, barley, and purple false brome grass [94]. Metabolomic analysis revealed similar changes in metabolic patterns in barley, rice, and purple false brome grass, in which malate, polyamines, quinate, and non-polymerized lignin precursors accumulate during infection by M. oryzae [95]. The accumulation of phenylpropanoid and phenolic compounds has also been reported in response to Fusarium graminearum in wheat [96]. Phenylpropanoids, the precursors of lignin, constitute an important component of the plant stress defense machinery, modulating cell wall composition and stiffness in roots; the thickened cell wall may help to defend against pathogen infection. In the future, metabolomics will focus on a better understanding of the chemical machinery operating in plants responding to both abiotic and biotic stresses and on the improvement of plant resistance, thus reducing crop yield losses under stress conditions. Safety Assessment of Genetically Modified (GM) Crops The application of genetic engineering to produce genetically modified (GM) crops is considered one of the most advanced agro-biotechnologies.
Though GM crops have been proven to have huge economic potential, given their value-added traits such as herbicide tolerance, insect resistance, faster or delayed ripening, and high levels of antioxidants and other nutrients, the authorization and commercialization of GM crops have always been controversial in both the scientific community and the public sector over their potential risks to the environment and human health [97]. Therefore, risk assessments of GM plants and derived products are very strict in many countries and regions, including the European Union. Metabolomics provides an additional dimension to GM crop analysis, allowing the detection at the metabolic level of both intended and unintended effects of genetic modification. This greatly facilitates the substantial equivalence evaluation of GM crops. Case studies comparing metabolic changes between GM crops and their non-transgenic counterparts have covered almost all important agricultural crops, such as rice, maize, soybean, pea, wheat, potato, tomato, and barley, and have confirmed detectable alterations of their metabolites due to transgenic modification [97,98]. However, the results also show that other factors, such as environmental conditions, usually exert greater influences on metabolic composition than genetic modification does [99]. Therefore, metabolomics studies comparing GM crops with their non-GM counterpart lines are often combined with parallel studies using different culture conditions, different geographical locations, and multiple years, to corroborate the authentic effect of genetic modification [99]. Recently, natural variation has been taken into consideration for the substantial equivalence assessment of GM crops: if the variation between a GM crop and its non-GM counterpart parent lines falls within the range of natural variation, the crop is considered safe at the metabolic level [100] (a simple sketch of this criterion is given below). For detecting unintended effects caused by genetic modification, non-targeted metabolomics appears to be more powerful than targeted metabolomics [101]. Nevertheless, the application of non-targeted metabolomics to GM crops has not yet been validated and approved within the global regulatory framework for GM food safety assessment [97]. In the future, to facilitate the molecular characterization of GM crops at the metabolomic level, multiple metabolomic platforms that detect as many metabolites as possible in a sensitive and robust manner [102], and the exploration of metabolomic variation in more crops beyond rice [10] and maize [12], such as soybean [103,104], will be needed.
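As a toy illustration of the natural-variation criterion described above (a minimal sketch under assumed data; the metabolite, units, and numbers are hypothetical, not taken from [100]):

```python
# Toy sketch of the "natural variation window" criterion for substantial
# equivalence: a GM line's metabolite level is flagged only if it falls
# outside the range observed across conventional varieties. The metabolite
# name and numbers below are hypothetical.
def within_natural_variation(gm_level, conventional_levels):
    """True if the GM level lies inside the range spanned by the panel."""
    return min(conventional_levels) <= gm_level <= max(conventional_levels)

# Hypothetical raffinose levels (nmol/g dry weight) in a conventional panel:
panel = [12.1, 15.4, 9.8, 20.3, 17.6]
print(within_natural_variation(14.2, panel))  # True: within the window
print(within_natural_variation(25.0, panel))  # False: flagged for follow-up
```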
Metabolomics and Crop Improvement Crop breeding depends largely on phenotypic selection in plots or on genomic selection using genetic markers. This is hindered by substantial hurdles; for example, marker effects for selecting complex traits frequently vary among populations [105]. Metabolomics combined with other omics will allow us to solve key issues of agronomic performance that previously remained unsettled. Efforts can be directed to crop plants for which detailed information on performance in large-scale environments is available [106]. The information resulting from mQTL and mGWAS allows us to analyze the nature of quantitative traits of interest. Plant metabolomic technology can provide information not only on the identities of metabolites but also on their correlations with each other and with agronomically important traits; thus, it could lead to the development of more rational models linking specific metabolites or pathways with yield- or quality-associated traits. Even more promising is the possibility of studying the relationship between metabolite variation and the resulting phenotypes [107]. Notably, ongoing efforts to elucidate the metabolic responses to various stresses imply that metabolomics-assisted breeding could also be useful for obtaining crops more resistant to stress [108]. The important role of metabolomics in crop improvement will become increasingly evident in the future. Plant Improvement by Metabolic Engineering Since plants are capable of synthesizing multifarious chemical compounds that serve mankind as foods and medicines, effective engineering of plant metabolic pathways with modern biotechnology will bring further benefits to human beings [109]. As a successful example, golden rice, which accumulates higher levels of provitamin A, proves that the nutrient content of a crop plant can be improved by metabolic engineering [110]. However, owing to the limitations of current knowledge about metabolic control, it remains quite challenging to rationally engineer complicated metabolic networks. Recent technological advances in plant metabolomics and other "omics" offer golden opportunities to dissect the remarkable complexity of the plant biochemical capacity and to facilitate deeper investigation of plant metabolic systems, increasing the potential for practical applications through precise metabolic engineering [111]. Based on knowledge of sugar biosynthesis and accumulation pathways, yields of endogenous sugars, such as higher-value sugars and simple sugar derivatives, have been successfully increased via plant metabolic engineering [112]. Knowledge-based metabolic engineering strategies, which generate large datasets and rational models of metabolic pathways via large-scale gathering and mining of various omics data, will continuously help to refine the inputs and outputs of engineered plants [113]. Conclusions and Future Perspectives With the growing interest in the use of metabolomic technologies for a wide range of biological targets, plant metabolomics has advanced dramatically in recent years. The combination of the capabilities of available analytical platforms for the analysis of complex samples, together with the integration of metabolomics with other "omics" and functional genetics, can provide novel insights into the genetic and biochemical aspects of cellular function and metabolic network regulation [114]. Plant metabolomics, alone or combined with functional genomics, has been applied in many fields. Even though it currently has some limitations, it is without doubt an important tool that is revolutionizing plant biology and crop breeding. The full elucidation of the biochemical and genetic mechanisms underlying plant development and stress-responsive biology depends largely on comprehensive investigations using systematic omics techniques, which is the foundation for the application of metabolomics in plant science. Among these techniques, metabolomics is of particular importance, because metabolites are more directly relevant to the plant phenotype (both physiological and pathological) than DNAs, RNAs, or proteins [115].
Therefore, future studies in this area will focus on two directions: one is the improvement of metabolomic platforms to facilitate the accurate and efficient identification and quantification of as many metabolites as possible (mainly secondary metabolites), the precise interpretation of the generated data, and rapid integration with other omics platforms; the other is the comprehensive investigation of the molecular and biochemical mechanisms of metabolic variation in plants (mainly crops) using both non-targeted and targeted approaches, to expand and enrich our understanding of plant metabolism in growth and development under both normal and stress conditions, and to apply metabolomics to plant breeding (Figure 1) for better crop yield and quality.
Effects of Distributed Generation on Voltage Levels in a Radial Distribution Network Without Communication The challenges associated with incorporating a large amount of distributed generation (DG), including fuel cells, into a radial distribution feeder are examined using a dynamic MATLAB/SIMULINK™ model. Two generic distribution feeder models are used to investigate possible scenarios where voltage problems may occur. Modern inverter topologies make ancillary services, such as on-demand reactive power generation/consumption, economical to include, which expands the design space across which DG can function in the distribution system. The simulation platform enables testing of the following local control goals: DG connected with unity power factor, DG and load connected with unity power factor, DG connected with local voltage regulation (LVR), and DG connected with real power curtailment. Both the LVR and curtailment strategies can regulate the voltage of the simple circuit case, but the circuit utilizing a substation with load drop compensation has no universal solution. Even DG with a penetration level around 10% of rated circuit power can cause overvoltage problems with load drop compensation. The real power curtailment control strategy creates the best overall circuit efficiency, while all other control strategies result in low light-load efficiency at high DG penetrations. The lack of a universal solution implies that some degree of communication will be needed to reliably install a large amount of DG on a distribution circuit. DOI: 10.1115/1.4001050 Introduction New advancements in inverter-based decentralized electrical energy technologies, which include everything from plug-in electric vehicles and solar photovoltaic panels to combined heat and power (CHP) with fuel cells and microturbine generators, have the potential to change the premises upon which electric power is generated, transmitted, distributed, and consumed [1]. Whether the proliferation of these energy resources is driven by energy economics or environmental concerns, the existing distribution system is not designed to be flexible enough to accommodate these resources, even provided that the necessary accommodations were well known. Previous work has shown that voltage regulation can become a major concern when large penetrations of distributed generators significantly change the distribution feeder characteristics [2-4]. The IEEE 1547 standard for interconnecting distributed generation (DG) states that the generator may neither actively regulate any voltage nor cause any voltage on the system to go beyond specified requirements [5]. This clause alone will limit the penetration of DG allowed in many existing distribution scenarios. Thus, independent of the difficulties in economically installing fuel cells and other DG systems, producing a large percentage of power on-site may be an ambitious goal from the other side of the point of common coupling (PCC): the electricity distribution system.
Recent interest and investment in smart grid technology promises to make the distribution system more intelligent in the long term, by enabling communication and control between load meters, voltage regulators, field capacitors, d-FACTS, smart substation elements, and even other circuits [6]. In this scenario, DG will play a major role, due to its ability to change power output and power angle, in meeting the needs of the distribution system and the greater utility network. However, waiting for these widespread intelligent circuit upgrades will create a major barrier to the installation and deployment of a high penetration of DG. It is thus critical to the near-term deployment of DG to understand what converter behavior is desirable and most compatible with the current system, and then to construct and deploy such converters to be upgradeable, so that in the future, as smart grid circuits become available, the asset is further optimized and incorporated into the system. The goal of this paper is to explore four different control methods, each of which relies purely on locally measured parameters: DG with unity power factor (baseline control), DG and local load with unity power factor (power factor correction), DG with local voltage regulation (LVR), and DG with real power curtailment (RPC). These control strategies are evaluated based on whether they cause the generator to create (1) over- or undervoltages on the circuit, or (2) undesirable utility conditions such as an excessive reactive power demand. A DG control that successfully improves the voltage regulation and power flow in the circuit is labeled a "model citizen." A poor citizen DG control creates major problems in the circuit, and a good citizen has a neutral effect [7]. These four control strategies are added to a variety of generic feeder models, and the resulting behavior is analyzed and classified to determine whether locally controlled distributed generators can become model citizens of the grid. Finally, the circuit efficiency of each scenario is also calculated, analyzed, and compared, providing an additional facet of the impacts of DG on the circuit. Background At present, the addition of DG to distribution circuits is tightly limited and regulated to prevent any possible problems. The IEEE 1547 standard was developed as a guideline for this implementation [5]. This standard states that a DG installation may neither cause any voltage on the system to go outside of set limits, nor actively regulate the local voltage. It recommends that a generator either operate with a power factor of 1 or provide power factor compensation for the local load, within the realistic limits of 0.9 leading to 0.9 lagging. ANSI C84.1 defines the allowable voltage range at the customer entrance for Range A, which encompasses most DG locations, as 114 V to 126 V (0.95-1.05 per unit). To allow for transformer and secondary line losses, a study into this issue by GE, under contract to NREL, defines the acceptable per-unit (p.u.) voltage at the distribution transformer primary as 0.98-1.05 p.u. [4]. The per-unit value is the actual value normalized to a set base value, and is used here for both voltage and power.
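To make the per-unit convention concrete, the following minimal sketch (our illustration, not part of the original study; the 120 V secondary base is an assumption for this example) normalizes the ANSI C84.1 service-entrance limits:

```python
# Minimal sketch of per-unit normalization: an actual quantity divided by
# its base value. The 120 V secondary base is assumed for this example.
def to_per_unit(value, base):
    """Normalize an actual value to a chosen base value."""
    return value / base

# ANSI C84.1 Range A limits at the customer entrance:
print(to_per_unit(114.0, 120.0))  # 0.95 p.u. lower limit
print(to_per_unit(126.0, 120.0))  # 1.05 p.u. upper limit
```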
This same study looked at all combinations of six different DG levels on each of eight different base circuits, with two control strategies, two load growth scenarios, four DG locations, and two load levels. For each case, the maximum and minimum voltages across both load levels are recorded and used to understand the voltage behavior of the circuit due to the addition of DG. All cases are designed to have no steady-state voltage problems when no DG is implemented. Many of these cases resulted in either under- or overvoltages on the circuit, which would preclude the DG installation and realize the problems associated with installing generators on a distribution feeder [4]. The current work takes a subset of the circuits from the GE/NREL work and focuses on how the DG-grid interface could be controlled to avoid voltage problems. As a premise, it is assumed that the generator real power output can be curtailed on demand. An example of a generator that is curtailable, though not controllable, would be a photovoltaic (PV) array [8]. It is also assumed that the inverter connection can provide either a leading or lagging power factor. A variety of cases that span different DG locations, penetrations, and load powers are simulated for different control strategies. All the control strategies try to execute control using purely local information, such as local load reactive power or local bus voltage, which represents the current manner in which DG is introduced. Assumptions and Approach 3.1 Model Development. The set of models used to explore these circuits is developed in MATLAB/SIMULINK™, according to a modified version of the ladder iterative technique from Ref. [9]. The models are built and solved entirely in the time domain, and provide voltage and current waveforms as outputs. A circuit schematic of the simulated radial distribution model is presented in Fig. 1. The time-based data are run through a postprocessing MATLAB code to produce voltage magnitudes, angles, and real and reactive power flows. This method lacks the optimization of more commonly used load-flow analysis software, but it has a major advantage in providing flexibility for the design of an interface between the DG and the grid, as well as control and communication throughout the feeder. Circuit Models. This work explores two generic feeder models: Circuits A and B. Circuit A: Simple Model. The first circuit explored has 20 load buses, evenly spaced along a 4-mile feeder, with a substation "source" bus set at 1.05 p.u. voltage, assuming a 12.47 kV base. There are no transformers modeled and no capacitors on the line. The first half of the line has an impedance of 0.5 + j1.0 Ω, and the second half has an impedance of 0.8 + j1.4 Ω. The base power is 7 MW, and the load is evenly distributed among the 20 load buses. Light load means a total power of 0.3 p.u. with a power factor of 0.95, and heavy load means a total power of 1.0 p.u. with a power factor of 0.85. Spanning light and heavy load conditions represents a temporal variation in the loading of a distribution feeder. Circuit A approximates a simple, densely loaded urban circuit; a back-of-envelope sweep of this circuit is sketched below.
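As a rough cross-check of Circuit A's behavior, a single forward ladder sweep under the stated parameters can be sketched as follows. This is an illustrative approximation (one pass, constant-power loads evaluated at the upstream voltage), not the authors' time-domain SIMULINK model:

```python
import numpy as np

# Back-of-envelope ladder sweep of Circuit A: 20 evenly loaded buses on a
# 4-mile feeder, substation held at 1.05 p.u. on a 12.47 kV base.
V_BASE = 12.47e3 / np.sqrt(3)    # per-phase base voltage, V
S_BASE = 7e6                     # three-phase base power, VA
SEG_Z = [complex(0.5, 1.0) / 10] * 10 + [complex(0.8, 1.4) / 10] * 10

def voltage_profile(p_pu, power_factor):
    """Approximate per-unit voltage at each of the 20 load buses."""
    q_pu = p_pu * np.tan(np.arccos(power_factor))
    s_bus = (p_pu + 1j * q_pu) * S_BASE / (20 * 3)   # per-phase VA per bus
    v = 1.05 * V_BASE                                # substation voltage
    profile = []
    for k, z in enumerate(SEG_Z):
        downstream = s_bus * (20 - k)      # load served beyond segment k
        current = np.conj(downstream / v)  # approximate segment current
        v = v - current * z                # voltage drop across the segment
        profile.append(abs(v) / V_BASE)
    return profile

print(round(voltage_profile(0.3, 0.95)[-1], 3))  # light load: small drop
print(round(voltage_profile(1.0, 0.85)[-1], 3))  # heavy load: larger drop
```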
The real/reactive power flows and voltage profile for the case without DG are shown in Fig. 2. Distance is measured as the distance from the substation. Power flow is defined as positive when going from the substation toward the circuit loads. As there is no generation of real or reactive power on the circuit, all power flows are positive and monotonically decreasing along the length of the circuit. This corresponds to a voltage profile that always decreases with increasing distance from the substation. The heavy load case causes a larger voltage drop, but both load cases result in acceptable voltages at all locations on the circuit. Circuit B: Voltage Control Model. The second circuit is similar to Circuit A, except that it is 8 miles in length, with four fixed capacitor banks rated at 1200 kVAR evenly distributed at buses 4, 8, 12, and 16. The longer line now has a first-half impedance of 1.0 + j2.0 Ω and a second-half impedance of 1.6 + j2.8 Ω. Also, there is an automatic voltage regulating (AVR) autotransformer at the substation with load drop compensation (LDC). LDC uses a compensation parameter and a local power flow measurement to approximate the line drop, and compensates for it by changing the output voltage. Essentially, this increases the substation voltage during heavily loaded times and decreases it at light load to prevent overvoltage. The compensation parameter assumed here is 0.6 + j1.1 Ω, with a voltage set-point of 1.02 p.u. and a maximum voltage of 1.05 p.u. (a sketch of this rule follows below). A characteristic voltage profile of the circuit without DG, at both light and heavy load times, is shown in Fig. 3. The operation of the AVR with LDC is indicated by the high substation voltage (the y-intercept) for the heavy load case, and the lower voltage for the light load case. Figure 3 also shows the real and reactive power flows for Circuit B without the presence of DG. The fixed capacitor banks result in substantial reactive power flowing from the circuit to the substation, which causes the voltage profile of the light load case to rise with increasing distance from the substation.
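The LDC rule just described can be sketched as follows. This is our simplified reading of the mechanism using the stated parameters; the exact regulator logic in the paper's model may differ:

```python
# Simplified sketch of the AVR/LDC rule for Circuit B: the regulator adds the
# estimated line drop |I| * |Z_comp| to its set-point, boosting the substation
# voltage at heavy load and easing it at light load, up to a 1.05 p.u. ceiling.
Z_COMP = complex(0.6, 1.1)     # compensation parameter, ohms
V_SET, V_MAX = 1.02, 1.05      # set-point and ceiling, p.u.
V_BASE = 12.47e3 / 3 ** 0.5    # per-phase base voltage, V
S_BASE = 7e6                   # three-phase base power, VA

def ldc_substation_voltage(p_pu, q_pu):
    """Per-unit substation voltage targeted by the AVR with LDC."""
    s_va = abs(complex(p_pu, q_pu)) * S_BASE   # apparent power, VA
    i_amps = s_va / (3 * V_BASE)               # per-phase current magnitude
    drop_pu = i_amps * abs(Z_COMP) / V_BASE    # estimated line drop
    return min(V_SET + drop_pu, V_MAX)

print(round(ldc_substation_voltage(1.0, 0.62), 3))  # heavy load: boosted
print(round(ldc_substation_voltage(0.3, 0.10), 3))  # light load: near set-point
```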
For model verification, a sample four-bus system is created with a base power of 10 MVA and a base voltage of 12 kV. Values for the line and load impedances are chosen to be arbitrary but realistic. For simplicity of hand calculation, the loads are all assumed to be constant impedance with overhead distribution lines, which means capacitance can be neglected. The MATLAB/SIMULINK™ model is then compared with the same four-bus system in POWERWORLD, a conventional load-flow simulation program, and with hand calculations for the same buses. The comparison of bus voltages and angles is shown in Tables 1 and 2, and the close agreement between the results of all three methods indicates that the MATLAB/SIMULINK™ method is a valid way to simulate load flow in a three-phase power system. A comparison of line power flows shows similar agreement, and an additional comparison between MATLAB/SIMULINK™ and POWERWORLD for constant power loads is consistent, indicating that both constant power and constant impedance loads are represented realistically by the MATLAB/SIMULINK™ model. Control Strategies. Four different local control strategies are described and explored herein. Baseline. The baseline control strategy is to set the generator to run at full real power capacity at all times and to produce no reactive power. This simple control strategy most closely resembles how most DG units operate today, particularly high temperature fuel cells, which have exhibited little load-following capability [10]. Power Factor Correction. An alternative DG control strategy is to operate at full real power capacity and to create an overall power factor of 1, as seen by the distribution primary. This requires the DG to compensate for the consumption/generation of reactive power by generating/consuming it locally. This strategy assumes that the generator has limits of 0.9 leading to 0.9 lagging power factor, referred to the generator output capacity, and cannot compensate outside of this range. LVR. A control strategy for regulating the generator bus voltage is to use reactive power injection to directly affect the local bus voltage. This is investigated in Refs. [4,11]. The generator sinks reactive power if the voltage is too high, and sources it when the voltage is low. Here, the limits of 0.9 leading and lagging are used, along with a 5% voltage droop. A diagram of the associated reactive power is shown in Fig. 4. RPC. All previous methods assume that DG real power is independent of the utility's desires and the feeder condition. This assumption implies either that the owner controls its operation, or that the output is intermittent due to natural causes. The real power curtailment strategy assumes that the voltage will be adequately regulated in the feeder with no DG, and thus that irregular voltages must be due to excess DG real power. The real power curtailment method is derived from Ref. [8] and generalized to all generators in the circuit. If the local voltage exceeds 1.05 p.u., the output power is reduced until the voltage falls below 1.05 p.u. Between 1.04 and 1.05 p.u., the previous output power is maintained, and if the voltage falls below 1.04 p.u., the output power is increased if it was being curtailed. Minimal sketches of the LVR and RPC rules are given below.
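The following sketch is our illustrative reading of the two rules above; the exact droop shape (cf. Fig. 4), the curtailment step size, and the update structure are assumptions, not the authors' implementation:

```python
import math

# Illustrative sketches of the LVR droop and the RPC hysteresis band.
PF_LIMIT = 0.9    # 0.9 leading/lagging power factor limit
DROOP = 0.05      # LVR: full reactive output over a 5% voltage deviation
V_HIGH = 1.05     # RPC: curtail above this local voltage, p.u.
V_LOW = 1.04      # RPC: restore below this voltage, p.u.
STEP = 0.05       # RPC: adjustment step, p.u. of DG rating (assumed)

def lvr_reactive_power(v_local_pu, p_out_pu):
    """LVR: reactive power command, p.u. (positive = sourcing vars)."""
    q_limit = p_out_pu * math.tan(math.acos(PF_LIMIT))
    q_cmd = (1.0 - v_local_pu) / DROOP * q_limit   # linear droop about 1.0 p.u.
    return max(-q_limit, min(q_cmd, q_limit))      # saturate at the pf limit

def rpc_step(v_local_pu, p_out_pu, p_rated_pu=1.0):
    """RPC: one control step returning the updated real power output."""
    if v_local_pu > V_HIGH:
        return max(p_out_pu - STEP, 0.0)             # curtail
    if v_local_pu < V_LOW and p_out_pu < p_rated_pu:
        return min(p_out_pu + STEP, p_rated_pu)      # restore curtailed output
    return p_out_pu                                  # 1.04-1.05 band: hold
```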
Analysis Parameters. The output parameters of interest across the various studies include the maximum/minimum voltage and the substation real/reactive power input. The voltage extremes must lie between the 0.98 p.u. and 1.05 p.u. boundaries set on the power system. Ideally, the voltages will be within a narrow band at the lower end of the acceptable range, because lower voltage reduces the power requirement of constant current and constant impedance loads, indirectly providing an efficiency benefit. The real and reactive power should have an export/import pattern that is more desirable than without DG. For real power, this is assumed to be within the confines of load-leveling: power import at heavy load is less than or equal to that without DG, and power import at light load is greater than or equal to the nominal power on the circuit. Similar conditions are also applied to reactive power consumption, as it is assumed that this resource is added with switched capacitor banks. At present, one-third of utility capacitor banks are fixed and the other two-thirds are switched to meet changing load requirements. A reduced swing in reactive power usage would reduce the number of switching operations and extend component lifetime. The reactive power usage should not extend outside of this range, as added reactive power need would directly equate to an increase in infrastructure investment, while a reduced reactive power load might recede below the permanent demand met by the base one-third of nonswitched capacitors. The basic comparison metrics are summarized in Table 3. The simple feeder baseline case shows overvoltages for penetrations above 0.2 for generators located at the end, and above 0.3 for generators located at the middle, as presented in Fig. 5. The overvoltages occur during light load, when a net export of DG power creates a voltage rise in the circuit. An example illustrating this effect is shown in Fig. 6 for a DG penetration of 100% located at the middle. The discontinuity in power flow at 2 miles is due to the injection of 7 MW of real power. In the light load case, this injection results in a substantial negative real power flow that corresponds to a voltage rise from the substation to the circuit midpoint. The overvoltage problem is not present in the heavy load case because the voltage drop, caused by a large reactive power demand, dominates the voltage increase due to reversed real power flow. The real power import to the circuit (Fig. 7) decreases linearly with penetration, and the reactive power import (Fig. 8) is insensitive to DG because the DG does not generate reactive power. There are no voltage problems associated with siting DG at the beginning of the circuit. Alternate Control Strategies. Adding the power factor correction control to the generator does not correct the overvoltage problem, and in fact exacerbates it slightly by reducing the voltage drop due to reactive power flow in the circuit. The local voltage regulation control succeeds in eliminating the overvoltage problem, but it creates new problems in the real and reactive power flows. This control strategy works by assuming that reactive power flow through a line impedance will change the voltage. This is true for a remote bus, but a bus near the substation is effectively a stiff voltage source, and no amount of reactive power flow will change this. As a result, when the generator is located at the beginning, the voltage regulation control causes a sharp increase in the reactive power demand of the system, both at light load (model) and at heavy load (poor). The reactive power flow change for the LVR case is presented in Fig. 9. It should be noted that there was never a voltage regulation problem in the cases where the DG is at the beginning. Thus, voltage regulation is not a universal solution and will sometimes create new problems while trying to solve a problem that did not exist. The load curtailment control strategy does not affect the circuit during heavy load conditions, when overvoltage is not a problem. In the light load cases, the generator real power output is decreased, and the resulting substation power is shown in Fig. 10. This control action eliminates the overvoltage problem, as shown along with the other control strategies in Fig. 5.
An advantage of the curtailment method is that most overvoltage problems occur during light load conditions. This period typically coincides with night and low electricity rates, a time when DG users may want to reduce real power output and, thus, fuel consumption. This type of control would naturally turn DG units into peaker units and level the grid tie-line power flow. However, this reduction in output will be detrimental to circuit performance if it occurs during high load, when the power is most critical. Additionally, combined heat and power (CHP) installations associated with many applications (e.g., manufacturing) may not have the flexibility to be turned down by the utility. On Circuit B, the substation AVR with LDC changes the picture. Without LDC, locating the DG by the substation did not change the power flow in the rest of the circuit, and this location was relatively safe. With LDC, the generator reduces the real power flow seen by the regulator, which causes a low voltage problem and results in undervoltages at penetrations greater than 0.3. The DG-at-middle case exhibits a combination of problems: the same overvoltage as in Circuit A for penetrations above 0.3, and a new undervoltage problem that begins occurring at penetrations above 0.5. When DG is at the end, there is an overvoltage for any penetration above 0.1, and an additional undervoltage for penetrations above 0.5. The detailed behavior of Circuit B with a 1.0 penetration of DG at the middle is presented in Fig. 12. The substation undervoltage occurs in the light load case due to considerable exportation of both real and reactive power. Yet even this undervoltage is insufficient to compensate for the voltage rise in the circuit, and the furthest circuit locations experience overvoltages. The real power import is identical to that of the baseline control case for Circuit A (Fig. 13). The reactive power import is still insensitive to DG penetration, but the addition of fixed capacitors on Circuit B results in a reactive power import that is either near zero or negative, meaning the circuit is exporting reactive power (Fig. 14). Alternate Control Strategies. Again, power factor correction is completely ineffective in addressing voltage regulation problems. LVR can effectively eliminate all overvoltages, but there is still an undervoltage problem associated with adding DG at the beginning of the circuit, as presented in Fig. 11. The LVR control still causes a poor reactive power demand profile, as shown in Fig. 15. Not only does LVR control increase the demand for reactive power at heavy loads, it also increases the importation of reactive power during light load when the generator is located at the beginning. This is another problematic consequence of installing generators with LVR control. Results from the curtailment method are mixed and highly location dependent. When the DG is at the end of the circuit, power curtailment eliminates both over- and undervoltages throughout the circuit. However, when the DG is located at the middle or beginning, the locations of the voltage problems do not coincide with the generator, and the output power is not curtailed; thus, the middle and beginning cases show the same behavior as the cases without control. In addition, the 1.0 penetration DG-at-end case invoked power curtailment for both light and heavy loads, as shown in Fig. 16.
A 7 MW DG installation would never function above 4.7 MW, which adds another undesirable constraint to the installation. The conditions for over- and undervoltage are summarized in Table 4, along with the effectiveness of the LVR and curtailment regulation strategies. Circuit Efficiency In addition to affecting voltage, power flow through the distribution circuit consumes real power and causes line efficiency losses. As a result, the circuit efficiency depends strongly on the same factors that affect voltage, including DG placement, penetration level, and control strategy. The circuit efficiency is defined as the fraction of real power input that is consumed at the load: η_circuit = P_load / (P_substation + P_DG). As this definition does not account for transmission or substation/distribution transformer losses, it is not intended to absolutely quantify the effects of DG on power delivery. It instead provides a simple and useful measure for comparing the variations in distribution line losses attributable to DG parameters. Circuit A. The efficiencies of Circuit A with the baseline control are presented in Fig. 17. When the DG is located at the beginning, the efficiency is roughly independent of penetration because the impedance between the generator and the substation is minimal. For the light load cases, both the middle and end locations show an initial increase in efficiency with penetration, followed by a sharp efficiency loss for penetrations above 0.3. At these high DG levels, excess real power flows directly from the DG to the substation and creates additional line losses throughout the circuit. When these circuits are loaded heavily, the DG power is instead consumed locally and the efficiencies remain high. The LVR control strategy creates an efficiency profile similar to that of the baseline control, except that the efficiency losses at light load and high penetration are exacerbated by the increase in reactive power flow as well. All efficiencies for the circuit with LVR control are presented in Fig. 18. In contrast to baseline control, the real power curtailment control eliminates the DG-to-substation bulk power transfer that is the source of this efficiency loss. Curtailment control allows the efficiency to remain high for all DG scenarios, as shown in Fig. 19. This efficiency analysis shows that compensating for voltage rise with reactive power consumption not only increases stress on the external electric power system but also compromises efficiency. Providing the model citizen load-following behavior instead improves the efficiency associated with installing generators at the middle and end of the circuit, and thus enhances the benefit locally. Circuit B. The baseline efficiencies of Circuit B, presented in Fig. 20, demonstrate trends similar to those of Circuit A. One notable difference between the two circuits is that Circuit B has higher efficiency at heavy load than at light load, while Circuit A showed the opposite trend. The capacitors installed in Circuit B cause this difference: during light load they generate excess reactive power that is exported through the substation, whereas in the heavy load condition this reactive power is consumed in the circuit and does not travel long distances as it does in Circuit A. The same high-penetration/low-efficiency trends from Circuit A are still observed in Circuit B, although the longer line of Circuit B results in lower numbers overall.
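For reference, the efficiency metric defined above is straightforward to compute from the simulated power flows (a trivial helper with hypothetical numbers, shown only to fix signs and units):

```python
def circuit_efficiency(p_load_mw, p_substation_mw, p_dg_mw):
    """Fraction of real power input consumed at the loads.

    p_substation_mw is the import from the substation (negative when the
    circuit exports power), so bulk DG-to-substation transfers show up as
    extra line losses and a lower efficiency.
    """
    return p_load_mw / (p_substation_mw + p_dg_mw)

# Hypothetical light-load example: 2.1 MW of load served by 7 MW of DG,
# with 4.7 MW exported to the substation; the 0.2 MW gap is line loss.
print(round(circuit_efficiency(2.1, -4.7, 7.0), 3))  # 0.913
```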
The local voltage regulation control strategy again closely follows the efficiency trends of the baseline case, as presented in Fig. 21. The light load DG-at-middle efficiency is improved with LVR because the reactive power draw of the DG prevents reactive power export in the light load condition. However, the DG-at-end 1.0 penetration case has an even worse efficiency because the reactive power must travel farther than in the DG-at-middle case, and the line impedance is higher on the latter half of the line. Figure 22 presents the real power curtailment strategy, which generates an excellent efficiency profile for all penetrations when the DG is located at the end, due to the same load-following effect observed in Circuit A. The beginning and middle DG locations are unchanged from the baseline control, due to the lack of corrective action by the DG controller in these situations. Conclusions The distribution system functions by making assumptions about the circuit that DG installation, including most current fuel cell systems, invalidates, thereby creating problems with maintaining proper voltage. In addition to a baseline case where the generator produces rated real power output, three alternative control strategies are investigated: power factor correction, local voltage regulation, and real power curtailment. All of these strategies rely only on locally measured information and do not assume communications in the circuit, which is the usual case today. The power factor correction strategy is found to be ineffective at preventing overvoltages and shows poor citizen behavior whenever the baseline control strategy does. The local voltage regulation and real power curtailment strategies have varying effectiveness depending upon the DG installation and operating conditions. Local voltage regulation can exhibit model citizen behavior when implemented with DG installations far from the substation, but it can also act as a poor citizen when located near the substation. Real power curtailment may not always work, but it is always better than an interconnection strategy without any control. The efficiency of the circuit also changes with DG penetration and location. In general, the real power curtailment control strategy was most effective at consistently maintaining high circuit efficiencies when it could provide corrective action. This is due to the reduced import and/or export of real and reactive power that results from this strategy, and it implies that the "model citizen" load-following behavior benefits not only the rest of the electric utility but the local circuit as well. As no control strategy elicited model citizen behavior in all cases, the results imply that some degree of communication and control on the circuit is needed to allow high DG penetration. Sufficient communication and control may be as simple as a priori knowledge of the DG location relative to the substation and other loads, and choosing a proper control strategy accordingly. Figure captions: Fig. 1, circuit schematic of the distribution model; Fig. 3, real/reactive power flow and voltage profile for Circuit B without DG at light and heavy load; Fig. 4, diagram of reactive power consumption by the local voltage regulation control; Fig. 7, Circuit A substation real power flow for baseline control; Fig. 13, Circuit B substation real power flow for baseline control; Fig. 22, Circuit B circuit efficiency for curtailment control; Table 3, analysis metrics for comparing control strategies.
Implications of COVID-19 on Time-Sensitive STEMI Care: A Report From a North American Epicenter Background Coronavirus disease 2019 (COVID-19) has forced dramatic changes to healthcare systems throughout the world. Time-sensitive management of cardiovascular emergencies such as ST-elevation myocardial infarction (STEMI) has yet to be evaluated in the context of these new policies, particularly in so-called "hot spot" cities. Methods We evaluated the early impact of the pandemic on STEMI performance in the Greater Montreal Area. A total of 167 patients from 3 different study periods were included. Patients presenting in the lockdown period from mid-March to mid-May 2020 (Group C, 53 patients) were compared to those from mid-March to mid-May 2019 (Group A, 60 patients) and the 2020 pre-COVID-19 period (Group B, 54 patients). Results The number of STEMI admissions was unaffected during the lockdown. However, significantly longer delays between symptom onset and first medical contact (FMC) were noted (Group C 189.0 IQR [70.0, 840.0] min vs. Group A 103.0 IQR [42.5, 263.0] min vs. Group B 91.0 IQR [38.0, 235.5] min, P = 0.007). In contrast, additional safety protocols do not appear to have significantly affected delays between FMC and first intracoronary device activation (Group C 102 IQR [73.0, 133.0] min vs. Group A 104 IQR [87.0, 146.0] min vs. Group B 99.5 IQR [80.0, 150.0] min, P = 0.37). Patients who presented during the outbreak were more likely to be unstable, with a higher incidence of Killip classes II-IV compared to Groups A and B (28.3% vs. 18.3% vs. 5.6%, respectively, P = 0.008). Worse in-hospital outcomes were also noted, with a significantly higher rate of major adverse cardiac events (Group A 5.0% vs. Group B 11.1% vs. Group C 22.6%, P = 0.007). Conclusion During the lockdown period, many patients appear to have been reluctant to present to hospitals. This was associated with more unstable STEMI presentations and a worse in-hospital course. Importantly, the healthcare system appears able to ensure timely acute cardiac care while ensuring that COVID-19 protocols are respected. Introduction Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), is the major global healthcare challenge of the 21st century and has exerted significant pressure on healthcare systems worldwide [1]. Geographic differences and temporal changes in the rates and spread of infection have also been noted. In particular, there have been much higher rates of diagnosed infection in so-called "hot spot" regions, such as the Greater Montreal Area, Canada's COVID-19 epicenter. While the direct impacts of COVID-19, typified by respiratory insufficiency and a proinflammatory, pro-thrombotic state leading to multi-organ dysfunction and an associated risk of mortality, are increasingly understood [2], data on the indirect costs of COVID-19 are still lacking. In the context of a global lockdown, both new organizational barriers and patients' fear of acquiring COVID-19 have led to major concerns about undue delays in seeking appropriate emergent care, in particular for ST-elevation myocardial infarction (STEMI) patients. To our knowledge, no study has yet analyzed changes in the pattern of STEMI presentations or related complications in the Canadian epicenter of infection. The magnitude of the effect on STEMI system performance metrics is also not known.
As such, we report herein early results demonstrating the impact of the COVID-19 outbreak on time-sensitive STEMI care delivery in the Greater Montreal Area, a hot-spot metropolitan region where the impact would be expected to be most pronounced. Study design This is a retrospective observational study of patients presenting with a diagnosis of STEMI at either the Centre Hospitalier de l'Université de Montréal (CHUM) in Montreal or the Cité-de-la-Santé Hospital in Laval, Québec. Data were collected for the lockdown period between mid-March and mid-May 2020. Comparisons were made with (a) the same period in 2019 and (b) the pre-lockdown period from January to mid-March 2020. STEMI was defined according to the Fourth Universal Definition of Myocardial Infarction [3]. The only exclusion criterion was the occurrence of in-hospital STEMI. Data collection and endpoints Pre-hospital data were collected using emergency medical service records (when applicable) and in-hospital records. Times from symptom onset to FMC and from FMC to first device activation were gathered for all patients. The Killip classification was used to quantify the severity of associated heart failure for all patients, and left ventricular ejection fraction (LVEF) was assessed by transthoracic echocardiography in most patients. Major adverse cardiac events (MACE) were defined as a composite of cardiac death, reinfarction, cardiogenic shock, urgent target-vessel revascularization, stroke, stent thrombosis, the occurrence of malignant arrhythmia, or mechanical complications of infarction (ventricular septal defect, papillary muscle rupture, or free wall rupture and pseudoaneurysm). Patients were followed for the duration of the index hospitalization. Statistical analysis Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS, IBM, Version 25). Continuous variables were all tested for normality using the Shapiro-Wilk test. Data are presented as either means ± standard deviation (SD) or medians and interquartile ranges, as appropriate. Categorical data are presented as counts and percentages of the total, and were analyzed using a χ2 test across groups. Comparisons of continuous, normally distributed variables were accomplished by one-way ANOVA. A Kruskal-Wallis test was used for non-normal continuous data. A two-tailed alpha of 0.05 was used for all analyses; a sketch of this testing workflow is given below.
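The per-variable testing logic just described could be reproduced along these lines (our illustration using scipy rather than SPSS, which the authors used; the variable names and data are placeholders, not the study data):

```python
from scipy import stats

def compare_across_groups(group_a, group_b, group_c, alpha=0.05):
    """Normality-gated three-group comparison mirroring the described workflow.

    Shapiro-Wilk on each group decides between one-way ANOVA (all normal)
    and the Kruskal-Wallis test (otherwise); returns the test name and p-value.
    """
    groups = [group_a, group_b, group_c]
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        result = stats.f_oneway(*groups)
        return "one-way ANOVA", result.pvalue
    result = stats.kruskal(*groups)
    return "Kruskal-Wallis", result.pvalue

# Placeholder symptom-onset-to-FMC delays in minutes (not the study data):
test_name, p_value = compare_across_groups([103, 42, 263, 88],
                                           [91, 38, 235, 75],
                                           [189, 70, 840, 150])
print(test_name, round(p_value, 3))
```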
Killip class at the time of hospital presentation was more advanced for patients with a STEMI during the lockdown period compared to the pre-COVID era, with significantly more cases categorized as Killip class >1 (Group A 18.3% vs. Group B 5.6% vs. Group C 28.3%, P = 0.008).

In-hospital evolution
The rate of anterior STEMI was 46.7% from mid-March to mid-May 2019 and 50.9% during the same period in 2020 (P = 0.71). No differences were recorded between groups regarding the number of diseased vessels, TIMI score post-revascularization, and the number of devices used during the index procedure. The LVEF of the overall population was 50.0% IQR [43.0, 60.0]. Patients with a STEMI presentation during the COVID-19 lockdown period showed a trend towards a lower LVEF (45.0% IQR [40.0, 55.0]) as compared to patients in Groups A (50.5% IQR [40.8, 60.0]) and B (53.0% IQR [45.0, 60.0]). However, the difference across groups was not statistically significant (P = 0.09). MACE rates after the index procedure for patients admitted during the lockdown period were significantly higher than those observed in 2019 during the same period or the one recorded between January and mid-March 2020 (22.6% vs. 5.0% vs. 11.1%, respectively, P = 0.007). In a post-hoc analysis, a composite endpoint of mechanical complications, shock, or death was significantly higher during the COVID-19 lockdown period (Group A 1.7% vs. Group B 3.7% vs. Group C 13.2%, P = 0.03).

Discussion
Both systemic barriers to accessing emergent healthcare resources and, likely, patients' perception of the risk of contracting COVID-19 in the healthcare system have changed in recent months due to the global pandemic. Our data show for the first time that, in a metropolitan area with a significant burden of COVID-19 cases, there was no decrease in the number of ST-elevation myocardial infarctions, but the pandemic was associated with a significant increase in the delay prior to seeking medical attention, with serious negative consequences for patients. System performance after first medical contact, on the other hand, appears to have been unhampered by additional safety protocols related to COVID-19. There were no known instances of patients contracting COVID-19 during acute STEMI care, and the public should be made aware that emergency health services for serious conditions should not be avoided. This study represents the first Canadian report of changes in the pattern and consequences of STEMI presentation to specifically assess ischemic time both prior to and following contact with the medical system. Our first important finding is that the rate of STEMI did not decline during the pandemic. While it had been hypothesized that factors related to STEMI onset might be reduced during a lockdown, this was either simply not the case or was otherwise offset by other aggravating factors, such as increased psychological stress. It cannot be excluded, however, that there was in fact an increased number of STEMIs during the period. Although we appear to have captured all severe STEMI patients, it is possible that patients with uncomplicated STEMI may not have presented at all and therefore have yet to come to the attention of the medical system. Also, we cannot exclude coagulation abnormalities in patients with a STEMI and a concurrent COVID-19 infection, as systematic screening for COVID-19 infection was not done for all patients who presented to our hospitals.
(The majority of patients were considered low risk based on the absence of signs or symptoms of infection or a history of possible exposure or travel.) Some recent data suggest that STEMI patients with COVID-19 may present with a higher thrombus burden [4]. Whether this contributed to either the number or clinical severity of STEMI admissions in our area remains a hypothesis in need of exploration.

Secondly, significantly longer delays between symptom onset and FMC were recorded during the lockdown period, which is in line with recent reports from abroad [5]. Our data make plain that public health and political leaders must actively communicate the message that patients with symptoms of a heart attack should not delay seeking medical help for fear of COVID-19. Of note, there is no evidence in our cohort of patients contracting COVID-19 by accessing emergency healthcare services. Indeed, our study highlights that, during the lockdown period, delays in seeking care were associated with more advanced heart failure on presentation and significantly higher rates of in-hospital MACE. Moreover, the increase in MACE was primarily driven by the most serious complications, including a marked surge in mechanical complications, cardiogenic shock, and death. Nearly 1 in every 5 patients during the lockdown period had a complicated STEMI, including higher than historic rates of ventricular septal, free wall, and papillary muscle rupture.

Importantly, the use of protective equipment during the pandemic and the reorganization of pre-hospital and in-hospital care had no appreciable impact on FMC-to-device times. Therefore, maintaining a safe working environment for both essential workers and patients in the context of COVID-19, without sacrificing the expediency or quality of cardiac care, is clearly feasible. This aligns with recent data from London showing that, despite the COVID-19 outbreak, primary percutaneous coronary intervention (PCI) could be delivered in a timely fashion with a short door-to-balloon time according to existing guidelines [4,6].

It is important to note that this report may not be generalizable to non-urban settings or cities with a lower burden of COVID-19. Moreover, as the organization of STEMI services may well vary from one metropolitan area to another, caution must be exercised in interpreting the results. However, as future public health and healthcare system policy decisions will need to be tailored to each regional setting, taking into account population density, the baseline organizational structure of local emergent healthcare delivery, and the local burden of COVID-19, providing granular, regional-level data on the impact of recent policy decisions on the provision and quality of care is essential. Moreover, it appears likely that the local burden of disease would be an important predictor of patients' attitudes regarding whether to delay seeking medical attention for an acute coronary syndrome.

In conclusion, the rate of STEMI in the Greater Montreal Area appears unaffected so far by the COVID-19 pandemic. However, patients have delayed, to their detriment, seeking emergent medical treatment. At the very least, clinical suspicion of STEMI should remain high even in the context of the COVID-19 pandemic or any future large-scale sanitary crisis. Public health, political, and physician leaders must conduct awareness campaigns to ensure that patients with symptoms of a heart attack do not delay seeking care.
Moreover, such campaigns should stress that the system is able to provide prompt and effective care in a manner that is safe for both patients and healthcare workers. Finally, front-line healthcare workers should remain vigilant for potential mechanical complications of STEMI in patients with delayed presentation.

Funding statement
None.

Declaration of competing interest
There are no conflicts of interest to disclose.
Photocatalytic Degradation of Commercial Acetaminophen: Evaluation, Modeling, and Scaling-Up of Photoreactors

In this work, the performance of a pilot-scale solar CPC reactor was evaluated for the degradation of commercial acetaminophen, using TiO2 P25 as a catalyst. The statistical Taguchi method was used to estimate the best combination of initial pH and catalyst load while tackling the variability of the solar radiation intensity under tropical weather conditions through the estimation of the signal-to-noise ratios (S/N) of the controllable variables. Moreover, a kinetic law that included the explicit dependence on the local volumetric rate of photon absorption (LVRPA) was used. The radiant field was estimated by joining the Six Flux Model (SFM) with a solar emission model based on the clarity index (KC), whereas the mass balance was coupled to the hydrodynamic equations corresponding to the turbulent regime. For scaling-up purposes, the ratio of total area to total pollutant volume (AT/VT) was varied to observe the effect of this parameter on the overall plant performance. The Taguchi experimental design results showed that the best combination of initial pH and catalyst load was 9 and 0.6 g L−1, respectively. Also, full-scale plants would require far smaller AT/VT ratios than pilot or intermediate-scale ones. This information may be beneficial for reducing assembly costs when scaling up photocatalytic reactors.

Acetaminophen was selected as a model pollutant because it is a massively consumed drug worldwide. The excretion of this drug (in wastewater from hospitals and private households) and the disposal of unused medicine have caused acetaminophen to appear in surface waterbodies. A concentration range between 4.6 and 52 µg L−1 has been reported in previous studies carried out in several countries of Europe and America [6]. Nonetheless, it is expected that higher concentrations can be found in hospital wastewaters, although there are no reports of these concentrations in the literature. Furthermore, water contaminated with pharmaceuticals such as acetaminophen can cause hepatic damage in humans [7,8] and alter the equilibrium of aquatic ecosystems due to its toxicity [9,10]. As a result, there is a need to develop efficient technologies to remove emerging contaminants from industrial wastewater and water bodies.

Heterogeneous photocatalysis has proven to be an effective method for eliminating many emerging pollutants [11-14]. As an advanced oxidation process, heterogeneous photocatalysis uses sunlight as the promoter of the redox reactions responsible for removing the contaminants. The general mechanism of photocatalysis for degrading organic pollutants has been reported in several papers [15,16] and involves the generation of electron-hole pairs, which, in the presence of electron acceptors such as atmospheric oxygen, leads to the formation of potent oxidant agents that can destroy and mineralize organic matter. Although photocatalysis was discovered around three decades ago, worldwide applications at full scale are scarce or almost null. The estimation of the total footprint (i.e., the land occupation) required for operating photocatalytic reactors is thus a matter of study from the engineering point of view. To the best of the authors' knowledge, no studies have been reported on the scaling of solar photocatalytic reactors that consider pilot, middle, and full-scale schemes with regard to the total area required for their operation.
Besides, most of the existing photocatalytic reactors have been designed following empirical methods rather than strict mathematical modeling. This approach can be attributed to the complexity of modeling and simulating the simultaneous phenomena that take place during operation, i.e., photonics, hydrodynamics, and kinetics. Regarding photon absorption by the catalyst, the estimation of the radiant field can be a challenging task when the catalytic particles are suspended in the reactor, due to the scattering that takes place once the photons enter the reactor. This difficulty may be even greater when solar radiation acts as the photon source, because of its variability and form of propagation (direct or diffuse). The most common approaches for estimating the radiant field are the Discrete Ordinates Method (DOM) [17,18], Monte Carlo (MC) simulations [19-21], and the Six-Flux Model (SFM) [21]. The first two methods require far more computational effort than the SFM, which is based mainly on the assumption that when photons collide with catalytic particles, scattering occurs in the six directions of the Cartesian system. Despite the simplicity of the SFM, which is composed only of algebraic equations, several reports show satisfactory fitting to experimental data [21].

Concerning photocatalytic kinetics, the Steady-State Approximation (SSA) is usually applied to the total concentration of holes and hydroxyl radicals (OH•), according to the general mechanism of photocatalysis with TiO2 [22]. This strategy, first reported by Turchi and Ollis [15] and Alfano et al. [16], has led to several widely used photo-kinetic rate laws, such as the modified Langmuir-Hinshelwood equation. For estimating the effect of the radiation field in the kinetic law, many authors have explicitly included the radiation intensity or the local volumetric rate of photon absorption (LVRPA) in the kinetic equation [23,24], which can allow finding kinetic parameters independent of the reactor's geometry. However, to date, no strategy has been reported for finding kinetic parameters suitable for describing the photocatalytic treatment of pollutants independently of the natural variability and propagation form of the solar radiation.

In the present work, a strategy that combines simplicity and accuracy for modeling and simulating solar CPC photocatalytic reactors was used, with acetaminophen as the target molecule. A Langmuir-Hinshelwood-like equation derived from the SSA was used in the kinetic model, which has been shown to be adequate for describing many experimental data in the photocatalytic abatement of organic pollutants. Additionally, we applied the Six Flux Model and a solar emission model based on the clarity index (KC) [25,26] for estimating the LVRPA, and coupled it to the kinetic law in order to find kinetic parameters independent of the reactor's geometry. For solving the time-dependent mass balance, the whole system (reaction zone, recycle tank, and piping) was considered as a combination of a series of plug flow reactors (PFRs) and continuous stirred tanks under turbulent flow, and it was solved for the total organic carbon (TOC) concentration. Moreover, a Taguchi robust design was used for estimating the signal-to-noise ratio of each operating parameter value (catalyst load and initial pH). These values were determined for obtaining the highest mineralization of acetaminophen regardless of the variability of solar radiation.
After finding these photocatalytic kinetic constants, the performance of solar photoreactors at different scales was analyzed in terms of the theoretical footprint and the total area to total treating volume ratio (AT/VT).

Table 1 shows the TOC removal obtained after varying the initial pH, catalyst load, and solar accumulated energy. The highest (53.49%) and lowest (5.39%) TOC removals were both obtained with a pH of 5, and the performance was favored by a higher catalyst load (0.6 g L−1). It was expected that acidic pH would enhance the photocatalytic degradation in the case of acetaminophen. Regarding the solar accumulated UV energy, the degradation increased with higher values because of a higher quantity of available photons. In this case, the increase in TOC removal relative to cloudy days was higher than 50%. With a pH of 9, by contrast, the variability of the TOC removals was smaller than that observed with a pH of 5. Regarding the catalyst load, the best results were obtained with 0.6 g L−1 for an initial pH of 5 and with 0.3 g L−1 for an initial pH of 9. This result suggests an interaction between the catalyst load and the initial pH of the slurries.

Signal-to-Noise Ratios of Initial pH and Catalyst Load
The initial pH affects the physicochemical properties of the catalyst, including the surface charge, the size of aggregates, and the position of the conductance and valence bands [27-29]. The reported point of zero charge (pHzpc) for TiO2 Degussa is between 6 and 6.5 [29,30]. When the pH of the slurry is below the pHzpc, the surface of TiO2 acquires a positive charge, and vice versa. Therefore, when the initial pH is 5, the stronger electrostatic forces can enhance the attachment of anionic species derived from the primary target molecule or its intermediates. This phenomenon favors the adsorption of these species and their later oxidation [11,31,32]. However, Horst et al. [30] have shown that when the pH is very close to 6, the TiO2 particles aggregate with hydrodynamic diameters larger than those found in much more alkaline suspensions (i.e., pH = 9). Similarly, Vanegas et al. [33] have proven that when the pH is near 5, the agglomeration of titania is stronger than in suspensions with pH values around 8. The agglomeration of TiO2 particles reduces their useful area, which is believed to diminish the photocatalytic mineralization rates. As a result, due to the electrostatic forces and the agglomeration effects, the highest and lowest mineralization percentages of acetaminophen could have been obtained with the same value of initial pH (Table 1).

Regarding the catalyst load, when the initial pH was 5 and the TiO2 concentration was 0.3 g L−1, the availability of active sites might not have been enough for obtaining elevated TOC removal rates, due to the agglomeration effect mentioned before. On the contrary, when the catalyst load was 0.6 g L−1, the higher amount of TiO2 particles could have overcome the limitation imposed by their agglomeration. In any case, when the initial pH is 5, the instability of the TiO2 suspension may diminish the effectiveness of the photocatalytic process because that value is very close to its pHzpc. On the other hand, according to Table 1, high TOC removals could also be attained when the initial pH was set to 9. With this pH, the availability of OH− ions in solution increases.
As a result, we can expect that TOC removal rates rise because, according to the general mechanism of TiO2-based photocatalysis [22], the holes of the valence band react with OH− ions to generate OH•. Nevertheless, as mentioned before, when the solution pH is above the pHzpc, the catalyst surface charges negatively, and consequently the adsorption of OH− ions becomes more difficult. Probably, that is why the highest TOC removal was not obtained when working with this initial pH. In this case, the effect of catalyst load was opposite to that obtained with an initial pH of 5. The highest and lowest TOC removals were attained with 0.3 and 0.6 g L−1 of TiO2, respectively. One possible reason may be the clouding effect taking place in the slurry when the TiO2 concentration was 0.6 g L−1. This effect could have occurred when the pH was 9 due to the higher dispersion of catalytic particles [29,34] in the reactive media, which could block the photons' path inside the reactor. Therefore, with an initial pH of 9 and a catalyst load of 0.3 g L−1, the clouding effect could have been minimal, and higher TOC removals were obtained.

Table 2 shows the S/N ratios of the initial pH and catalyst load employed in these experiments. According to it, the performance of the photocatalytic system is more robust (a steadier response under high variability of the noise factor) when the initial pH is 9 (S/N = 31.33) and the TiO2 load is 0.6 g L−1 (S/N = 31.01). Concerning the catalyst load, this result differs from that estimated by the SFM approach in a previous work that used a solar CPC reactor and P25 as the catalyst [34]. However, the SFM calculation of the cited report did not include the effect of the initial pH nor the adsorption phenomenon. As mentioned above, there could be an interaction between the initial pH and the catalyst load, and the optimal values can differ depending on the substrate and other operating conditions. The same discussion can be applied to the effect of the initial pH. It is important to note that the pH affects the surface charge of the solid (as mentioned previously) and the attack orientation of the OH• as well, which can influence the oxidation rate significantly [26].

Table 2. Signal-to-noise ratios of the initial pH and catalyst concentration.
Variable        Level       S/N
Initial pH      5           19.43
Initial pH      9           31.33
Catalyst load   0.6 g L−1   31.01

The most relevant result is the discrepancy with other studies under similar operating conditions (initial pH and catalyst load) but using a controlled or fixed amount of accumulated UV energy. Whereas previous works [34-36] reported catalyst loads of around 0.35 g L−1 (closer to the lower catalyst load used in this study), the value recommended in this study is 0.6 g L−1. In fact, from Table 1, the TOC removal at a pH of 9 was higher with 0.3 g L−1, which is more consistent with what is reported in the literature for this kind of reactor [35,37]. However, since the target of the Taguchi experimental design is finding suitable operating conditions for robust operation (regardless of the variation of the solar radiation), the selection of 0.6 g L−1 is justified by the larger number of active sites available when UV photons are scarce (cloudy days) or when particle agglomeration reduces the surface area due to the pH effect. Furthermore, the initial pH of 9 is far from the pHzpc of P25, and therefore the apparent particle size becomes more stable, which improves the photocatalytic process.
Additionally, as the effects of photolysis and physical adsorption were negligible in all experiments (0.93-1.17% of TOC removal), it can be stated that the TOC removal is attributable mainly to the photocatalytic oxidation process.

TOC Removal Modeling
The TOC removal was modeled by coupling the hydrodynamics, photonics, kinetics, and mass balance in the photocatalytic reactor. The L-H parameters were obtained by fitting the experimental data to the mathematical model. The reaction time was standardized according to the commonly used t30W expression, which is a normalization of the time that considers continuous irradiation of 30 W m−2 over the reactive zone [24,38,39]. The L-H parameters were estimated through linear regression of the reciprocal values of the initial rates and initial concentrations (initial rates law), according to Equation (1). This equation is the reciprocal of the material balance expression associated with the batch-recirculating photocatalytic system. Figure 1 shows the fitted linear analysis for the initial reaction rates obtained with the three different initial TOC concentrations. This strategy allowed finding kinetic parameters independent of the radiation field, as described in Equations (2) and (3). In Equation (2), VRPA represents the integration of the LVRPA over the reactor volume, whereas the 1.2 × 10^4 factor was used as a conversion factor from ppm to mol L−1.

Although the kinetic parameters were obtained with data from the bulk of the liquid phase, they are still valid to represent this model because the external mass-transfer limitations between the bulk and the catalyst surface are negligible, due to the significant mass transport in the turbulent regime. As a result, the TOC found in the surroundings of the catalyst surface can be assumed to be the same as that found in the bulk of the solution. Besides, there are no internal mass-transfer limitations because the catalyst (TiO2 Degussa P25) is considered non-porous.
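As an illustration of the linearisation behind Equation (1), a minimal sketch of the reciprocal initial-rates fit is shown below; the rate values are placeholders, not the measured data:

```python
# Sketch of the reciprocal (Lineweaver-Burk-style) fit of initial rates
# against initial TOC concentrations, per Equation (1). The data arrays
# are illustrative placeholders, not the measured values.
import numpy as np

toc0 = np.array([40.0, 90.0, 150.0])       # initial TOC, ppm
r0 = np.array([1.1e-2, 1.9e-2, 2.4e-2])    # illustrative initial rates, ppm/min

# 1/r0 = slope * (1/TOC0) + intercept; the L-H constants follow from both.
slope, intercept = np.polyfit(1.0 / toc0, 1.0 / r0, 1)

K1 = intercept / slope       # binding constant (units follow the data)
k_lumped = 1.0 / intercept   # lumped kinetic term; per Equations (2)-(3),
                             # dividing out the VRPA contribution recovers kt
print(K1, k_lumped)
```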
Furthermore, as will be explained in Section 3.5, Equation (21) was integrated in order to obtain the TOC removal profile (Equation (4)), where L is the total length of the reactor and m was taken as 0.5, a value suitable for geographical zones close to the Earth's equator, where radiation intensities are high and there is good photon availability [22]. The boundary condition used to solve Equation (4) was TOC(z = 0) = TOCin, where TOCin represents the inlet TOC concentration of the reactor at a given time. This model was employed to predict the TOC abatement of the contaminant for the three different initial concentrations. The results are presented in Figure 2, which reveals a good fit of the model to the experimental data.

From the obtained results, the kinetic parameters can be considered valid for the range of initial TOC concentrations (40-150 ppm) and an initial pH of 9. Although they were estimated with a single value of catalyst load, the model can be evaluated with different catalyst doses because the SFM considers the optical thickness and the scattering albedo, which are functions of the catalyst load. Nonetheless, the simulations were carried out under a constant radiation flux of 30 W m−2, which is considered the average UV radiation flux received on a sunny day (10 a.m. to 2 p.m.) in northern latitudes. This value was selected considering previous works with solar radiation [24,34,39-41] and the difficulty of describing the solar radiation variability with the model used in this study.
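The t30W normalisation referred to above is commonly computed by rescaling each sampling interval by the measured UV flux relative to 30 W m−2 and by the irradiated-to-total volume ratio. The following minimal sketch implements this standard literature form; the paper does not reproduce the formula explicitly, so this specific expression is an assumption here:

```python
# Sketch of the standard t30W normalisation: accumulated illumination time
# rescaled to a constant 30 W m^-2 of UV, weighted by the ratio of the
# irradiated volume (V_i) to the total volume (V_T). This specific form is
# a common literature convention, not reproduced explicitly in the paper.
def t30w(times_min, uv_w_m2, v_irradiated, v_total):
    """times_min: sampling times (min); uv_w_m2: mean UV flux per interval."""
    t30, t_prev = 0.0, 0.0
    series = []
    for t, uv in zip(times_min, uv_w_m2):
        dt = t - t_prev
        t30 += dt * (uv / 30.0) * (v_irradiated / v_total)
        series.append(t30)
        t_prev = t
    return series

# Example: 30 min intervals at varying UV flux, 22 L irradiated of 40 L total.
print(t30w([30, 60, 90], [25.0, 33.0, 28.0], 22.0, 40.0))
```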
Effect of Catalyst Load and Total Treated Volume on Plant Scaling-Up
The estimated kinetic parameters were used for simulating large-scale photocatalytic reactors for the TOC removal of acetaminophen. In order to compare the potential size of full-scale plants, the effect of the catalyst load and the total pollutant volume on the mineralization, with respect to the AT/VT ratio, was studied. Figure 3 shows the TOC removal profiles in a system with a total volume of 5000 L, with two different catalyst loads (0.3 and 0.6 g L−1). The simulations were done under the same conditions described above (Equations (4), (5), and (8)-(11)), and the strategy is shown in Figure 4. The volume of 5000 L represents the average daily wastewater volume generated in medium-sized Colombian hospitals or some industrial facilities. The time t30W was set at 110 min, and the total area was estimated based on the footprint of a single CPC module (4.1 m2, as seen in Figure 5). This footprint includes the area that would be covered by the whole CPC structure and the space between each module in a large-scale plant (30 cm of spacing lengthwise and crosswise).

The plot shows that the photocatalytic performance is better when using 0.3 g L−1 of the catalyst. This result was expected because the same photon-absorption model reported in ref. [34] was applied. As stated before, the optimal catalyst load in CPC reactors (regarding the LVRPA) was 0.3 g L−1. As mentioned before, with an initial pH of 9 and 0.3 g L−1 of catalyst load, the best performance for TOC removal was obtained experimentally. Consequently, the simulations shown in Figure 3 are congruent with the experimental data. This optimal value for catalyst load is consistent with the results obtained in previous works, where solar CPCs of similar diameters were used under sunny weather conditions [35,37,39].

The observed behavior in both plots (0.3 and 0.6 g L−1 in Figure 3) is similar for small AT/VT ratios. However, better performance is shown for 0.3 g L−1 when the AT/VT ratio increases. For example, if a TOC removal of 50% is needed, a full-scale operation with 0.6 g L−1 of TiO2 would require an AT/VT ratio of 175 m−1; but if it operates with 0.3 g L−1 of TiO2, the ratio would be 120 m−1. The difference becomes more significant at higher TOC removals. This tendency can be explained by the relatively low mineralization rates usually obtained in photocatalytic processes. At small AT/VT ratios, there is no significant difference in TOC removal performance. When the AT/VT ratio increases, the residence time increases as well; therefore, the conversion of the organic matter (via photocatalytic oxidation) is higher. Nevertheless, the mixing effect with the recycling-feeding tank (VT) acts as a damping stage of the TOC removal process. Therefore, the overall degradation rate can become slower depending on the AT/VT ratio.

From the above observation, the effect of the total treated volume (VT) on the mineralization was evaluated as well. The photocatalytic abatement of acetaminophen was simulated with three different contaminant volumes: 50, 500, and 5000 L (Figure 6), which represent, respectively, the volumes that can be treated in pilot, intermediate, and full-scale plants.

Figure 6 shows that the TOC removals for the 5000 L curve are much higher than the ones corresponding to the 50 and 500 L curves, whose behavior is very similar.
These results show that the area (or number of CPC modules) needed for obtaining a specific TOC removal is not directly proportional to the total volume. For example, if a TOC removal of 30% is required when treating a 500 L effluent (containing acetaminophen), then the most appropriate AT/VT ratio would be 200 m−1. In contrast, if the volume of the effluent is 5000 L, then this ratio is reduced to approximately a third (60 m−1). The difference becomes substantially higher as the required TOC removal increases. As a result, this information may be tremendously useful when scaling photocatalytic processes, as it could avoid unnecessary monetary investment in the construction and operation of the reactors.

In all the simulations, the flow rate was held constant at 30 L min−1, which was the same used in the experimental runs carried out in the pilot-scale photoreactor. This assumption can be made because the CPC photoreactors are modular units that can be arranged in series. Therefore, a larger-scale plant would only require a higher number of CPC reactors of the same size and operating conditions as the CPC used in the experimental pilot-scale plant. Nonetheless, larger volumes with the same flow rate yield higher residence times, which can improve the TOC removal, as seen in Figure 6. Then, in order to scale up and design full-scale plants with solar CPC photoreactors, it is not enough to estimate an ideal AT/VT ratio for attaining a given TOC removal; a more in-depth analysis is required, which can be done with the model presented here. At large scale, the effect of the initial concentration of acetaminophen was insignificant: several simulations conducted with [TiO2] = 0.6 g L−1 and VT = 5000 L at 41.6 ppm, 87.6 ppm, and 149.8 ppm showed almost null differences in TOC removal (results not presented here).

Reagents and Chemicals
A stock solution of the contaminant was prepared with commercial liquid acetaminophen (Genfar®-Sanofi, Bogota, Colombia). TiO2 Aeroxide P-25 (Evonik, Essen, Germany) was employed as the photocatalyst in all the experiments (primary particle size ~21 nm by TEM; specific surface area 50 m2 g−1 by BET; composition 80% anatase and 20% rutile by X-ray diffraction). The initial pH was adjusted with solutions of NaOH 0.1 N and HCl 0.1 N (Merck, Darmstadt, Germany).

Equipment
The experimental runs were carried out in the Solar Photocatalysis Laboratory at Universidad del Valle (Cali, Colombia, 3°29′N latitude). Figure 5 exhibits a schematic representation of the CPC photoreactor used in this study. It consisted of 10 Duran glass tubes (1200 mm in length, 32 mm o.d., 1.4 mm wall thickness) that were placed upon a series of involutes made of aluminum (reflectance: ψ = 0.85), as seen in Figure 5b. The reactor was operated under a batch regime with recirculation, using a 40 L recycle feed tank and a centrifugal pump (0.5 HP of nominal power) that delivered 30.2 L min−1. This experimental setup made it possible to keep the slurry (fluid and catalyst) saturated with oxygen, because whenever the slurry left the pipe, it was exposed to the surrounding air before entering the tank. The whole piping and accessories were made of PVC, 1 in. in diameter. The TOC concentration was followed with a TOC analyzer (Shimadzu 5050A, Sao Paulo, Brazil), whereas the pH was measured with an Orion 4-Star pH-meter (Thermo Scientific-ARC Analisis, Bogota, Colombia). Additionally, the solar UV intensity and the corresponding accumulated energy in the 295-380 nm range were measured with a UV A+B radiometer (Solardetox-Acadus S50, Barcelona, Spain).

Experimental Design
Due to the weather variability (sunny or cloudy days in tropical regions), the photocatalytic mineralization of acetaminophen was evaluated with the Taguchi experimental design [42,43]. This is a robust design that allows finding the most appropriate operational conditions that are insensitive to noise (non-controllable factors), through the estimation of the signal-to-noise ratios (S/N) of the controllable variables.
In this study, the initial pH and catalyst load were chosen as the controllable factors because they have been reported as two of the most influential variables in the performance of heterogeneous photocatalytic reactions [27,37,44]. Similarly, the accumulated solar UV energy is another important parameter, but it cannot be controlled, since it depends on the geographical location, weather conditions, and time of day. As a result, the accumulated solar UV energy was considered the noise factor. The corresponding signal-to-noise ratios (S/N) of each controllable variable were estimated with the "more is better" equation from the Taguchi robust experimental design, Equation (6), because our purpose was to maximize the mineralization of acetaminophen:

$$S/N = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{Y_i^{2}}\right) \quad (6)$$

In Equation (6), S/N stands for the signal-to-noise ratio of each level of the experimental factors, while $Y_i$ and $n$ represent the percentage of mineralization and the number of experiments associated with each level. The TOC removal was calculated with Equation (7):

$$\%\,\mathrm{TOC\ removal} = \frac{\mathrm{TOC}_i - \mathrm{TOC}_o}{\mathrm{TOC}_i}\times 100 \quad (7)$$

where TOCi and TOCo represent the TOC at the beginning and the end of each experimental run, respectively.

Procedure
The initial TOC concentration of acetaminophen was set to 40 ppm to simulate the strength of the wastewater generated in the washing of containers and glass equipment at the Drugs Laboratory of the Universidad de Cartagena, Colombia. The initial pH was set to 5 and 9 to avoid extreme conditions of acidity or alkalinity, which would require additional amounts of reagents for neutralization. The catalyst loads were 0.3 and 0.6 g L−1, which are within the range reported in the literature [45,46]. In the first stage of the experimental runs, samples from the reactor were taken at the beginning of the process and after the amount of UV energy reached 19.14 and 38.28 W h m−2. These values represent the average quantity of accumulated solar UV energy received in Cali during a 3-h period on cloudy and sunny days, respectively. Subsequently, the samples were taken and filtered using 0.45 µm membranes (Merck Millipore®, Cartagena, Colombia) for measuring the removal of TOC. Afterward, three different initial concentrations (40, 90, and 150 ppm of TOC) were considered for finding the kinetic parameters of the photocatalytic process. In this case, the initial pH and catalyst dosage were set to the values that exhibited the highest S/N ratios, as described in Section 3.3, and the reactor was operated until it reached 35 W h m−2 of accumulated solar UV energy. Here, the samples were taken at the beginning and the end of the experiments, and every 5 W h m−2 of accumulated UV energy. These runs were conducted under sunny weather conditions, and the flow rate was held at 30.2 L min−1 to ensure turbulent flow (Reynolds number = 19,420). In all cases, in order to achieve adsorption equilibrium, the slurry was recirculated for 20 min under dark conditions.

Modeling of the Solar CPC Photoreactor
The modeling approach consisted of coupling the hydrodynamics with a photocatalytic kinetic model (including the LVRPA) in a time-dependent mass balance, as previously described in ref. [23]. The hydrodynamics was described by the equations reported in refs. [24,47], among which Equation (11) relates the maximum and average axial velocities:

$$\frac{v_{z,max}}{v_{z,average}} = \frac{(n+1)(2n+1)}{2n^{2}} \quad (11)$$

in which r is the radial coordinate, n is a hydrodynamic parameter, and f is the friction factor.
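As a small worked example of the Taguchi criterion defined in Equation (6) above, the following sketch computes the "more is better" S/N ratio; apart from the two extreme removals quoted in Table 1, the values are placeholders:

```python
# Sketch of the "more is better" signal-to-noise ratio of Equation (6):
# S/N = -10 * log10( (1/n) * sum(1/Y_i^2) ), where Y_i are the TOC
# removals (%) observed at one level of a controllable factor.
import math

def sn_more_is_better(y):
    return -10.0 * math.log10(sum(1.0 / yi**2 for yi in y) / len(y))

# Illustrative removals per pH level; only 5.39 and 53.49 are quoted in
# Table 1, the other values are placeholders.
print(sn_more_is_better([5.39, 30.0, 53.49]))   # pH 5 runs
print(sn_more_is_better([30.0, 38.0, 42.0]))    # pH 9 runs
```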
Further, the local volumetric rate of photon absorption (LVRPA) was estimated with Equation (12) [21,24,34], which is derived from the SFM and adapted for a cylindrical configuration. The central assumption of this model is that the scattering phenomena take place along the six Cartesian coordinates, which reduces the complexity of solving the photonic balance within the photocatalytic reactor. According to the SFM, the LVRPA is

$$\mathrm{LVRPA} = \frac{I_{0}}{\lambda_{\omega_{corr}}\,\omega_{corr}\,(1-\gamma)}\left[\left(\omega_{corr}-1+\sqrt{1-\omega_{corr}^{2}}\right)e^{-r_{p}/\lambda_{\omega_{corr}}}+\gamma\left(\omega_{corr}-1-\sqrt{1-\omega_{corr}^{2}}\right)e^{\,r_{p}/\lambda_{\omega_{corr}}}\right] \quad (12)$$

where $I_0$ corresponds to the solar UV radiation flux that hits the reactor wall (either direct or diffuse radiation), whereas $r_p$ is a parameter considered in the SFM associated with the photons' traveling path [21,24], and $\gamma$, $\omega_{corr}$, and $\lambda_{\omega_{corr}}$ are auxiliary parameters of the SFM [21,24]. The simulation of the radiant field and the calculation of the LVRPA were done in a Visual Basic routine that coupled the ray-tracing technique with the SFM and a radiant emission model, as previously reported in refs. [24,34,40]. As the LVRPA appears as the photonic contribution in the kinetic law, it is feasible to find kinetic parameters independent of the radiation field.

The mass balance was solved in terms of the TOC and was coupled to a hydrodynamic model for the turbulent regime [24,47]. Moreover, the kinetics contribution was described with a Langmuir-Hinshelwood (L-H) equation with an explicit dependence on the LVRPA, as shown on the right-hand side of Equation (21). The entire reactor was divided into a large number (2500) of plug flow reactors (PFRs) of length L, each of them associated with the (r,θ) coordinates of the cross-sectional area. A higher number of PFRs would represent a significant increase in computing time (as observed in previous simulations not shown in this study) without a visible improvement in the accuracy of the model. In that case, the mass balance can be described by the following equation:

$$Q\,\frac{d\,\mathrm{TOC}}{dV_{R}} = -k_{t}\,(\mathrm{LVRPA})^{m}\,\frac{K_{1}\,\mathrm{TOC}}{1+K_{1}\,\mathrm{TOC}} \quad (21)$$

where $k_t$ and $K_1$ represent the kinetic and binding constants, respectively. In each PFR, the flow rate Q was equivalent to the product of the cross-sectional area and the average axial velocity ($A_R v_z$). Besides, $dV_R$ in Equation (21) could be replaced by $A_R\,dz$, so that we obtained a differential equation as a function of the axial direction z only. Thereby, it was only necessary to estimate the average axial velocity profile in terms of the radial coordinate (r).

Although the first option for modeling this kind of photoreactor is to consider it as a PFR, this is not entirely accurate due to the turbulent regime of the system. The mixing of the streamlines does not allow a well-defined velocity profile to be found; therefore, since the concentration depends on the velocity due to the convective effects, the turbulent regime must be considered in the mass balance of the reactor. The CSTR provides a simple way of modeling this part of the mass transfer phenomenon with more accuracy than the PFR alone. Considering the above assumption, each PFR was divided into a series of small reactors with a length of ∆z. In every simulation step, Equation (21) was solved for each small reactor, starting from the plane z = 0 down to z = ∆z. Then, in order to consider the mixing effect of the turbulent regime, the TOC profile was averaged in the z = ∆z plane. This averaging step, shown in Equation (22), amounts to assuming that a continuous stirred tank (CSTR) was placed in that position.
$$\overline{\mathrm{TOC}_{out}} = \frac{1}{Q}\int_{0}^{2\pi}\!\!\int_{0}^{R} r\,v_{z}\,\mathrm{TOC}_{out}(r,\theta)\,dr\,d\theta \quad (22)$$

Afterward, this average was taken as the inlet concentration for the next PFR, located from z = ∆z to z = 2∆z, with ∆z = L/100. Subsequently, Equation (21) was solved for each reactor found in the (z = ∆z, z = 2∆z) interval, and a new CSTR was virtually placed at z = 2∆z. Finally, these steps were repeated successively until the total length of the reactor was covered (z = L). This modeling approach is described graphically in Figure 4, in which n corresponds to the number of divisions of the total length (L). The time dependence of the photocatalytic process was treated as a step dependence, associated with the number of passes (npass) that the slurry makes through the reactor. This strategy has been used several times for modeling photocatalytic recirculation systems [24,37,48]. The change in TOC concentration was estimated for each pass accordingly.

Conclusions
The Taguchi experimental design was applied for analyzing the TOC removal of commercial acetaminophen in a solar CPC photocatalytic reactor. It showed that the most favorable conditions for robust operation were an initial pH of 9 and a catalyst load of 0.6 g L−1. Although these results differ from reported studies under similar conditions, the variation of the solar radiation and the interaction of the pH with the catalyst load explain this discrepancy. On the other hand, the kinetic parameters obtained through the mathematical model proposed in this work (kT = 7.5874 × 10^−8 mol L−1 s−1 W−0.5 m1.5 and K1 = 109.52 L mol−1) can be used for scaling purposes, since the model had a specific contribution of the photonic absorption. Furthermore, large-scale plants require smaller AT/VT ratios than intermediate and pilot-scale schemes. This result is reasonable because the larger the scale, the higher the residence time, and therefore the conversion is enhanced. Therefore, in order to save monetary resources, a careful analysis based on these results should be made before deciding to scale photocatalytic reactors.

Author Contributions: Déyler Castilla-Caballero and José Colina-Márquez analyzed the data and ran the simulations of the kinetic model; Fiderman Machuca-Martínez supplied the reactants and the equipment for the experimental runs; and Ciro Bustillo-Lecompte collaborated with the paper writing and editing.
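To make the PFR-plus-virtual-CSTR marching scheme of the modeling section concrete, a minimal sketch is given below. Only the kinetic constants, the ppm-to-mol conversion factor, the tube length, and the number of axial divisions are taken from the text; the LVRPA field, velocities, and flow weights are placeholders rather than the actual SFM and hydrodynamic outputs:

```python
# Sketch of the axial marching scheme: each axial slice is treated as a
# bundle of PFRs over the (r, theta) cross-section, stepped over dz with
# the L-H/LVRPA kinetics (Equation (21)), then flow-averaged as a virtual
# CSTR (Equation (22)). LVRPA and velocity fields are placeholders.
import numpy as np

kt, K1, m = 7.5874e-8, 109.52, 0.5  # kinetic constants from the Conclusions
L, n_z = 1.2, 100                   # tube length (m) and axial divisions (L/100)
dz = L / n_z

def rate(toc_mol, lvrpa):
    # L-H rate with explicit LVRPA dependence (mol L^-1 s^-1).
    return kt * lvrpa**m * K1 * toc_mol / (1.0 + K1 * toc_mol)

def one_pass(toc_in, lvrpa_rt, v_z, w):
    """toc_in: inlet TOC (mol/L); lvrpa_rt, v_z: per-(r,theta) arrays;
    w: flow-rate weights (r*v_z*dA/Q), assumed to sum to 1."""
    toc = np.full_like(lvrpa_rt, toc_in)
    for _ in range(n_z):
        # PFR step along each streamline: v_z * dTOC/dz = -rate.
        toc -= rate(toc, lvrpa_rt) / v_z * dz
        # Virtual CSTR: flow-weighted average over the cross-section.
        toc[:] = np.sum(w * toc)
    return toc.flat[0]

# Placeholder radial field: 50 streamlines with arbitrary LVRPA/velocities.
lvrpa = np.linspace(5.0, 40.0, 50)     # W m^-3 (illustrative)
vz = np.full(50, 0.6)                  # m s^-1 (illustrative)
w = np.full(50, 1.0 / 50)              # equal flow weights (simplified)
print(one_pass(40.0 / 1.2e4, lvrpa, vz, w))  # 40 ppm via the 1.2e4 factor
```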
Prevalence and risk factors for long COVID among adults in Scotland using electronic health records: a national, retrospective, observational cohort study

Summary
Background
Long COVID is a debilitating multisystem condition. The objective of this study was to estimate the prevalence of long COVID in the adult population of Scotland, and to identify risk factors associated with its development.

Methods
In this national, retrospective, observational cohort study, we analysed electronic health records (EHRs) for all adults (≥18 years) registered with a general medical practice and resident in Scotland between March 1, 2020, and October 26, 2022 (98-99% of the population). We linked data from primary care, secondary care, laboratory testing and prescribing. Four outcome measures were used to identify long COVID: clinical codes, free text in primary care records, free text on sick notes, and a novel operational definition. The operational definition was developed using Poisson regression to identify clinical encounters indicative of long COVID from a sample of negative and positive COVID-19 cases matched on time-varying propensity to test positive for SARS-CoV-2. Possible risk factors for long COVID were identified by stratifying descriptive statistics by long COVID status.

Findings
Of 4,676,390 participants, 81,219 (1.7%) were identified as having long COVID. Clinical codes identified the fewest cases (n = 1,092, 0.02%), followed by free text (n = 8,368, 0.2%), sick notes (n = 14,469, 0.3%), and the operational definition (n = 64,193, 1.4%). There was limited overlap in cases identified by the measures; however, temporal trends and patient characteristics were consistent across measures. Compared with the general population, a higher proportion of people with long COVID were female (65.1% versus 50.4%), aged 38-67 (63.7% versus 48.9%), overweight or obese (45.7% versus 29.4%), had one or more comorbidities (52.7% versus 36.0%), were immunosuppressed (6.9% versus 3.2%), shielding (7.9% versus 3.4%), or hospitalised within 28 days of testing positive (8.8% versus 3.3%), and had tested positive before Omicron became the dominant variant (44.9% versus 35.9%). The operational definition identified long COVID cases with combinations of clinical encounters (from four symptoms, six investigation types, and seven management strategies) recorded in EHRs within 4-26 weeks of a positive SARS-CoV-2 test. These combinations were significantly (p < 0.0001) more prevalent in positive COVID-19 patients than in matched negative controls. In a case-crossover analysis, 16.4% of those identified by the operational definition had similar healthcare patterns recorded before testing positive.

Interpretation
The prevalence of long COVID presenting in general practice was estimated to be 0.02-1.7%, depending on the measure used. Due to challenges in diagnosing long COVID and inconsistent recording of information in EHRs, the true prevalence of long COVID is likely to be higher. The operational definition provided a novel approach but relied on a restricted set of symptoms and may misclassify individuals with pre-existing health conditions. Further research is needed to refine and validate this approach.

Funding
Chief Scientist Office (Scotland), Medical Research Council, and BREATHE.
Linked datasets
Pseudonymised identifiers from National Health Service (NHS) Scotland's Community Health Index (CHI) were used to link the study datasets.

Supplementary methods
Detailed description of the development of the operational definition for long COVID
To develop an operational definition that could be used to identify individuals as having long COVID or not, we used matched analysis to identify individual indicators of long COVID and then investigated how those indicators cluster to form one or more phenotypes for long COVID, as depicted in Figure S1.

Preparing the matched cohort
We began by preparing a matched cohort consisting of pairs of individuals with positive and negative RT-PCR test results for SARS-CoV-2, and with the same propensity to receive a positive RT-PCR test in a given month. We used time-varying matching in month-long intervals from 1 March 2020 until 30 April 2022 (when widespread RT-PCR testing ended in Scotland). In each time period, we matched individuals whose first positive RT-PCR test was recorded during the period (exposed group) to individuals whose first negative RT-PCR test was recorded during the period, including only those individuals who had not previously tested positive (control group). Individuals were matched on their propensity to test positive for SARS-CoV-2 during the period under investigation.

We used nearest neighbour matching on propensity scores with a calliper of 0.8 standard deviations, coupled with exact matching on week of test and age in years (we used individual years of age up to 79, then two-year age bands from 80-89, five-year bands for 90-99, and a single band for those aged 100 or older). Individuals with a positive RT-PCR test were eligible to be used as controls up until the date at which they tested positive. In the event that an exposed case had more than one candidate control, the control whose propensity score was closest to the exposed case's score was selected. We included the restriction that each individual could be used as a control no more than once. Balance diagnostics (Figure S2) were used to confirm the adequacy of the matching.

Matched analysis
Matched pairs were jointly censored if: either individual died before the end of the follow-up period; the control tested positive for COVID-19; or the exposed individual was reinfected, as identified by a second positive RT-PCR test at least 42 days after the initial positive test (a cut-off of 42 days was selected to allow for persistence of viral material from the original infection, following advice provided by Public Health Scotland).
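A minimal sketch of one monthly round of the propensity matching described above could look as follows; the covariate and column names are hypothetical, and the greedy nearest-neighbour loop is a simplification rather than the actual matching software:

```python
# Sketch of one monthly matching round: estimate the propensity to test
# positive with logistic regression, then greedy nearest-neighbour matching
# within a calliper of 0.8 SD of the score, exact on age band and test
# week. Column names (`positive`, `age_band`, `test_week`, and X_COLS,
# assumed numerically encoded) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

X_COLS = ["age", "sex", "simd_quintile", "n_comorbidities"]  # illustrative

def match_month(df: pd.DataFrame) -> list[tuple]:
    model = LogisticRegression(max_iter=1000).fit(df[X_COLS], df["positive"])
    df = df.assign(ps=model.predict_proba(df[X_COLS])[:, 1])
    calliper = 0.8 * df["ps"].std()

    pairs, used = [], set()
    for _, case in df[df["positive"] == 1].iterrows():
        pool = df[(df["positive"] == 0)
                  & (df["age_band"] == case["age_band"])
                  & (df["test_week"] == case["test_week"])
                  & (~df.index.isin(used))]
        if pool.empty:
            continue
        gap = (pool["ps"] - case["ps"]).abs()
        best = gap.idxmin()
        if gap.loc[best] <= calliper:   # enforce the calliper
            used.add(best)              # each control used at most once
            pairs.append((case.name, best))
    return pairs
```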
We fitted individual Poisson regression models to estimate adjusted rate ratios (aRR) for exposed cases, relative to control cases, for each of our dependent variables. The dependent variables captured counts of various clinical interactions recorded in EHRs within 4-12 weeks and >12-26 weeks of the exposed case's test date in each matched pair. The clinical interactions we considered included: 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records, listed in Table S1); 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to prescriptions that had not been dispensed in the 12 months prior to the test date used for matching, listed in Table S2: British National Formulary (BNF) sub-paragraph codes); and seven indicators of health service use (including counts of: GP visits, hospital admissions, outpatient attendances for respiratory conditions, A&E visits, out-of-hours encounters, intensive care unit (ICU) admissions, and NHS 24 telehealth interactions). Each model included an offset to account for variation in the number of days of follow-up. To adjust for any residual imbalance remaining after matching, all predictors used in the propensity score estimation were included as covariates. The quasi-Poisson variant of Poisson regression was used to adjust for the possibility of overdispersion. We adjusted p-values to reduce the false discovery rate, using Benjamini and Hochberg's method.2 Dependent variables that were recorded in fewer than five exposed or control cases' EHRs were removed from the analysis. Figure S3 and Figure S4 present the results of the matched analysis, comparing individuals with a positive RT-PCR test to those with a negative RT-PCR test, 4-12 weeks and >12-26 weeks following positive cases' test dates, respectively. Figure S5 and Figure S6 present the equivalent results comparing positive cases to controls who had not yet taken an RT-PCR test. All clinical interactions that occurred at a significantly higher rate in the analysis comparing the exposed group to controls with a negative RT-PCR test also occurred at a significantly higher rate relative to controls who had not yet taken an RT-PCR test (adjusted p < 0.05). As a sensitivity test, we repeated the matched analysis, stratified by periods when the wild, Alpha, Delta and Omicron SARS-CoV-2 variants were dominant. The results presented in Figure S7-Figure S10 reveal similar patterns across the earlier periods and a reduction in significantly higher rates of clinical interactions in the 4-12 and >12-26 weeks following testing for positive cases during the Omicron period. Significantly lower rates of the 'Covid' indicator during the Delta and Omicron periods in the 4-12 and >12-26 weeks following testing may indicate that fewer positive cases were captured in RT-PCR testing data during these periods. Cluster analysis To identify one or more phenotypes for long COVID, we performed cluster analysis on the indicators that occurred at a significantly higher rate among individuals with a positive RT-PCR test relative to those with a negative RT-PCR test in the 4-12 or >12-26 weeks after testing. We tested for clusters of indicators among exposed cases only.
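Before the clustering step is described below, the per-indicator rate-ratio estimation above can be sketched. The study's own tooling is not specified beyond R; this minimal Python sketch uses statsmodels, with hypothetical column names ("exposed", "followup_days") standing in for the real EHR fields.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def rate_ratio(df, outcome, covariates):
    """Quasi-Poisson rate ratio for one clinical-interaction count.

    df must contain the outcome count, a binary 'exposed' indicator,
    'followup_days', and the matching covariates (names hypothetical).
    """
    X = sm.add_constant(df[["exposed"] + covariates])
    model = sm.GLM(
        df[outcome], X,
        family=sm.families.Poisson(),
        offset=np.log(df["followup_days"]),  # offset for days of follow-up
    )
    res = model.fit(scale="X2")  # Pearson-chi2 scale = quasi-Poisson dispersion
    return np.exp(res.params["exposed"]), res.pvalues["exposed"]

# One model per dependent variable, then Benjamini-Hochberg FDR adjustment:
# results = {y: rate_ratio(matched, y, covs) for y in outcomes}
# pvals = [p for _, p in results.values()]
# reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```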
For the cluster analysis, counts of each indicator were normalised prior to clustering. We performed hierarchical clustering using Gower's measure of distance to calculate the dissimilarity between indicators, coupled with Ward's linkage method. We calculated Dunn's Index and the average silhouette widths for two to twelve clusters to identify the optimal number of clusters in terms of internal validity. We then plotted dendrograms colour-coded by cluster, shown in Figure S11 and Figure S12. For sensitivity, we repeated the analysis using k-medoids clustering (also known as partitioning around medoids (PAM)). As before, we calculated Dunn's Index and the average silhouette widths for two to twelve clusters to identify the optimal number of clusters in terms of internal validity. The optimal cluster solutions are presented in Table S3 and Table S4. As discussed in the main text, imbalance in the frequencies of clinical outcomes that are and are not automatically coded in EHRs, coupled with sparse recording of outcomes that are not automatically coded, necessitated that we adopt an alternative approach to developing our operational definition, informed by clinical practice (Figure 2). PPI = patient and public involvement. Aim The aims of patient and public involvement (PPI) in this study were to: (1) embed patient and public perspectives and information needs into project decision-making; (2) ensure that the experiences of people with Long Covid were incorporated into the study design; and (3) contribute to shared best practice in PPI. Methods This study uses routinely collected health data from the "Early Pandemic Evaluation and Enhanced Surveillance of COVID-19" (EAVE II) platform, including clinical codes, free text from patient notes, sick notes and prescribing. The initial research bid was reviewed in October 2020 by our Lay Co-Investigator (Weatherill), selected from a wide pool of regular patient and public contributors due to his PPI experience and understanding of healthcare data in Scotland via co-leadership of the EAVE II Public Advisory Group (PAG, n=15). The PPI Coordinator (Woolford) recruited two patient partners (Batchelor, White) in July 2021 from the Long Covid Scotland Action Group, on the basis of research interest and lived experience of Long Covid. The resulting PPI Team have been involved in co-producing a PPI Strategy; steering the project; commenting on the analysis protocol; designing and releasing a patient survey to inform analysis interpretation; reviewing plain English summaries of project outputs; assisting in the development of public- and policy-facing documents released in conjunction with the outputs of this study; and authoring this GRIPP2 Appendix.
This work was carried out remotely, either using videoconferencing (Zoom, with minutes produced from each recording) or asynchronously via email. Public members of the PPI Team were paid for time and expertise shared in line with National Institute for Health and Care Research (NIHR) guidelines,3 with appropriate paperwork issued to prevent compromise of any state financial support received. Role Descriptions, Terms of Reference and PPI Objectives and Timelines were co-produced and agreed by the PPI Team shortly after recruitment. Results The newly formed PPI Team decided on aims for the project as described above, and deliverables throughout the research cycle to achieve these, summarised in Table S7. Analysis and interpretation: Share results from Long Covid Scotland with analysts to inform interpretation; design and carry out consultation with people with Long Covid to select features for the prediction model from a patient perspective. Dissemination: Review public-facing outputs to produce plain English resources and identify potential questions; collaborate with staff to provide written contributions for academic publications. Implementation: Provide steer on appropriate messaging and content to be released in policy briefing(s) and in any supporting media materials. Evaluation: Evaluate the PPI element of the project in the final stages; share this work by means of a PPI report or paper. Following this, the PPI Team met with the primary analyst at the project's design phase to discuss the analysis protocol. Potential shortcomings, including a lack of data from private referrals for Long Covid, were highlighted as part of this involvement. To further the collaborative design of the project, a survey of Long Covid Scotland members led by White was reviewed by experienced clinical and analytical members of the project team before release, particularly in relation to questions on demographic factors and symptoms. The survey was designed to give a richer understanding of the physical, mental, relational and financial impacts of Long Covid on members of the group, beyond what could be established through routinely collected data. The findings, based on 222 in-depth responses from across Scotland, were shared by CW with our team and via the Scottish Parliament's Cross-Party Group (CPG) for Long Covid.4 They are available through the 2022 report 'Hearing Our Voices - Long Covid: The Impact on Our Lives',5 and have been used to inform refinement of the analysis design and results interpretation. This includes confirmation that survey symptoms were consistent with operational definition results, but also acknowledgment of the challenge of capturing commonly reported symptoms such as cognitive dysfunction in clinical codes. The PPI Team have attended a total of 14 Steering Group meetings and 4 dedicated PPI meetings, to steer and comment on project development. Key conversations included discussions of inclusion criteria; how operational definitions can help to address the issue of identifying Long Covid patients who may not have had access to testing; whether different SARS-CoV-2 variants pose a higher Long Covid risk; patient interactions with GPs and symptom curation; lobbying for more timely data access; and discussing the timing and sensitivity of disseminating project results to policymakers and the public.
As new data and preliminary results emerged, it became clear that our initial deliverable of involving a wider group of Long Covid Scotland members in interpreting the results of the cluster analysis would be impractical, as there were insufficient numbers of patients with multiple codes in the cohort to generate clear clusters. Instead, we have committed to involving patient contributors in feature selection for the prediction model, alongside clinical and analytical experts. As part of ongoing conversations with our public contributors on dissemination of results, we have agreed to collaborate with key policy stakeholders including Public Health Scotland, the Scottish Government and the Chief Scientist Office (CSO). Discussion and conclusions As part of the development of an operational definition of Long Covid, patient and public contributions have shaped the analysis design, project steering and results interpretation in a significant way. The sharing of lived experience has been of particular benefit to understanding how results derived from routinely collected data fit into the wider context of patient interaction with the healthcare system. Involvement is also helping to define how we will disseminate the study and contribute to policy implementation in Scotland, for a new condition which continues to impact the lives of many. Reflections The longer timeframe of the study has allowed for more nuanced input from PPI through Steering Group and PPI meetings, which have represented an effective mechanism for involvement in this project. Multiple, shorter opportunities for input have also been particularly important when collaborating with people living with chronic conditions like Long Covid, which are unpredictable and likely to cause fatigue. From a coordination perspective, the nature and length of the project has also necessitated more regular input. The PPI Team have had to adapt considerably to changes in project design and management due to data access delays and implications of preliminary results, particularly for the cluster analysis. This is an important learning point which has been pronounced in this project due to the complexity and novelty of the analysis, involving a new chronic condition, poor clinical coding, and free text analysis. From a patient perspective, involvement represents a critical aspect of a project exploring a new health condition. By definition, much of the expertise surrounding Long Covid is currently held in lived experience. The PPI activities carried out have allowed for more in-depth understandings of how Long Covid impacts people's lives. They also provided a platform for longer-term knowledge exchange between patient, clinical and analytical experts. The low incidence of multiple codes for a given patient, which influenced the final operational definition of Long Covid in this study, reflects anecdotal patient experience in which consultation conversations may omit, overlook or minimise descriptions of multiple symptoms.
From an analyst's perspective, recruitment of PPI patient partners who were active members of a wider patient group was particularly valuable. Due to their involvement in the Long Covid Scotland Action Group, the patient partners were able to share not only their own experiences, but also those of the broader Long Covid community in Scotland. The PPI Team's work on the patient survey was particularly beneficial, as it provided a formal mechanism for PPI Team members to synthesise the experiences of the wider Long Covid community. Their input at steering group meetings and in the "Hearing Our Voices - Long Covid: The Impact on Our Lives" survey report greatly enhanced the project team's ability to make informed analytical decisions and interpret results from a patient-centred perspective. Individuals who tested positive were typically younger than the general population, while those who tested negative or who had never tested were typically older. Compared to those who tested negative on RT-PCR, those who had not tested were more likely to be male. They were also less likely to have two or more comorbidities, be immunosuppressed, or to have been advised to shield. These differences may indicate less frequent engagement with the healthcare system among those who had not tested, relative to those who had tested negative. The table presents the number and percentage of individuals in each category indicated by the column headings. Percentages in the 'Total' row reflect the share of individuals in each category as a proportion of the total population. Cell counts <5 have been suppressed. Cases of long COVID identified by sick notes will be influenced by variation in the share of working-age individuals resident in each health board. Due to low usage of long COVID clinical codes, variation across health boards could be influenced by the coding practices of a small number of clinicians. Figure S1: Schematic of the methods used to create an operational definition of long COVID. Figure S2: Covariate plots used to assess balance in the matched samples. Standardised mean differences between exposed and control groups in the full cohort and the matched sample are shown for each control group (individuals with a negative RT-PCR test and individuals who have not yet tested). Positive standardised mean differences indicate larger means in the exposed group relative to the control group. Points between the dashed lines indicate that values for the exposed and control groups are within 0.1 standardised mean differences. Points between the dotted lines indicate that values for the exposed and control groups are within 0.2 standardised mean differences. Figure S3: Rate ratios of clinical interactions for individuals with a positive RT-PCR test, relative to individuals with a negative RT-PCR test, 4-12 weeks following testing. Panel A presents rate ratios for the 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records) and the seven indicators of health service use. Panel B presents rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Each point represents an estimate from a separate Poisson regression model. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown.
Figure S4: Rate ratios of clinical interactions for individuals with a positive RT-PCR test, relative to individuals with a negative RT-PCR test, >12-26 weeks following testing. Panel A presents rate ratios for the 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records) and the seven indicators of health service use. Panel B presents rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Each point represents an estimate from a separate Poisson regression model. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown. Figure S5: Rate ratios of clinical interactions for individuals with a positive RT-PCR test, relative to individuals who have not yet taken an RT-PCR test, 4-12 weeks following positive cases' test dates. Panel A presents rate ratios for the 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records) and the seven indicators of health service use. Panel B presents rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Each point represents an estimate from a separate Poisson regression model. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown. Figure S6: Rate ratios of clinical interactions for individuals with a positive RT-PCR test, relative to individuals who have not yet taken an RT-PCR test, >12-26 weeks following positive cases' test dates. Panel A presents rate ratios for the 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records) and the seven indicators of health service use. Panel B presents rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Each point represents an estimate from a separate Poisson regression model. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown. Figure S7: Rate ratios of symptoms, diagnoses and health service use for individuals with a positive RT-PCR test, relative to those with a negative RT-PCR, 4-12 weeks following positive cases' test dates, stratified by variant period (see Figure S9 for plot details).
Figure S8: Rate ratios of dispensed prescriptions for individuals with a positive RT-PCR test, relative to those with a negative RT-PCR, 4-12 weeks following positive cases' test dates, stratified by variant period. The plot shows rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Analysis is stratified according to the variant period in which tests were taken. Variant periods reflect the periods when the following strains of SARS-CoV-2 represented more than 60% of sequenced cases: wild (1 March 2020 to 10 January 2021), Alpha (11 January 2021 to 9 May 2021), Delta (24 May 2021 to 28 November 2021), Omicron (20 December 2021 to 30 April 2022). Each point represents an estimate from a separate Poisson regression model. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown. Figure S9: Rate ratios of symptoms, diagnoses and health service use for individuals with a positive RT-PCR test, relative to those with a negative RT-PCR, >12-26 weeks following positive cases' test dates, stratified by variant period. The plot shows rate ratios for the 45 grouped clinical codes (reflecting symptoms and diagnoses recorded in primary care records) and the seven indicators of health service use. Analysis is stratified according to the variant period in which tests were taken. Variant periods reflect the periods when the following strains of SARS-CoV-2 represented more than 60% of sequenced cases: wild (1 March 2020 to 10 January 2021), Alpha (11 January 2021 to 9 May 2021), Delta (24 May 2021 to 28 November 2021), Omicron (20 December 2021 to 30 April 2022). Each point represents an estimate from a separate Poisson regression model. Missing point estimates occur where there were too few observations for the model to converge. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown.
Figure S10: Rate ratios of dispensed prescriptions for individuals with a positive RT-PCR test, relative to those with a negative RT-PCR, >12-26 weeks following positive cases' test dates, stratified by variant period. The plot shows rate ratios for the 27 newly dispensed categories of prescriptions (whereby 'newly dispensed' refers to medications that had not been dispensed in the 12 months prior to testing). Analysis is stratified according to the variant period in which tests were taken. Variant periods reflect the periods when the following strains of SARS-CoV-2 represented more than 60% of sequenced cases: wild (1 March 2020 to 10 January 2021), Alpha (11 January 2021 to 9 May 2021), Delta (24 May 2021 to 28 November 2021), Omicron (20 December 2021 to 30 April 2022). Each point represents an estimate from a separate Poisson regression model. Missing point estimates occur where there were too few observations for the model to converge. Regressions were run on the matched sample. Controls for all variables used in the propensity score estimation (Equation S1) were included in each regression model. 95% confidence intervals are shown. Figure S11: Clusters of long COVID indicators at 4-12 weeks identified using hierarchical clustering. Hierarchical clustering was performed on indicators that occur at a significantly higher rate among individuals with a positive SARS-CoV-2 test, relative to individuals with a negative SARS-CoV-2 test, during the 4-12 weeks after testing. Gower's distance method and Ward's agglomeration method were used. Two clusters were selected based on the optimal silhouette width for 2-12 clusters.
Figure S12: Clusters of long COVID indicators at >12-26 weeks identified using hierarchical clustering. Hierarchical clustering was performed on indicators that occur at a significantly higher rate among individuals with a positive SARS-CoV-2 test, relative to individuals with a negative SARS-CoV-2 test, during the >12-26 weeks after testing. Gower's distance method and Ward's agglomeration method were used. Two clusters were selected based on the optimal silhouette width for 2-12 clusters. The number and percentage of individuals in each category indicated by the column headings are shown. Percentages in the 'Total' row reflect the share of individuals in each category as a proportion of the total population. Individuals in the 'Tested positive' column had a positive LFT or RT-PCR test recorded in their EHRs. Individuals in the 'Tested negative' column had a negative LFT or RT-PCR test recorded in their EHRs and no positive test results. Table S2: British National Formulary (BNF) sub-paragraph codes. As a sensitivity test, we repeated the propensity score matching using a second control group consisting of individuals who had neither a positive nor a negative RT-PCR test by the beginning of the month under investigation. Control group members' lack of a test date necessitated that we omit exact matching on week of test in this version of the analysis. It also precluded the inclusion of number of RT-PCR tests taken as a predictor in our propensity score model (because all controls had zero tests and all members of the exposed group had one or more tests, by definition). Controls were assigned a pseudo test date (selected at random from the range of dates within the month under investigation) to allow for calculation of the subset of predictors that were dependent on individuals' test dates (including number of COVID-19 vaccine doses received up to 14 days before testing and hospitalisations and ICU admissions during the 12 months prior to testing). 54.3% of positive cases were matched to a control with a negative RT-PCR test, and 83.6% of positive cases were matched to a control who had not yet tested. Covariate balance plots (Figure S2) were used to confirm the adequacy of the matching. Our propensity score model (Equation S1) included the following predictors: splines in age (with three degrees of freedom); sex; SIMD quintile; six-fold urban-rural classification; local authority of residence; household size (of which 4.1% was mean imputed); number of COVID-19 vaccine doses received up to 14 days before the test date used for matching; number of RT-PCR tests taken by the RT-PCR test date used for matching; presence or absence of each of the clinical risk groups listed in Table 2 (reflecting the subset of predictors used in the Q-COVID algorithm1 for which Scottish data is available); splines in BMI (with three degrees of freedom), of which 58.3% was imputed using multiple imputation by chained equations; and binary indicators of individuals' status as immunosuppressed, recommended to shield, or having been hospitalised or admitted to an ICU in the 12 months before testing.
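For illustration only, a predictor set of this shape can be expressed through a formula interface. The sketch below uses Python's statsmodels/patsy (the study itself used R), with hypothetical variable names; bs() supplies the three-degree-of-freedom B-splines, and the clinical risk-group indicators are omitted for brevity.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def estimate_propensity(cohort):
    """Fit a logistic propensity model on a cohort DataFrame.

    All column names below are hypothetical stand-ins for the
    predictors listed in the supplementary text.
    """
    formula = (
        "positive ~ bs(age, df=3) + C(sex) + C(simd_quintile)"
        " + C(urban_rural) + C(local_authority) + household_size"
        " + vaccine_doses + n_tests + bs(bmi, df=3)"
        " + immunosuppressed + shielding + prior_hosp_or_icu"
    )
    model = smf.glm(formula, data=cohort,
                    family=sm.families.Binomial()).fit()
    return model.predict(cohort)  # propensity to test positive
```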
The equation presents one example of the time-varying propensity score matching model (for the month of March 2021). The model was run for each of the 26 months between March 1, 2020 and April 30, 2022. The "MatchIt" R package was used. Propensity to test positive for SARS-CoV-2 was estimated using a generalized linear model (logistic regression). Individuals who received a positive RT-PCR test for SARS-CoV-2 in the month under consideration were matched to those who received a negative RT-PCR test for SARS-CoV-2 in the same month. Individuals were matched on propensity scores using nearest neighbour matching with a calliper of 0.8 standard deviations. In addition, exact matching was performed on week and year of testing and on age. Each individual was used as a control no more than once. Table S7: Results of PPI (area of research cycle: summary of deliverables). Grant development: Appoint Lay Co-Investigator and comment on grant application. Undertaking project: Collaborate with Long Covid Scotland; co-produce PPI Strategy and Terms of Engagement; provide induction and statistical methods training for the project. Design: Review analysis protocol; support design and release of a survey gauging symptoms and impact of Long Covid on patients in Scotland; continue to question and comment on design development at Steering Group and PPI meetings. Table S8: Patient characteristics stratified by testing status.
2024-04-13T15:18:50.878Z
2024-04-11T00:00:00.000
{ "year": 2024, "sha1": "945e13183828af9fbda1251264b5e18e98748566", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6fb83d4104c14733c058b0e369d40021dbdc2b4b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213128370
pes2o/s2orc
v3-fos-license
Integration of GIS and a complex of three-dimensional laser scanning The need for high-quality data presentation becomes more obvious every year. Dynamic display of information, relevance of the information, and maximum efficiency of management decisions at minimal labor cost - modern GIS for managing large enterprises should be built on these principles. This article discusses the most promising way to create a GIS for enterprise management based on laser scanning data. The method of creating a GIS considered in the article, the advantages of a 3D model as the GIS basis, the convertibility of 3D scan data, the analysis of economic investments and the example of a pilot version of a 3D GIS make it possible to evaluate all the advantages of integrating GIS and laser scanning technologies. We have studied materials from foreign sources that address the possibility of the joint use of such technologies. The research purpose in this article is to substantiate the possibility of creating a 3D GIS based on laser scanning data. Conclusions are also drawn about the main advantages and disadvantages of working with such systems. Introduction GIS development is currently reaching a new level thanks to the concept of four-dimensional information representation - information can now be not only displayed but also predicted [1]. This is achieved by introducing artificial intelligence technologies and expert systems into GIS [2]. The second direction of development is the emergence of new ways to obtain geodata for creating GIS - moving away from paper carriers as a data source toward technologies that provide up-to-date, easily convertible data from scanning systems [3]. Unfortunately, the availability of GIS applications that include an artificial intelligence component remains low, because they require large investments (implementing such technologies is not advisable for every enterprise that uses a GIS platform for management purposes). Creating a GIS based on laser scanning data (especially a GIS for managing a large industrial enterprise) is much more affordable and profitable in terms of invested money. Methods of obtaining spatial information by laser scanning are already firmly established in the field of engineering and geodetic surveys, and not only as a source of initial data for GIS [4]. The data obtained by this method have a number of advantages that are most important for implementing all the functions of the created 3D GIS: high-quality visualization of the scanned area due to its volumetric image, the ability to build a 3D model of complex technological objects or engineering structures, and high data convertibility (software for working with laser scanning data supports many output formats). Article 132 of the Civil Code, which considers an enterprise as a single immovable complex - "an enterprise as a property complex includes all property types intended for its activities, including land, buildings, structures, equipment, inventory, raw materials, products, rights of claim ... and other exclusive rights, unless otherwise provided by law or contract" [5] - puts forward a number of tasks that allow the enterprise to be treated as a generalized information structure for such an enterprise GIS [6].
These tasks include the following:
- The need to account for a large number of objects with different characteristics;
- The need for relevant information at the time of creating databases and accounting systems;
- The need for constant information updating;
- The need to make quick management decisions.
Research into the integration of GIS and laser scanning technologies is also found among foreign authors of scientific articles; however, they do not address the possibility of using these technologies for managing large enterprises [7,8,9,10]. Material and Methods The stages of creating a 3D GIS based on laser scanning data do not differ from the stages of creating a traditional 2D GIS. The exception is the method of obtaining and processing source data - carrying out field work with a laser scanner and creating a model from the point cloud in a computer-aided design (CAD) system [11]. To understand the need to introduce such three-dimensional systems, attention should be paid to the main drawback of building a GIS by processing raster images and scanned cartographic materials - the information may not be up to date. For a GIS of a large industrial facility, keeping the displayed information up to date is the central purpose of using a GIS at all. Schemes of technological objects are often stored at enterprises on paper. As previously discussed, paper cartographic materials do not contribute to achieving the main objectives of a 3D GIS - effective creation, implementation and use in production. Without a realistic model of a technological object or an engineering structure, it is not possible to create the 3D GIS itself. Creating such a system from paper cartographic materials (plans and schemes) is difficult even with the most advanced CAD systems [12], since the resulting model still cannot exactly reproduce all the design solutions or the existing defects and inconsistencies with the drawings of the existing mechanisms and structures (which is important for working with a dynamic GIS). Laser scanning makes it possible to obtain highly accurate data in the shortest time compared with other methods of obtaining data (drawing cartographic materials or surveying objects with electronic total stations [13]). The difficulty in integrating GIS and laser scanning is the choice of a software platform that supports the creation of 3D views: the resulting point cloud must be transformed into a 3D model, and the software must then support 3D data formats without distorting the data [14]. After the model is loaded into 3D-capable software, it is necessary to choose the information structure of the data - layer-by-layer binding or object-oriented binding. For enterprises that are complex and unique in their structures and technological purposes, the second option is advisable. This is because the resulting data array cannot simply be divided into layers - each of the 3D objects obtained has features inherent only in its own structure and content. In this case, successful work with the 3D GIS requires displaying each object separately in the environment. For simpler enterprises and engineering complexes, layer-by-layer binding will suffice [15].
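As a small illustration of the point-cloud-to-model conversion discussed above, the following Python sketch uses the open-source Open3D library (our choice for illustration; the article does not prescribe specific software), with a hypothetical file name. The downsampling and meshing steps are typical preprocessing before loading scan data into a 3D-capable GIS.

```python
import open3d as o3d

# Load a terrestrial laser scan (path is hypothetical)
pcd = o3d.io.read_point_cloud("enterprise_scan.ply")

# Thin the cloud so the GIS platform can handle it interactively
pcd = pcd.voxel_down_sample(voxel_size=0.05)  # 5 cm voxel grid

# Surface reconstruction needs point normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Poisson reconstruction yields a triangle mesh that can be exported
# to a format a 3D GIS can import
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("enterprise_model.obj", mesh)
```

Whether the mesh is then bound layer-by-layer or object-by-object is a separate modelling decision, as discussed above.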
In comparison with obtaining graphic information by drawing a topographic base (topographic plans, aerial photographs, etc.), laser scanning of objects requires considerably higher costs: the larger and more structurally complex the enterprise, the higher the cost of both field and office work. The more functions a GIS performs, the more complicated its internal structure. This means that to work with such a system the user needs knowledge of the software and of the GIS architecture beyond the level of an ordinary user. A complex DBMS, a large number of system components and advanced tools for performing tasks can only be implemented with expensive software of limited distribution and use. The more complex the structure and specifics of the enterprise for which the GIS is created, the more complex the platform on which such a GIS will operate. Results For the successful implementation of such fairly expensive projects, it is necessary to work out a mechanism for creating the 3D GIS from the initial stage through to its implementation in production. In other words, when developing any GIS, a pilot project is needed. As a prototype, it is possible to develop a pilot GIS for one of the campus buildings, since its 3D model has already been created from laser scanning data. Summary Summarizing, we can say that the integration of two relatively recent technologies - GIS and laser scanning - can yield successful results given proper use of software and equipment and knowledge of the intricacies of creating a GIS. It is the sequential use of GIS and laser scanning that makes it possible to realize the previously stated concept of building a GIS for enterprise management: the ability to display information dynamically (a 3D model), the relevance of the information (a highly accurate 3D model) [16], and the maximum efficiency of management decisions at minimal labor cost (a highly accurate 3D model containing all the attribute information needed for fast and efficient management).
2019-12-19T09:11:08.976Z
2019-12-18T00:00:00.000
{ "year": 2019, "sha1": "380ac4a3841ec81ae07a21f7e319cc44a586f6a3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/698/6/066016", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "3c576967479dd0da276fc3ae79e05a700493068c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
137430159
pes2o/s2orc
v3-fos-license
Prediction of Blast Furnace Performance with Top Gas Recycling Top gas recycling has been suggested as a method for reducing blast furnace fuel rates and thus reducing carbon emissions from the ironmaking process. Three methods of top gas recycling are numerically investigated using a mathematical model to predict the furnace performance at fixed blast volume and constant coke, ore and pulverised coal compositions with varying recycle volumes. For each recycling method, a first calculation sequence is performed varying recycle volume at fixed ore:fuel ratio, and also a second sequence at fixed average liquid metal outflow temperature. Simple replacement of normal blast gases with recycled top gas is predicted to cause the production rate to decrease and the fuel rate to increase. Likewise, oxygen enriched blast replacement has similar effects, although the severity is less as the blast oxygen rate is maintained in this case. Both of these methods reduce furnace efficiency. Hot reducing gas (HRG) replacement, where CO2 is stripped from the recycled gas, leads to an increase in production of up to 25% with a simultaneous decrease in fuel rate of 20% at fixed metal temperature. These calculations show that top gas recycling could be used to increase furnace efficiency while decreasing carbon emissions, thus making a positive contribution to efforts to prevent global warming. Introduction In common with other major industries, the iron and steel industry is under social pressure to improve efficiency and to reduce emissions of CO2 and other gases believed to contribute to global warming. Regarding the blast furnace, various methods can be used to achieve these aims. In this paper, the novel method termed 'top gas recycling' is considered. In this method, some of the blast gases are replaced with recycled furnace top gas, optionally in conjunction with increased oxygen enrichment of the blast. Top gas recycling was experimentally tested on a small furnace and was found to significantly decrease fuel rate while increasing productivity.1) This paper uses a previously presented mathematical model of the blast furnace2,3) to estimate blast furnace performance under various top gas recycling strategies. Three modes of implementing top gas recycling are investigated numerically, and their limitations and benefits examined. This paper focuses on the scientific feasibility of using recycled top gas as a tuyere injectant. Once a feasible recycling method is ascertained, further work could be conducted to analyse its engineering and economic aspects. Methods of Top Gas Recycling To recycle top gas, part of the normal blast gas is replaced by top gas which has been filtered then heated to the blast temperature. In all calculations in this paper, the volume of gas injected through the tuyeres was kept constant, while the composition varied with the fraction of replacement. Three methods of replacement have been investigated. The three methods are schematically depicted in Fig. 1 in comparison with normal operation without recycling. The methods are defined as follows. (1) Simple blast replacement: The normal blast gas volume is decreased, and a sufficient volume of top gas is recycled in order to maintain tuyere gas volume. (2) Oxygen enriched blast replacement: The normal blast gas volume is decreased and a sufficient volume of recycled top gas and additional enriching oxygen is added in order to maintain both tuyere gas total volume and tuyere gas oxygen volume.
(3) Hot reducing gas (HRG) replacement: As per oxygen enriched blast replacement, except CO2 is removed from recycled top gas. This method was experimentally investigated by Tseitlin and co-workers.1) 3. Estimated Limits to Recycling Before undertaking full model calculations, it is possible to estimate variations in furnace performance using simple mass and heat balances. In these calculations, the following procedure was followed. (1) The blast gas volume, composition and temperature are used to calculate the tuyere gas composition. (2) The coke carbon consumed by direct reduction is calculated from the difference between the fuel rate and the sum of the raceway coke carbon consumption rate and hot metal carbon dissolution rate. The carbon dissolution rate is calculated from the hot metal rate and composition. The volume, composition and temperature of gas after it leaves the zone of direct reduction can then be calculated, assuming the solids enter this zone at 1 200°C and the liquids exit at 1 500°C. The ore composition as it enters this zone can also be calculated from the oxygen demand of direct reduction, assuming the slag contains no wustite (FexO). Note that indirect reduction in series with solution loss or water gas carbon gasification is equivalent to direct reduction. (3) The amount of oxygen transferred from iron oxides to the gas by indirect reduction can be calculated from the direct reduction oxygen transfer rate and the burden composition and feed rate. The volume and composition of the furnace top gas can be calculated from the rate of indirect reduction, assuming the fractions of CO and H2 consumed by indirect reduction are equal. The top gas temperature (TGT) can be calculated from an enthalpy balance on the indirect reduction region. (4) For top gas recycling calculations, use the top gas composition from step 3 to recalculate the tuyere gas composition in step 1, and repeat all steps until the top gas composition converges. In this analysis the following definitions are used. The adiabatic flame temperature (AFT) is taken to be the temperature of gas exiting the raceway. The mean temperature of gas exiting the region where direct reduction occurs is called the direct reduction temperature (DRT). DRT must be sufficiently high to allow direct reduction and/or solution loss to proceed, otherwise the operating condition is thermodynamically impossible. The degree of direct reduction (DDR) is the atomic fraction of oxygen removed from iron oxides by direct reduction. Figure 2 shows the variation in DDR, top gas ηCO, corrected ηCO, AFT, DRT and TGT as blast replacement fraction is varied for the three replacement methods at fixed ore:fuel ratio. These calculations were based on previously reported operational data.2) In summary, the furnace has inner volume 4 907 m3, and the blast, ore, production and fuel rates were 8 747 Nm3/min, 16 … Two sequences of calculations were performed to analyse the three recycling methods using the full blast furnace model. In the first of the two sequences, the mass ratio of ore:fuel was kept constant while increasing the volumetric fraction of the blast replaced by recycled top gas. In the second sequence of calculations, the aim was to maintain the calculated average temperature of metal entering the hearth (Tmelt) as the blast replacement fraction increased.
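Step (4) above is a fixed-point iteration on the top gas composition. The Python sketch below shows only the iteration scaffolding; furnace_balance is a hypothetical stand-in for the mass and heat balances of steps (1)-(3), which are not implemented here, and the gas compositions are represented as simple species-to-volume dictionaries.

```python
def converge_top_gas(blast_comp, replace_frac, furnace_balance,
                     tol=1e-6, max_iter=100):
    """Iterate step (4): feed the computed top gas composition back
    into the tuyere gas until it stops changing.

    furnace_balance(tuyere_comp) -> top_gas_comp is a hypothetical
    stand-in for the balance calculations of steps (1)-(3).
    blast_comp: e.g. {"O2": ..., "N2": ..., "H2O": ...} (volumes).
    """
    top_gas = dict(blast_comp)  # initial guess for the recycled stream
    for _ in range(max_iter):
        species = set(blast_comp) | set(top_gas)
        # step (1): mix recycled top gas into the tuyere gas
        tuyere = {sp: (1 - replace_frac) * blast_comp.get(sp, 0.0)
                      + replace_frac * top_gas.get(sp, 0.0)
                  for sp in species}
        new_top_gas = furnace_balance(tuyere)  # steps (2)-(3)
        delta = max(abs(new_top_gas.get(sp, 0.0) - top_gas.get(sp, 0.0))
                    for sp in set(new_top_gas) | set(top_gas))
        top_gas = new_top_gas
        if delta < tol:  # composition has converged
            break
    return top_gas
```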
Maintaining the metal temperature was achieved by modifying the ore:fuel ratio until Tmelt = T0, where Tmelt is the calculated metal temperature at the adjusted ore:fuel ratio and T0 is the calculated metal temperature without recycling (at the base ore:fuel ratio). When the ore:coke ratio was adjusted, the burden distribution was also adjusted by scaling the ore:coke volume ratio linearly across the furnace radius. For simple recycling, the metal and raceway temperatures rapidly decrease as blast replacement increases due to the strongly endothermic solution loss reaction of recycled CO2 with coke in the raceway. Coke consumption in the furnace decreases due to two causes. First, the molar rate of (2O2 + H2O + CO2) entering the raceway slightly decreases as recycle fraction increases, thus raceway carbon demand decreases. Second, less heat is available for direct reduction, solution loss and the water gas reaction, thus again carbon demand decreases. As ore:coke is constant, solid inflow, and thus production, decrease. As the solids descent rate is less, the stack heat flow ratio (the ratio of gas to solid thermal capacity) increases and thus the top gas temperature increases. The calculated raceway temperature is below the metal temperature at 40% blast replacement. This is not plausible, so results above 30% blast replacement are not shown. Further, this demonstrates that simple recycling is feasible only at low replacement fractions. The increase in ηCO at fixed fuel and blast rates is explained by considering the utilisation of hydrogen. Without recycling, the blast furnace converts at least some of the blast H2O into H2 in the top gas, while H2O from burden moisture is not appreciably reduced. However, with recycling the fraction of blast H2O exiting as H2 will increase due to being re-exposed to the net reducing conditions in the furnace. Further, burden moisture will now also be subjected to these same reducing conditions, thus the fraction exiting as H2 will increase. These two effects both increase the rate of atomic oxygen which must be carried by carbon in the top gas. Fuel rate is constant, thus the top gas CO2:CO molar ratio increases so ηCO increases. Figure 4 shows key calculated parameters vs. blast replacement for the fixed metal temperature calculation sequence. For the simple recycling case, the model predicts that the production rate must decrease and the fuel rate increase dramatically with blast replacement to maintain hot metal temperature. As production decreases, burden descent decreases, so the stack heat flow ratio increases. This causes the top gas temperature to increase. The corrected ηCO decreases with blast replacement fraction as fuel rate has increased. The top gas CO concentration characterises the top gas chemical potential, as CO has the potential to accept an additional oxygen atom. The difference between the top gas temperature and 100°C represents the top gas thermal potential, which is the ability of the ascending gas to supply heat to the burden while remaining above the steam condensation point at ambient pressure. For simple recycling, the decrease in ηCO and the increase in top gas temperature are both counter-productive, as they indicate both the chemical and thermal potential of the top gas have increased, thus furnace efficiency has decreased. For the oxygen enriched recycling method also, production decreases and fuel rate increases, leading to similar trends in all parameters to the simple recycling case.
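The ore:fuel adjustment described at the start of this sequence is a one-dimensional root-finding problem. A generic bisection sketch follows, with simulate_metal_temp as a hypothetical stand-in for a full furnace-model run; the bracket values are illustrative, and the sketch assumes the metal temperature falls monotonically as the ore:fuel ratio rises (more ore to smelt per unit of fuel).

```python
def find_ore_fuel_ratio(simulate_metal_temp, target_temp,
                        lo=3.0, hi=6.0, tol=0.5):
    """Bisection on the ore:fuel mass ratio until the simulated average
    metal temperature matches the no-recycling value (target_temp, in K).

    simulate_metal_temp(ratio) -> temperature is a hypothetical
    stand-in for a complete blast furnace model evaluation.
    """
    while hi - lo > 1e-4:
        mid = 0.5 * (lo + hi)
        temp = simulate_metal_temp(mid)
        if abs(temp - target_temp) < tol:
            return mid
        if temp > target_temp:
            lo = mid   # metal too hot: increase ore per unit fuel
        else:
            hi = mid   # metal too cold: decrease ore per unit fuel
    return 0.5 * (lo + hi)
```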
For oxygen enriched recycling specifically, the top gas chemical potential and thermal energy (represented by ηCO and top gas temperature) both increase with increasing blast replacement, showing the furnace is operating less efficiently. For the HRG recycling method, the excess heat available for smelting mentioned previously results in a maximum increase in production rate of about 25% over the base production rate at high blast replacement, with a decrease in fuel rate of about 20%. While the trends of these results agree with the experimental data of Tseitlin,1) the spectacular increase in production and decrease in fuel rate reported by them are not observed. It is possible that superior gains could be obtained by optimising burden distribution or the ratio of coke to injected PC. Note that for this particular set of operating data the top gas temperature remains almost constant at the minimum feasible operating temperature of about 100°C (due to steam condensation below this temperature). At the same time, the corrected ηCO approaches 100% due to both the decrease in fuel rate and the hydrogen recycling effect noted in the fixed ore:fuel ratio case. These two observations indicate both the thermal and chemical potential of the top gas are approaching full utilisation. Figure 5 shows the calculated change in top gas composition vs. blast replacement for the three recycling methods and for both sequences. In all cases except HRG recycling at fixed metal temperature, CO molar fraction steadily increases and N2 molar fraction decreases with increasing replacement fraction. At 100% blast replacement, the N2 molar fraction goes to zero in Figs. 5(d), 5(e) and 5(f) as no N2 is entering the furnace. For HRG recycling at high replacement rates with fixed metal temperature, the molar fraction of H2 increases markedly. The source of this hydrogen is recycled burden moisture. The increase in H2:CO ratio for HRG recycling is due to CO2 being stripped from the recycled gas, while H2O from the burden moisture is recycled and partially reduced to H2 by the reducing conditions within the furnace, as discussed earlier. From these results a key observation can be made. Top gas CO2 recycling will decrease furnace efficiency due to its cooling effect on the bosh via the solution loss reaction. In order to increase furnace efficiency, CO2 must be removed from recycled gas. Furnace Internal State In the following analysis, attention is focused on the final furnace state once each of the recycling methods has been taken to its operational limit at fixed metal temperature. For simple recycling, the blast replacement limit is taken as 40%. For both oxygen enriched and HRG recycling, 100% blast replacement is possible. Figure 6 shows the calculated solids temperatures for the base case and the three limit calculations. In the following discussion, the CZ (cohesive zone) is defined as the region where the solid temperature is between 1 473 and 1 673 K. In the base case, the CZ is fairly thin and of nearly constant height above the deadman surface. In the simple recycling case, the CZ is high on the axis and very thick across most of the furnace radius. This very thick isothermal zone indicates the gas exiting the raceway has barely sufficient sensible heat to melt the iron and slag, even at the greatly reduced production rate. A similar situation occurs with oxygen enriched recycling.
The cause of this heating problem is the bosh gas temperature decrease caused by recycled CO2. This is confirmed by the HRG case, in which the CO2 is stripped from the recycled gas. Here the CZ is similar in shape and position to the base case CZ, and the bosh and upper stack temperatures are also very similar to the base case. The temperature distribution suggests the furnace operates quite stably at 100% HRG recycling. Figure 7 shows the mass fraction of CO in the gas phase for the four cases. Two features are noteworthy. The second feature to note is that all recycling cases show little or no change in CO concentration in or immediately above the CZ. This feature is addressed in the following discussion section. Figure 8 shows the degree of iron reduction for all cases. Zero reduction means iron is fully oxidised as hematite, and unity degree of reduction means iron is fully reduced, although not necessarily molten. Note that the apparent delay in reduction along the axis is caused by the ore diameter being large (40 mm) along the axis, but decreasing to 17.5 mm at 1 m from the axis. All recycling methods increase the degree of reduction before melting. The base case predicted iron is 93.1% reduced before melting, whereas using simple, oxygen enriched and HRG recycling the model predicted 98.9, 99.8 and 100.0% reduction before melting respectively. In the simple recycling and oxygen enriched recycling cases, the increase in reduction is due to the decrease in direct reduction and solution loss, which would otherwise cool the hot metal below the aim temperature. In the HRG case the direct reduction and solution loss rates also decrease, however this is because the high CO and H2 partial pressures in the stack allow full indirect reduction to occur. Since the raceway is at almost the same temperature as in the base case and direct reduction and solution loss are greatly decreased, excess gas enthalpy is available for melting, thus production increases. Finally, in the HRG case reduction is complete well before melting. Discussion and Summary Regarding the CO concentration in and immediately above the CZ, in the base case CO concentration increased below and in the CZ due to solution loss and direct reduction. CO concentration then decreased in the first few metres above the CZ due to indirect reduction. When recycling is used, the rates of solution loss and direct reduction decrease to almost zero as indirect reduction has fully reduced the iron oxides before the solid is hot enough for significant solution loss to occur. This results in no significant change of CO concentration in and above the CZ in the recycling cases, particularly in the HRG case. Earlier it was noted from Fig. 3 that the top gas and metal temperatures gradually increased with HRG recycling at fixed ore:fuel ratio. The cause of these increases can now be understood to be the decrease in direct reduction and solution loss. These are both strongly endothermic reactions. As their rates reduce, both the solid and gas gain in sensible heat. Note that the decrease in coke consumption by direct reduction and solution loss is partially offset by increased consumption by carbon dissolution in hot metal and by silica reduction, thus the burden descent and production rates only decrease very slightly with recycling. The inactive zone above the CZ in the HRG recycling case raises an interesting possibility. In this zone, no reactions or phase transformations are occurring, thus the zone is essentially 'dead'.
It may be possible that with HRG recycling the burden filling level in the furnace could be reduced by the height of this dead zone. This would result in reductions in wall heat losses, and thus possibly lead to further fuel savings. 5. Conclusions Top gas recycling has been suggested as a method for reducing blast furnace fuel rates and thus reducing carbon emissions from the ironmaking process. Three methods of top gas recycling have been numerically investigated. Simple replacement of normal blast gases with recycled top gas caused the production rate to fall dramatically, accompanied by significant cooling of the raceway, with an increase in fuel rate in order to maintain hot metal temperature. Oxygen enriched blast replacement is also predicted to decrease productivity and increase fuel rate at fixed hot metal temperature. Both simple and oxygen enriched recycling lead to decreases in furnace efficiency due to the cooling effects of CO2 in the recycled gas. Using HRG recycling, where CO2 is stripped from recycled gas, it is predicted that at constant metal temperature an increase of about 25% in production with a simultaneous decrease in fuel rate of about 20% can be achieved with high recycle fractions. The furnace internal state indicates it may be possible to stably operate the furnace at 100% HRG recycling. At this recycling limit, the iron is completely reduced well before melting. Despite the possible engineering and economic problems involved in large-scale oxygen preparation and CO2 separation, these calculations show that top gas recycling could be used to increase blast furnace efficiency while decreasing carbon emissions per tonne of iron produced, thus making a positive contribution to efforts to reduce greenhouse gas emissions.
2019-04-28T13:09:42.239Z
1998-03-15T00:00:00.000
{ "year": 1998, "sha1": "05d13dc1d2d89cf72d8b6c90f303ffb5e2048b97", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/isijinternational1989/38/3/38_3_239/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2112177561a88c5e66445f97c37160448679b427", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
259579957
pes2o/s2orc
v3-fos-license
Automatic Detection of Group Recumbency in Pigs via AI-Supported Camera Systems Simple Summary For this study, several pens of weaned piglets were recorded with cameras on a commercial farm. The goal was to use velocity data to establish an automated method of identifying when all animals are lying down. This automated method had an accuracy of 94.1%. This method can benefit modern farm management and highlight otherwise overlooked conditions in the barn. Abstract The resting behavior of rearing pigs provides information about their perception of the current temperature. A pen that is too cold or too warm can impact the well-being of the animals as well as their physical development. Previous studies that have automatically recorded animal behavior often utilized body posture. However, this method is error-prone because hidden animals (so-called false positives) strongly influence the results. In the present study, a method was developed for the automated identification of time periods in which all pigs are lying down using video recordings (an AI-supported camera system). We used velocity data (measured by the camera) of pigs in the pen to identify these periods. To determine the threshold value for images with the highest probability of containing only recumbent pigs, a dataset with 9634 images and velocity values was used. The resulting velocity threshold (0.0006020622 m/s) yielded an accuracy of 94.1%. Analysis of the testing dataset revealed that recumbent pigs were correctly identified based on velocity values derived from video recordings. This represents an advance from the previous manual detection method toward automated detection. Introduction Recent scientific and technological developments have yielded substantial opportunities for the use of AI-based surveillance technology to monitor animals on large-scale farms [1]. Opportunities for improvement range from animal welfare to economic benefits for farmers. In times of resource scarcity and energy crisis, improved management is essential to operate the business in an economically sustainable manner. Various approaches have already been used to monitor pigs with this technology. The method has been used to assess the recumbency of animals [2][3][4], estimate their weight [5], and detect specific actions such as tail biting [6]. Notably, the observation of growing piglets is one of the greatest opportunities for the use of this technology. Since the 1990s, several researchers have studied different approaches to observe and evaluate the behavior of animals [7][8][9][10]. Researchers have discussed the critical factors of time, sample size, observer influence, and consistency of results and interpretation [9,11,12]. Over time, various methods have been developed for observing animal behavior. In focal sampling, one or two animals are observed while the rest of the group is ignored. This enables precise determination of the behavior of these individual animals. However, information regarding group dynamics is lost. In contrast, scan sampling of the entire group also leads to the loss of details, as a human observer is not able to watch all animals at the same time. Changing the observation period or duration provides more options to tailor the behavioral observation methods to the species. Another approach is interval sampling, in which collected video footage is scored only at fixed sampling points. This allows many hours of video material to be processed at once.
However, much information is lost between sampling intervals, and a consistent picture of the daily routine of the observed individuals cannot be obtained. This problem can be overcome by observing the animals continuously (continuous sampling), but this requires an enormous amount of time [7,13]. AI-based monitoring and evaluation can be used to address this problem, such as by observing resting behavior in groups of pigs. Under normal conditions, pigs spend 80-90% of the day recumbent, but they do not all rest at the same time [14,15]. Estimates of the time in which all pigs are recumbent simultaneously are accordingly lower. In wild pigs or those reared in alternative housing systems with straw, the rates of resting behavior are significantly decreased. For example, wild boars spend between eight and eleven hours per day foraging. These periods are interrupted by interactions with conspecifics, locomotion, and exploration of the environment or rest phases [16,17]. Simultaneous resting behavior in the whole group of pigs is an indicator of welfare under different thermal conditions [18,19] and of the health status of the group [20]. Studies have investigated recumbent behavior and its relationship with behavioral factors [21]. To detect this behavior, it is necessary to observe the entire group of pigs, but this makes it difficult to automate the process and assess situations in real time. To date, various systems can recognize recumbent pigs and the posture of the recumbent individual. The authors of [2] demonstrated the benefits of automated scoring of recumbent behavior in groups of pigs, as well as the potential for automated climate adjustments based on this behavior [22]. Several studies have also investigated the usefulness of video technology for monitoring and interpreting lying behavior. Thus, researchers have used cameras and image processing to study resting behavior and identify behavioral changes in pigs [22][23][24][25]. New computer-vision systems can also monitor the behavior of individual pigs, including standing, sitting, and recumbent positions [26]. However, approaches that identify recumbent pigs and piglets in a group often rely on segmentation technologies to identify individual pigs and evaluate their recumbent behavior. A systematic review concluded that technologies using this approach have high accuracy in segmenting pigs but are unable to detect overlapping pigs [1]. In addition, this technology overlooks pigs that move while all other observed animals are recumbent. This prevents studies from evaluating the lying behavior of the entire group [22]. To date, research has shown that the use of AI has the potential to automatically assess pig behavior and that such measures can benefit animal welfare. However, there is no reliable method that can identify instances in which all pigs in a group are recumbent. In particular, small pigs overlap with other pigs in the group when cold. Automated analysis of group recumbency using image analysis is therefore preferable for verifying that all pigs are recumbent. Piglets are particularly sensitive to external factors during the rearing phase. Stress after weaning, new feed, movements to different groups (and associated ranking fights), new housing environments, and climatic conditions can affect the growth and development of piglets. These immature pigs are not able to protect themselves effectively from attacks or to escape from dangerous situations.
For these reasons, special precautions are needed to ensure that piglets remain healthy during this vulnerable stage [27]. In our research, we used the cumulative velocity of all observed animals to detect resting behavior. The observation system defines velocity as the movement of observed pigs in meters per second. To enhance the applicability of our findings to real-world settings, the data were collected on a real farm and not generated in an experimental setting. Since the various AI-supported detection techniques provide opportunities for farm management, it is essential to create an automated system that is usable in commercial farms given the existing infrastructure. For this purpose, a simple two-dimensional camera system was paired with an object-tracking AI system, which can be used in ordinary pig farms. Several studies have monitored pigs and piglets using similar technology with different objectives. The aim of our study was to obtain images of piglets when all animals in a group are recumbent. For this purpose, we used camera systems that perform object tracking with the help of artificial intelligence. The automatic identification of such images can help farmers recognize previously overlooked conditions and thus improve farm management. Moreover, such data are helpful for solutions that automatically interpret group resting behavior. The focus of our work was on piglets because individual recognition of recumbent piglets is often difficult in groups due to overlap. Farmers can use such images to interpret the behavior of their animals and make adjustments to housing conditions. Animals Eighteen weanling (four-week-old) pigs of the hybrid cross Tempo × BHZP Victoria were stocked in each of the 12 experimental pens. The animals were fed ad libitum and had constant access to fresh water. During the experiment, the weaners were kept under normal farm conditions and did not receive any special treatment. No actions during the study caused pain, suffering, or harm to these animals. Therefore, no additional permit was required under the Ordinance for the Protection of Animals Used for Experimental or Other Scientific Purposes. Experimental Design The animals were housed for the duration of the rearing period (postnatal weeks 4 to 10) in 12 identically furnished pens on a commercial farm. The trial was conducted under normal farm conditions. It was integrated into the farmer's daily work in order to obtain results as close to practice as possible. However, two of the pens were excluded from the study due to suboptimal camera placement. Thus, 10 pens and 180 piglets were included in the study. The pens were 2.55 m × 3.2 m in size (8.16 m², 0.54 m²/animal) and included a narrow-slatted floor (yellow) in some areas and a wide-slatted floor (blue) in others (see Figure 1). The piglets had constant access to fresh water via a nipple drinker and a trough drinker (red). Feed was provided from an automatic mash feeder. Various activity materials, such as chains or wooden sticks, were also attached to the walls of the pens.
No measurements from other pens and no human interactions were included in the data analysis. The behavior and activity of the pigs were continuously recorded (24 h/day) with cameras and stored digitally. The recumbent behavior of animals was determined on a computer connected to the cameras with the aid of the PigBrother system (VetVise GmbH, Hannover, Germany). Measurements The animals were continuously observed with a camera system for 24 h on fourteen consecutive days. The recordings were stored and analyzed using the VetVise system. To evaluate resting behavior, single frames were extracted from the video recordings every 20 min, resulting in 9634 images during the experimental period. Pictures without animals were excluded. In addition, for privacy reasons, images showing people in whole or in part were not included in the analysis. The images were separately coded in a binary manner by two observers (A.K. and S.G.). The coding system is presented in Table 1. If all animals were recumbent, the image was coded as "0" (see Figure 2a). If at least one pig was in motion, i.e., not recumbent, or sitting, the image was coded as "1" (see Figure 2b,c). If pigs lay on top of each other and one pig could have been in a partially standing position, this case was coded as "0". The results of the assessment were entered into an Excel spreadsheet for further analysis.
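The sampling step described above (one frame every 20 min from continuous video) can be sketched with plain OpenCV. The file paths, frame-rate fallback, and output naming below are assumptions, since the paper does not describe how the PigBrother recordings are stored:

# Sketch: pull one frame every 20 minutes from a recording for manual coding.
# Paths and naming are hypothetical; the commercial system may store video differently.
import cv2

def extract_frames(video_path: str, out_prefix: str, interval_min: int = 20) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back to 25 fps if metadata is absent
    step = int(fps * 60 * interval_min)       # frames between two sampled images
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                            # end of file
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:05d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

Each saved image would then be coded 0 or 1 by the observers according to the scheme above.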
At the beginning of the evaluation, each observer evaluated 100 images according to the scheme described above. Images that were not uniformly labeled were discussed, and their code was decided by a majority vote. Based on these results, an observer adjustment was made to ensure that the results were comparable. After this coding pilot test, both coders received all image files and coded them independently. Of the 9634 images, 238 images were identified as having coding discrepancies after the independent coding process. These images were reviewed together and discussed, and a consensus was reached. The intercoder agreement was 97.5%. For the next step, an indicator was developed to provide automatic detection of all recumbent animals. The movement of the animals was used for this purpose. The movement was defined as velocity (v) in meters (Δp) per second (Δt) and was recorded by the camera. These data represent the average sum of distance changes from one frame to the next for all animals detected as standing, divided by the time interval in seconds: v = Δp/Δt. The cumulative velocity value of the group was determined by the PigBrother system from VetVise GmbH. Using artificial intelligence, PigBrother generates animal velocity values from video data and combines them in each trial. PigBrother uses an object-tracking method [28] to calculate the cumulative velocity. The objects (pigs) are identified by an artificial neural network in the image recognition process [29]. The objects recognized by the artificial neural network are then tracked, and numerical values (meters per second) are compiled. This method of object classification has been used in related approaches [30][31][32]. The movement data were collected over the entire period to identify any movements of the group of pigs. To make precise measurements regarding the amount of movement, all measurements within a 5-min interval were summarized. To select images with a high probability of containing only recumbent pigs, we analyzed the 5-min average velocity of the groups of pigs in the different pens. The "velocity values" were assigned to the previously labeled images based on their time stamp and pen name in the database, such that each labeled image was assigned to the real "velocity value". To obtain information about the amount of movement and to determine thresholds for automatic detection, a sample of 1023 images over different time frames from different pens was linked to the movement data. The movement data were then combined with the active (=1) and inactive (=0) image data to identify the threshold for group recumbency of pigs. To ensure accurate prediction of whether animals were moving, the threshold for automation was defined as the mean value for recumbent animals plus the standard deviation.
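As a rough illustration of this pipeline, the sketch below aggregates per-timestamp velocities into 5-min means per pen and derives the automation threshold as the mean plus one standard deviation over windows whose linked image was coded 0. The column names and table layout are assumptions, and resampling the codes alongside the velocities is a simplification of the database join described above:

# Sketch: 5-min average velocity per pen and a mean + SD threshold over
# windows labeled 0 (all pigs recumbent). Column names are hypothetical.
import pandas as pd

def recumbency_threshold(df: pd.DataFrame) -> float:
    # expected columns: 'pen', 'timestamp' (datetime), 'velocity' (m/s), 'code' (0/1)
    df = df.set_index("timestamp")
    v5 = df.groupby("pen")["velocity"].resample("5min").mean().rename("v5")
    code = df.groupby("pen")["code"].resample("5min").max()
    merged = pd.concat([v5, code], axis=1).dropna()
    lying = merged.loc[merged["code"] == 0, "v5"]
    return lying.mean() + lying.std()   # threshold for automatic detection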
In the first analysis, the data showed large intervals of movement while all pigs were recumbent. Upon further examination, we found that accuracy was related to the condition of the cameras: dirty camera lenses reduced detection efficiency. To quantify the dirtiness of the cameras, a blur detection algorithm commonly used in image processing was applied using OpenCV packages (Python 3.10.7, Wilmington, DE, USA; OpenCV version 3.1.0, Mountain View, CA, USA). This algorithm evaluates the blurriness of an image by quantifying its edges. For this purpose, the Laplacian is first calculated for each pixel of the image. The Laplacian considers the second-order derivative of the image topology, which yields large values when there are large differences between two adjacent pixels. This means that the Laplacian of one pixel is especially large if it represents the edge of a shape in the image. In blurred images, there are fewer edges (and fewer solid areas). These images have Laplacians with a lower variance; hence, we required a large variance when selecting images that are not blurred. The variance of the resulting values was then taken as a measure of blur. A Laplacian variance greater than 1000 was set as the internal threshold for blurring, i.e., all images with a lower value were considered to be blurred and excluded from further analysis. After the application of this criterion, 3960 images remained.
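A minimal version of this variance-of-the-Laplacian check follows the description above; the cutoff of 1000 is the study's internal threshold, while the file handling is hypothetical:

# Sketch: flag blurred (soiled-lens) images via the variance of the Laplacian.
# Images whose variance falls below the threshold are excluded from analysis.
import cv2

BLUR_THRESHOLD = 1000.0  # internal cutoff reported above

def is_sharp(image_path: str, threshold: float = BLUR_THRESHOLD) -> bool:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold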
Statistical Analyses To examine whether there was a significant correlation between the measured velocity and the recumbency code of an image, an independent-sample t-test was performed. The probability of error was set at α < 0.05. The threshold method was used to calculate the velocity value with the highest matching and lowest error rate, i.e., the highest velocity value with the fewest images labeled as standing. This method was chosen because the determined value includes as few misclassified data points as possible (e.g., a low velocity value but an image with standing pigs). Simple thresholding is a standard procedure in R statistical computing. First, all values are sorted in ascending order to obtain an overview of the values. After setting an initial threshold, all images coded as 1 and above the threshold were counted. Then, the threshold was increased in a stepwise manner using a loop function, and the number of images coded as 1 was counted again. The threshold value at which the number of images coded as 1 and above the threshold no longer decreased was recorded as the highest threshold value. Subsequently, we checked whether the threshold value contained the highest velocity value in an image coded as 0. If it did not, the process was repeated by setting the initial threshold value to the highest velocity value in an image coded as 0. This method allowed us to identify the threshold with the fewest 1-coded images above the threshold and the highest velocity value in images coded as 0. To test our data, we used R Studio, version 2022.12.0+353 (package "caret" [33]). For reproducibility, the random seed was fixed with set.seed(123). The procedure implements a threshold method and evaluates its performance using 10-fold cross-validation. In the first step, the 10-fold cross-validation is conducted to assess the model's performance. This method divides the dataset into 10 equally sized subsets or folds. In each iteration, one fold is used as the test dataset, while the remaining 9 folds are used for training the model. This process is repeated 10 times, with each fold serving as the test dataset once. For each iteration of cross-validation, the optimal threshold value is determined. The threshold is used to classify observations into either the positive class (movement) or the negative class (no movement). By trying out different threshold values, the one that maximizes accuracy is selected. Once the optimal threshold is obtained, predictions are made for the corresponding test dataset. The speed of each test observation is compared against the threshold to determine whether movement is present. Subsequently, various performance metrics are computed to evaluate the model's accuracy. Accuracy represents the proportion of correct predictions compared to the total number of predictions. In addition, the average sensitivity (true positive rate, recall) and specificity are also calculated (see Table 2). Sensitivity measures the model's ability to correctly identify the actual positive cases, while specificity assesses its ability to correctly identify the actual negative cases. The average sensitivity is reported as 0.978, and the average specificity is reported as 0.608. Table 2. Performance criteria for the threshold method. Cross-validation yielded an average accuracy of 0.941, an average sensitivity of 0.978, and an average specificity of 0.608. These values provide insights into the model's performance, its ability to classify both positive and negative cases accurately, and its overall sensitivity and specificity. Results The blur-adjusted dataset contained 3960 images, including 3549 pictures with moving animals and 411 showing group recumbency. Most of the recorded images were coded as containing moving animals (89.62%). Table 3 displays the frequencies of the images included in the analysis according to the pen (pens 1-10) in which they were taken and the code with which they were labeled (0 = all pigs recumbent/1 = at least 1 animal standing). Table 3. Number of images labeled as containing some standing or all recumbent pigs per pen. The independent-sample t-test revealed a statistically significant difference between the measured mean velocities for images containing all recumbent and some moving piglets (p < 0.001). This allowed us to use the data for further analysis. Figure 3 shows the frequencies of average pig velocities for images in which at least one pig is moving (gray) or all pigs are recumbent (black). As can be observed, images of recumbent pigs were associated with lower velocities. This indicates that the assumption that low velocities are associated with group recumbency is suitable for detecting this behavior in weaners. Our threshold method identified an optimal threshold of 0.0006020622 m/s that provided the highest accuracy (i.e., maximized correct predictions). Using a lower or higher threshold would reduce the accuracy of detecting recumbent and moving animals (see Figure 3). Figure 4 shows the classification accuracy according to the velocity threshold used to separate times when all pigs are recumbent from times when at least one pig is moving. The optimal threshold for correctly classifying whether all piglets are recumbent or at least one is standing yielded an accuracy of 94.1%, with a satisfactory sensitivity of 98.1% and an acceptable specificity of 60.8% for the used dataset. Table 4 shows the statistical results.
Table 4. Evaluation of the classification method. Discussion Monitoring group recumbency in nonexperimental conditions on commercial farms is challenging and is often not possible for farmers. Employing methods to automatically detect images showing group recumbency can help to improve farm management on various levels. The use of camera systems for pig and farrow management is diverse and has been evaluated in previous works. A systematic review summarized the variety of possible applications of such systems in research and in practical use [34]. AI-supported systems can facilitate a wide range of identification tasks, from behavioral detection to tracking of individual animals and disease diagnosis. This research subject has already been addressed by various research groups investigating the analysis and evaluation of imaging procedures for monitoring pigs [22,35]. The methodological approaches are diverse, but no reliable and uniform approach has been established thus far. Other authors have focused on recumbent pigs [22,36]. However, they relied on manual verification that the animals were recumbent, whereas our method allows automated detection of those pictures. In contrast to other studies, the present study developed a method for automatically detecting images containing all animals in a recumbent position. Our work outlines a suitable method for addressing challenges in which individual observation by artificial intelligence is made difficult by overlap, which is especially useful for piglets. The image data generated for our study can be used in a variety of ways. They can help farmers make decisions regarding animal welfare and provide a new perspective on previously overlooked issues. In this analysis, the recumbent behavior of groups of weaners was investigated. The study revealed the overall good performance of this method, with a high accuracy of 94.1%. However, a high proportion of the collected data was not suitable for further analysis due to challenges regarding camera use on commercial farms. The different number of images per pen stems from the variation in the soiling of camera lenses over the pens during the test period. This had different causes but was mainly due to dust and flies. By calculating the blur for each image, different numbers of images per pen were removed from the dataset (9634 images at the beginning/3960 after adjustment). Filtering the data by an artificial neural network enabled fast data cleaning. In the future, lens soiling could be detected directly by the AI-supported camera system to enable camera cleaning on-site. To generate a higher number of usable images, further investigations were made with a different type of camera. Bullet cameras tend to become less obscured under practical conditions than the dome cameras used in this experiment. However, this circumstance not only led to a limitation of the usable data but also provided insights for future research, especially regarding the practical use of different types of cameras. In farming practice, the usability of cameras is important.
In addition to dirt obscuring the lens, there are other handling pitfalls, such as cleaning intervals, internet connectivity, and farmer acceptance of monitoring systems. Another important point is the data itself. Since the velocity values were very small, it was difficult to perform statistical analysis without converting the data. Another limitation is that when pigs lay on top of each other and one was in a half-standing position, the image was still coded as group recumbency. The results must also be interpreted with the caveat that pictures with recumbent pigs are underrepresented in the overall dataset, even though the pictures were selected at random. Future studies need to validate the method used here with a larger and more balanced dataset and test it in other barns. This study represents a preliminary evaluation of the methodology and provides evidence that this approach can be used for piglet assessments. The overall goal of the present study was to identify only groups in which all weaners were recumbent. Among the images containing only recumbent pigs, 60.7% were correctly identified, which reflects an acceptable number of images. This system can thus provide farmers with images containing only recumbent animals. Using the optimal threshold, the system misclassified over one-third of the images in which all weaners were recumbent as containing standing or moving piglets. However, the aim was to provide images in which all animals are recumbent. Therefore, the sensitivity of 97.8% is a satisfactory result for preventing false positives. Consequently, the system provides farmers with only pictures in which all animals are recumbent. From a management point of view, targeted detection of piglets exhibiting group recumbency is essential. The automated provision of such image data can enable farmers to evaluate resting behavior from various aspects. Thus, through visual observation, the farmer can analyze resting behavior in relation to the barn temperature and respond accordingly when behavioral indicators suggest that the environment is too warm or too cold. Such responses can improve the management system and thus the performance of the weaners [37,38]. Furthermore, by detecting these images, conspicuous alterations of resting behavior by individual animals in the group can be identified, for example, as indicators of possible disease [39,40] or of group exclusion of individual animals. However, this ability was not the subject of this investigation. In addition, this automated method has the potential to enable further automation of barn temperature conditions, health, and animal welfare measures through the correct identification of image data. The correct detection of images can be used for rearing analyses to perform similar investigations and techniques as those carried out in pig fattening. Additionally, the use of the provided images can improve farm management, as the farmer can interpret the number of group-recumbency periods. A uniform group recumbency period can be used as an indicator of well-being, conformity, and homogeneous growth (which is economically relevant). Furthermore, group recumbency (and possibly the length of the recumbent phase) can be used to draw conclusions about the barn conditions. This can provide an economic advantage. The circadian rhythm of pigs also reflects whether the animals are doing well, especially in the weaning phase. In addition to the management aspects for the farmer, the provision of images can be used as a tool for health management.
Assessment of the conditions in which group recumbency is exhibited can inform disease monitoring. Given the presence of diseases such as African swine fever, this approach is of great benefit to other parts of the value chain, such as veterinarians. From the point of view of health management, deviations in the behavior of individual animals can also be observed, and depending on the indication, targeted treatments can be carried out. Various studies have investigated the use of technology to monitor pigs and weaners; these studies have had objectives ranging from health to animal well-being [41][42][43]. Our methodological approach enables the automatic detection of images in which all weaned pigs are recumbent, which can be used as a farm management tool. On the one hand, it represents an implementable solution for improved farm management using a system with object tracking. On the other hand, the automated detection of group recumbency in piglets is crucial input for further automation through artificial intelligence. The technique should be evaluated and compared with different camera systems. However, more images and data from different commercial operations are needed in the future. Conclusions In conclusion, the novel automated methodology developed in this study successfully detected group recumbency in piglets. These data were collected on farms; thus, this system can be used in practical farm management. More than 9000 images and associated velocity values were evaluated. Considering the abovementioned limitations of the dataset, the performance of our method showed high accuracy and sensitivity as well as acceptable specificity. Thus, this method could be used and marketed as a tool to improve farm management. This improvement not only assists farmers but can also assist others across the value chain, as well as other stakeholders. However, further studies and application of the generated images in automation scenarios are needed to enable widespread use and further exploitation of the data. Funding: This study was carried out as part of the "5G-Agrar: Nachhaltige Landwirtschaft" project. The project was supported by the German Federal Ministry for Digital and Transport (funding number: 165GU066F). Institutional Review Board Statement: No special permission under the Animal Protection Act (Section 7(2)) was required because no actions were taken that caused pain, suffering, or harm to these animals. Informed Consent Statement: Not applicable. Data Availability Statement: The datasets analyzed during the current study are available from the corresponding author on reasonable request.
2023-07-11T15:39:10.339Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "6ab6b69f614fd2568d7c61d4b3e9c03b5d76404b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/13/13/2205/pdf?version=1688549170", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "465c9e7476da523853505ad0a77d07fdb2b70e18", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
259321500
pes2o/s2orc
v3-fos-license
Socioeconomic Disparities in Patients Receiving Intravitreal Injections for Age-Related Macular Degeneration Amid the COVID-19 Pandemic Purpose: To determine the effects of socioeconomic factors on visit adherence and the resultant visual outcomes for patients receiving intravitreal injections for neovascular age-related macular degeneration during the COVID-19 pandemic. Methods: In this retrospective case-control study, medical records were reviewed to collect appointment attendance, age, sex, self-reported race/ethnicity, primary language, marital status, insurance, distance from clinic, and Area Deprivation Index (ADI), a measure of socioeconomic disadvantage. Multivariate regression models were created to determine differences in socioeconomic factors between individuals who attended (show group) and those who did not attend (no-show group) appointments. Results: The study enrolled 126 patients in the show group and 115 in the no-show group. On univariate analysis, nonadherence was significantly higher in non-White patients than in White patients (P = .04), in patients at urban sites than at suburban sites (P = 1.7 × 10−4), and in non-English-speaking patients than in English-speaking patients (P = 4.0 × 10−3). The associations remained significant in multivariate analysis for non-English-speaking patients (P = .03) and urban-site patients (P = .01) after adjusting for age, sex, self-reported race/ethnicity, primary language, marital status, insurance, distance from clinic, site of visit, and ADI. At 6 months and 1 year, the rate of 1-, 2-, and 3-line vision loss was significantly higher in the no-show group than in the show group on univariate and multivariate analysis after adjusting for age, sex, race, lens status, and presence of glaucoma and diabetic retinopathy. Conclusions: Non-English-speaking patients and urban-based patients were less likely to present for intravitreal injection appointments during the initial peak of the COVID-19 pandemic. This disparity translated to worse vision outcomes at 6 months and 1 year. Introduction Patients with neovascular age-related macular degeneration (nAMD) require regular monitoring and treatment with intravitreal injections to maintain vision. Lapses in treatment can lead to the formation of disciform scars or catastrophic subretinal hemorrhage with irreversible vision loss [3][4][5][6][7][8][9]. Although there has been extensive research on the impact of the pandemic on visit adherence and visual outcomes in nAMD patients receiving intravitreal injections, less is known about the effect of nonophthalmic factors, such as demographics and socioeconomic status. Research on health disparities has widely recognized that racial and ethnic minorities and those with lower income have worse health outcomes than White patients and those with a higher income [10,11,13,14]. To our knowledge, only 1 study has studied the effect of nonophthalmic factors (ie, race/ethnicity and systemic comorbidities) on adherence to intravitreal injection appointments during the pandemic [15]. This study found that compared with 2019, there was a drop in visit adherence in all racial and ethnic groups except patients identifying as Hispanic/Latino at a Veterans Affairs Hospital in Los Angeles County [15].
In our study, we investigated the effects of nonophthalmic factors on visit adherence for patients with nAMD having intravitreal injections. We retrospectively reviewed the medical charts of all patients scheduled to have an injection appointment during the initial wave of the COVID-19 pandemic at Boston Medical Center, the largest urban safety-net hospital in New England, or its 2 affiliated suburban eye clinics. We evaluated the effects of age, sex, self-reported race/ethnicity, primary language, marital status, insurance status, distance from the clinic, appointment location, and socioeconomic status and correlated these variables with short-term (6-month) and long-term (1-year) vision loss. Study Design, Setting, and Sample This retrospective single-center case-control study comprised patients with bilateral nAMD who had injection appointments scheduled from March 11, 2020, to May 26, 2020, at Boston Medical Center and its 2 affiliate suburban eye clinics. An injection appointment was defined as any appointment at which an intravitreal injection was given to the patient. This included procedure-only visits with a planned injection or possible injection visits with an examination and/or assessment before the decision to inject. Excluded were patients who had not received an injection within the 6 months before their scheduled appointment, did not have at least 1 year of follow-up, did not have bilateral nAMD, or lived outside Massachusetts. Institutional review board approval was obtained, and the study was conducted following the regulations set forth by the US Health Insurance Portability and Accountability Act of 1996 and adhered to the tenets of the Declaration of Helsinki. The 2020 study window coincided with the Massachusetts mandate authorizing only emergent medical appointments during the initial surge of COVID-19 [16]. The patients were categorized into 2 groups: the show group and the no-show group. The show group was defined as patients who attended their originally scheduled appointment or an appointment within 2 weeks of the original appointment. All other patients were placed in the no-show group. Patients were also categorized by appointment location: (1) urban if their appointment was scheduled at Boston Medical Center or (2) suburban if their appointment was at an affiliate suburban eye clinic. For each patient, the following data were also collected: age, sex, self-reported race/ethnicity, self-reported primary language, marital status, primary insurance, address of primary residence, and location of appointment (urban vs suburban). The better-seeing eye was selected as the study eye because patients tend to rely on the better-seeing eye for everyday activities. If a patient had the same VA in both eyes at baseline, 1 eye was randomly selected using a random-number generator. The lens status, presence of concurrent ocular comorbidities, and VA were recorded for the study eye.
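The grouping rule can be expressed directly; the record layout below is illustrative only, and reading "within 2 weeks" as a symmetric 14-day window is an assumption, since the study does not state the directionality:

# Sketch: assign a patient to the show or no-show group under the 2-week rule.
from datetime import date

def adherence_group(scheduled: date, attended: list[date]) -> str:
    # "Show" = attended the original appointment or one within 2 weeks of it.
    return "show" if any(abs((v - scheduled).days) <= 14 for v in attended) else "no-show"

print(adherence_group(date(2020, 4, 1), [date(2020, 4, 10)]))  # -> show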
VA was recorded at 4 timepoints; that is, at the 2 visits immediately before the study window, at the 6-month visit, and at the 1-year visit. The 6-month and 1-year visits were defined as within 2 months of the date that corresponded to exactly 6 months or 1 year after the scheduled appointment date in the study window. For patients in the show group and no-show group, the baseline VA was calculated by averaging the VAs recorded at the 2 visits before the study window within a 6-month period. The recorded Snellen VA was transformed to logMAR notation using the following equation: logMAR = −log10(Snellen VA). The change in VA was calculated via the following equation: logMAR VA at 6 months or 1 year minus the baseline logMAR VA. Then, for each patient the change in VA was evaluated for whether there was a loss of 1 line, 2 lines, or 3 or more lines. The Area Deprivation Index (ADI) at the state level was used as a proxy for socioeconomic status [18,19]. The ADI was obtained via the University of Wisconsin School of Medicine's Neighborhood Atlas website [20]. Patients' full addresses were used to obtain their ADI. The ADI at the state level was given as a decile (1 to 10), with a higher number indicating a more disadvantaged group. In the current study, socioeconomic disadvantage was defined as follows: 1-3 = low; 4-7 = average; 8-10 = high. Last, patients who had an appointment scheduled in the parallel 2019 study window (from March 11, 2019, to May 26, 2019) were also recorded as a control to compare visit adherence before and during the pandemic. Other visual outcome data were not compared because of insufficient data availability. Outcomes The primary outcome was visit adherence. The secondary outcome was the percentage of patients with vision loss of 1 line, 2 lines, or 3 or more lines at 6 months and 1 year. Statistical Analysis The covariates were compared between the show group and the no-show group. Categorical variables were compared using the chi-square test, and continuous variables were compared using the t test. Univariate and multivariate logistic regressions were used to examine the association between the show rate and no-show rate and vision loss of more than 1 line, 2 lines, or 3 lines during the pandemic and before the pandemic. Missing VA, race/ethnicity, marital status, ADI, and primary language data were imputed by Markov chain Monte Carlo multiple imputation. The multivariate regression models for visit adherence were controlled for age, sex, race/ethnicity, primary language, marital status, primary insurance type, distance from the clinic, appointment location, and state-level ADI. The covariates for the multivariate regression models for vision loss of 1 line, 2 lines, or 3 or more lines at 6 months and 1 year were age, sex, race/ethnicity, lens status, presence of glaucoma, and presence of diabetic retinopathy (DR). The presence of cataracts, glaucoma, and DR was included in the multivariate analysis because they were the most common ocular comorbidities that affected VA in our study population; other ocular comorbidities were rare. The analyses also examined the no-show rate and rate of vision loss of more than 1 line, 2 lines, or 3 or more lines for patients with follow-up before and during the pandemic using the McNemar test. All analyses were performed using Stata/IC software (version 12.1, StataCorp LLC). For the univariate and multivariate analyses, a P value less than 0.05 was considered statistically significant. Mean values are presented ± SD.
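A small sketch of the VA handling described above, assuming Snellen acuities are stored as "numerator/denominator" strings and using the convention that 1 line of vision corresponds to 0.1 logMAR:

# Sketch: Snellen -> logMAR, baseline averaging, and lines-of-vision-loss
# categorization (1 line = 0.1 logMAR; a positive change means vision loss).
import math

def snellen_to_logmar(snellen: str) -> float:
    num, den = snellen.split("/")
    return -math.log10(float(num) / float(den))        # e.g. "20/40" -> ~0.30

def lines_lost(baseline_va1: str, baseline_va2: str, followup_va: str) -> int:
    baseline = (snellen_to_logmar(baseline_va1) + snellen_to_logmar(baseline_va2)) / 2
    change = snellen_to_logmar(followup_va) - baseline
    return max(0, math.floor(change / 0.1 + 1e-9))     # whole lines lost

print(lines_lost("20/40", "20/40", "20/80"))  # -> 3 (a 0.30 logMAR decline)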
Effect of the Pandemic on Visit Adherence One hundred forty-nine patients had injection appointments scheduled in both the 2019 and 2020 study windows. Significantly more patients came to at least 1 scheduled appointment in 2019 (48%) than in 2020 (10%) (P < .0001), indicating that the pandemic reduced overall visit adherence for nAMD patients receiving intravitreal injections by almost 5-fold. Demographic Factors Affecting Visit Adherence During the COVID-19 Pandemic The 2020 study cohort consisted of 241 patients, with 126 in the show group and 115 in the no-show group. The mean age of the 175 women (72.6%) and 66 men (27.4%) was 80.9 ± 8.5 years, with no significant difference between the 2 groups (P = .80). Table 1 shows the univariate analysis of the 2020 patient demographics by visit adherence status. Race/ethnicity (P = .04), primary language (P = .004), and appointment location (urban vs suburban) (P = .0017) were significantly different between the show group and the no-show group. After adjusting for age, sex, race/ethnicity, primary language, marital status, insurance, distance from the clinic, appointment location, and ADI, only primary language and appointment location were significantly different in multivariate analyses. Non-English-speaking patients were less likely to come to their visits than English speakers (logistic regression coefficient [Coef], −1.13; 95% CI, −2.20 to −0.06; P = .03). Urban patients were less likely to present for their injection visits (Coef, −1.16; 95% CI, −2.11 to −2.18; P = .01). Non-White patients did not have worse visit adherence than their White counterparts. Effects of Visit Adherence on Vision Loss Table 2 shows the impact of nonadherence on VA at 6 months and 1 year for the 2020 study cohort. Fewer patients in the show group than in the no-show group had vision loss over the short term and long term. In the show group, which began with 126 patients, 122 patients had a follow-up appointment at 6 months and 110 had a follow-up appointment at 1 year. In the no-show group, which began with 115 patients, 86 patients had a follow-up appointment at 6 months and 80 patients had a follow-up appointment at 1 year. After adjusting for age, sex, race/ethnicity, lens status, presence of glaucoma, and presence of DR, the no-show group still had a significantly greater percentage of patients with a 1-line, 2-line, or 3-line vision loss at 6 months and 1 year than the show group, as shown in Table 3.
Conclusions To date, there are limited data on the impact of nonophthalmic factors on visit adherence by patients with nAMD receiving intravitreal injections during the COVID-19 pandemic. Ours is among the first few case-control studies to examine whether demographic factors, such as age, sex, race/ethnicity, primary language, marital status, insurance status, distance from appointment, appointment location, and socioeconomic health, affect visit adherence and long-term VA. We found that the initial surge of the pandemic negatively affected visit adherence by almost 5-fold over the previous year and that patients who were non-English speakers or sought care at our urban, hospital-based clinic were more likely not to show for their injection visit than patients who were English speakers or attended our affiliated suburban eye clinics. There was no significant difference in visit adherence for any other demographic factor studied. In addition, a significantly greater percentage of patients in the no-show group than in the show group experienced vision loss at the 6-month and 1-year follow-ups, indicating the negative long-term consequences of visit nonadherence. To our knowledge, only 1 other study, by Ashrafzadeh et al [15], examined the effect of nonophthalmic factors on injection appointment adherence during the COVID-19 pandemic. The results in our study differed from those of Ashrafzadeh et al in terms of race/ethnicity outcomes. Although they found that the visit adherence of Hispanic/Latino patients remained consistent, we noted a drop in univariate analysis for non-White patients but no differences on multivariate analysis. This can be explained by the higher percentage of Hispanic/Latino patients in our urban hospital-based clinic than in our suburban clinics and the lower visit adherence overall in the urban clinic on univariate and multivariate analyses. One possible explanation for why race/ethnicity was significant in our univariate analysis but not our multivariate analysis is that race/ethnicity, defined as shared physical traits or cultures, does not inherently affect visit adherence but rather that the social determinants of health and barriers to care disproportionately affect non-White individuals. Our multivariate analysis suggests that barriers to care, such as not speaking English and residing in an urban area, may be causes of visit nonadherence. Transportation and fear of COVID-19 might have disproportionately affected our urban patients because many rely on public transportation to travel to their appointments, resulting in lower adherence with the urban appointments for injections. Public transportation was a riskier mode of transportation during the initial surge of the pandemic and a known barrier to healthcare use in other marginalized communities [21].
In general, suburban patients are less reliant on public transportation. Moreover, patients might have felt that a suburban clinic that provided only eyecare posed a lower COVID-19 risk than a large urban-based hospital serving multiple specialties with a higher census of COVID patients [24][25]. Our study also emphasized the impact of visit nonadherence on short-term (6-month) and long-term (1-year) VA [3,9]. Stattin et al [9] reported that VA decreased by 1.9 Early Treatment Diabetic Retinopathy Study letters 1 year after injection appointments were missed during the pandemic. In a study by Soares et al [3], there was a drop in VA of 3 to 4 lines over a 1-year gap in care. The greatest proportion of patients in our no-show group had 1 line of vision loss. This indicates that missing an injection, even for a short period, can cause permanent damage and might be a predictor of worse long-term VA. This study was limited by the retrospective design and inclusion of patients who had a follow-up visit. In focusing on patients who had at least some follow-up data at 6 months and 1 year to assess the impact on VA, we excluded all patients who continually did not show at these timepoints. This could reflect a particularly vulnerable population who would benefit from further investigation. In addition, the impact of race/ethnicity, although significant on univariate analysis, was likely not adequately powered to show significance on multivariate analysis, probably because of the smaller samples of nAMD patients in those subgroups. Further studies are warranted to address this as a risk factor. Finally, it is unclear whether our typical reminder and rescheduling protocol was consistently functioning throughout the pandemic given the chaotic nature of the healthcare system, with staffing limitations and constant shifting of priorities. This could have been a potential confounder. In summary, the initial surge of the COVID-19 pandemic significantly affected visit adherence for patients with nAMD receiving intravitreal therapy. Non-English-speaking and urban hospital-based patient populations had a lower injection appointment attendance rate, which was a predictor of long-term visual consequences. This highlights disparities in healthcare that disproportionately affect marginalized populations and is a starting point for understanding and developing interventions to reduce visual harm to vulnerable populations. Table 2. Percentage with short-term and long-term vision loss by visit adherence (univariate analysis). a Patients who did not show up within 2 weeks of the scheduled appointment in 2020. Table 3. Vision loss in show group and no-show group during first surge of COVID-19 pandemic (logistic regression). a Adjusted for age, sex, race, lens status at 6 months or 12 months, presence of glaucoma, and presence of diabetic retinopathy. b Change in vision was calculated as visual acuity (VA) at the timepoint (6-month or 12-month visit) minus average baseline VA, with VA changes categorized by lines of vision loss. c Statistically significant (P < .05).
2023-07-05T05:06:50.862Z
2023-06-29T00:00:00.000
{ "year": 2023, "sha1": "e91b242caafb315c56de059bef7f9157d4f68f56", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10311364", "oa_status": "GREEN", "pdf_src": "Sage", "pdf_hash": "4ce1cb3c761e5d6de5fbaa7ad9372a2ded4b03bf", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
51894040
pes2o/s2orc
v3-fos-license
Endometrial cancer with an EML4-ALK rearrangement Abstract An 85-yr-old woman was diagnosed with endometrial adenocarcinoma, endometrioid type. Imaging studies showed a large tumor distending the endometrial canal without evidence of local invasion or extrauterine disease. A hysterectomy was performed, followed by microscopic examination of longitudinal tissue sections. Histopathological review showed only focal myometrial invasion, equivocal lymphovascular invasion, and negative bilateral sentinel lymph nodes (FIGO stage IA). A sample of the tumor was submitted for molecular testing (massively parallel sequencing on OncoPanel) and was found to harbor an inversion on Chromosome 2 resulting in an EML4-ALK gene fusion. Confirmatory immunohistochemistry showed ALK overexpression in just a portion of the tumor. Additional genomic characterization on a region of the tumor lacking ALK overexpression by immunohistochemistry was highly congruous with the genomic profile of the ALK-positive portion, showing similar patterns of copy-number variation and mutations in TP53 and KDM5C, with no evidence for an EML4-ALK gene fusion, confirming that the EML4-ALK rearrangement had occurred as a subclonal process. EML4-ALK fusions are driver events in 2%-5% of non-small-cell lung cancers; crizotinib is an approved targeted therapy for these patients. EML4-ALK rearrangements have not previously been reported in endometrial cancer. CASE PRESENTATION An 85-yr-old woman was diagnosed with endometrial adenocarcinoma, endometrioid type, grade 3 (of 3), via endometrial biopsy performed for post-menopausal bleeding. Imaging studies showed a large tumor distending the endometrial canal without evidence of local invasion or extrauterine disease. Subsequent total laparoscopic hysterectomy and bilateral salpingo-oophorectomy with surgical staging revealed a 5.8 cm, tan/white, exophytic mass involving the anterior aspect of the endometrial cavity. The polypoid nature of the tumor was most easily appreciated by microscopic examination of longitudinal tissue sections, where it appears folded over residual areas of benign endometrium (Fig. 1A).
Further histopathological review showed only focal myometrial invasion, equivocal lymphovascular invasion, and negative bilateral sentinel lymph nodes (FIGO stage IA).

TECHNICAL ANALYSIS AND METHODS

A sample of the tumor ("sample 1"; estimated percentage of neoplastic cells was 70%) was submitted for genomic characterization using OncoPanel (version 3), a hybrid-capture and massively parallel sequencing assay. Testing was performed in a CLIA-certified laboratory as previously described (Sholl et al. 2016; Garcia et al. 2017). DNA was isolated with a kit (Qiagen), followed by ultrasonic fragmentation (Covaris), size selection, and quantification. Sequencing libraries were prepared using KAPA HTP library preparation kits (Roche) and hybridized to a biotinylated RNA bait set (Agilent SureSelect). The assay (OncoPanel) is designed to capture and sequence the full coding regions of 447 cancer genes; additional baits are tiled on intronic regions of 60 clinically relevant known rearrangement loci. Streptavidin-captured libraries were PCR-enriched, size selected, and sequenced on an Illumina HiSeq 2500 with 2 × 100 paired-end reads.

VARIANT INTERPRETATION

Interestingly, sample 1 was found to harbor an inversion on Chromosome 2, involving a breakpoint within intron 19 of ALK and intron 21 of EML4. The resulting EML4-ALK fusion (see Table 1 for genomic coordinates; Fig. 1B) contains the amino-terminal portion of EML4 and the entire intracellular portion of the ALK protein, including the juxtamembrane domain and the catalytic tyrosine kinase domain, the latter of which is the presumed functional driver. Immunohistochemistry (IHC) showed ALK overexpression in just a portion of the tumor, with the marbled interface between ALK-positive and ALK-negative areas showing otherwise homogeneous morphologic features (Fig. 1A, black rectangle, and 1C). Subsequent molecular testing on a region of the tumor lacking ALK overexpression by IHC ("sample 2"; 70% tumor) was highly congruous with the genomic profile of the ALK-positive portion (see Table 1): both samples harbored mutations in TP53 (I251S) and KDM5C (K498N), among others, and similar patterns of copy-number variation (other than the inversion on Chromosome 2) across regions of the genome covered in our assay (Fig. 1D). The EML4-ALK gene fusion was notably absent, however, confirming that the EML4-ALK rearrangement had occurred as a subclonal process (roughly estimated, using mapped reads over the locus, at ∼20%-30% allele burden in the tumor).

SUMMARY

This is the first report of an EML4-ALK rearrangement in endometrial cancer. EML4-ALK fusions are driver events in 2%-5% of non-small-cell lung cancers; moreover, crizotinib is an FDA-approved targeted therapy for these lung cancer patients. Standard-of-care treatment for a FIGO stage IA grade 3 endometrial cancer is vaginal brachytherapy, which was pursued in this case. It was noted that if the cancer recurs, ALK-directed adjuvant therapy would be considered. Overall, the identification of this genomic result in an unexpected cancer type highlights the utility of a comprehensive genomic approach to direct precision cancer therapy.

Ethics Statement

The patient provided written, informed consent (protocol 11-104). This study was approved by the institutional review board of the Dana-Farber Cancer Institute and the Partners Human Research Committee.
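As an illustrative aside on the ∼20%-30% allele-burden estimate in the variant interpretation above: a minimal sketch (with hypothetical read counts and a simplified purity-scaling rule, not the study's actual pipeline) of converting a fusion-supporting read fraction into an approximate fraction of tumor cells carrying the rearrangement.

```python
def subclone_fraction(fusion_reads, total_reads, tumor_purity):
    """Rough fraction of tumor cells carrying a heterozygous rearrangement.

    Assumes the fusion sits on one of two chromosome copies in the cells
    that carry it, so cell fraction ~ 2 * allele fraction / purity.
    Purely illustrative; real callers model copy number and mapping
    bias explicitly.
    """
    allele_fraction = fusion_reads / total_reads
    return min(1.0, 2 * allele_fraction / tumor_purity)

# Hypothetical example: 50 of 200 reads over the breakpoint support the
# fusion (25% allele burden) in a sample estimated at 70% tumor cells.
print(f"{subclone_fraction(50, 200, 0.70):.2f}")  # ~0.71
```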
Assessment of levetiracetam and valproic acid as monotherapy for quality of life in partial epilepsy patients

INTRODUCTION

Epilepsy is a common neurological disorder manifesting as recurrent neuronal discharges, which may be limited to one region of the brain (focal or partial) or diffusely spread over multiple regions (generalized tonic-clonic seizure, GTCS); the latter is characterized by loss of consciousness, often preceded by a cry, with foaming, twitching, and vigorous jerky movements of the limbs. The annual incidence of epilepsy in the world population is 50 per 100,000; in the Indian population it is 27.3 per 100,000, with a prevalence of 5.59 per 1,000. 1,2 Definitely diagnosed epilepsy requires long-term treatment with anti-epileptic drugs (AEDs). 3

Monotherapy is considered the gold standard in epilepsy and is preferred over polytherapy because of the lower risk of adverse events and drug interactions, the decreased cost of therapy, and greater patient compliance. 4 Adverse effects (diplopia, ataxia, sedation, cognitive issues, hyponatremia, headache, weight gain, dizziness, depression, and paresthesia) occur at therapeutic doses in patients with epilepsy. Alongside the effects of epilepsy itself, adverse drug effects play a major role in determining quality of life in these patients.

There are many disease-specific tools for measuring quality of life. These take the form of questionnaires that can be administered to patients in the outpatient department and help assess the effect of both the disease and the treatment administered. Tools used in epilepsy include the Research and Development Corporation (RAND) 36-Item Health Survey (SF-36) and the Quality of Life in Epilepsy inventories (QOLIE-89, QOLIE-31, and QOLIE-10).

The efficacy of conventional AEDs is well established, but they fall short with respect to the adverse effects they cause. Newer AEDs, though introduced as add-on therapy to conventional AEDs, have shown equal efficacy, and their better safety profile gives them an edge over conventional AEDs. In this study we compared a broad-spectrum AED of the older generation, valproic acid (VPA), with a drug of the newer generation, levetiracetam (LEV). LEV has found both approved and off-label use in the majority of seizure types. Even after an extensive search, we found no studies to date, in India or worldwide, comparing VPA with LEV on efficacy, safety, and quality of life. Hence this study was planned to compare valproic acid and levetiracetam as monotherapy with respect to quality of life in patients with partial seizures.

METHODS

This was an observational analytical follow-up study in newly diagnosed patients with partial seizures, conducted over a period of one year from January 2016 to December 2016. The minimum required sample size was 60 patients, with 30 patients in each group.
The sample size was based on a previous study comparing quality of life in epilepsy patients. 5 Ethical clearance was obtained from the Institutional Ethical Committee, and patients were included after giving written informed consent. Patients were selected from the out-patient department of the Department of Neurology and followed up for a period of 12 weeks.

Patients satisfying the following inclusion criteria were included in the study: a diagnosis of partial epilepsy; either sex, aged 18-60 years; and stabilized on their respective drug dosage for more than 1.5 months but less than 4.5 months. Excluded from the study were patients suffering from any other type of epilepsy, patients with progressive CNS disease or lesions, any uncontrolled co-morbid condition, malignancy, hypersensitivity to the study drugs, participation in another study, deranged liver or renal functions, pregnant and lactating mothers, and patients who had experienced acute onset of seizures related to drugs, alcohol, or acute medical illness; patients leaving the study for any reason were excluded from the final analysis.

A demographic profile and detailed history were obtained from each recruited patient, including family history, educational status, age of onset of epilepsy, duration of disease, and personal habits. A general physical examination was performed, blood pressure was recorded, and EEG and CT head were done. Blood tests (haematological and biochemical) were done before the start of treatment.

Study subjects were divided into two groups of 30 each; drugs were assigned at the treating physician's discretion. The dose ranges at the start of the study were 500-2000 mg/day for levetiracetam (LEV) and 300-1000 mg/day for valproic acid (VPA). After recruitment, patients were assessed for quality of life with the QOLIE-10 questionnaire and were also evaluated for efficacy and safety. Patients were evaluated at 0, 6, and 12 weeks, or earlier as the need arose. Efficacy and safety were assessed at each visit with the help of a patient-maintained seizure diary and self-reporting of adverse drug reactions. Quality of life was evaluated at baseline (visit 0) and at 12 weeks.

Assessment of quality of life in patients: The QOLIE-10 is a brief standardized instrument for screening patients with epilepsy about the impact of epilepsy on their lives. It evaluates patients in three domains: (i) epilepsy effects, covering memory, physical effects, and mental effects; (ii) mental health, assessing energy, depression, and overall quality of life; and (iii) role functioning, covering seizure worry, work, driving, and social limits. Scores range from 1 to 5 for each question, giving a total between 10 and 50; the higher the score, the poorer the expressed quality of life.
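Before turning to the safety assessment, a minimal sketch of how QOLIE-10 totals and the paired intra-group comparison used later in the analysis can be computed (all item scores below are hypothetical, not study data):

```python
from scipy import stats

# Hypothetical QOLIE-10 item scores (1-5 each; 10 items) for one group,
# at baseline and at 12 weeks. Higher totals = poorer quality of life.
baseline = [[4, 4, 3, 4, 3, 4, 3, 4, 3, 3],   # patient 1
            [3, 4, 4, 3, 4, 3, 4, 3, 4, 4],   # patient 2
            [4, 3, 4, 4, 3, 4, 4, 3, 3, 4]]   # patient 3
week12   = [[2, 2, 1, 2, 2, 1, 2, 2, 1, 2],
            [1, 2, 2, 1, 2, 2, 1, 2, 2, 1],
            [2, 1, 2, 2, 1, 2, 2, 2, 1, 2]]

totals_0  = [sum(p) for p in baseline]   # totals range 10-50
totals_12 = [sum(p) for p in week12]

# Intra-group comparison: paired-samples t-test on before/after totals.
t, p = stats.ttest_rel(totals_0, totals_12)
mean_change = sum(a - b for a, b in zip(totals_0, totals_12)) / len(totals_0)
print(f"mean change = {mean_change:.2f}, t = {t:.2f}, p = {p:.4f}")

# Improvement expressed against the maximum score of 50, matching the
# way the study's 35.9% and 23.12% figures work out from Tables 3-5.
print(f"improvement = {100 * mean_change / 50:.1f}% of maximum score")
```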
Assessment of safety of treatment: A checklist of adverse drug reactions was prepared covering the most common adverse events of the study drugs, and adverse drug reactions were recorded at every monthly visit. The seizure diary was used to record patients' experiences weekly: how their seizures improved or deteriorated, seizure frequency and duration, post-ictal confusion, and seizure-related injury.

Data management and analysis were done using Microsoft Excel 2007 and IBM SPSS version 20.0. Demographic data are presented as frequency or mean±SD. Intra-group comparison was done using the paired-samples Student t-test, and inter-group analysis using the unpaired Student t-test. Adverse events were interpreted and analyzed using descriptive statistics and the chi-square test.

RESULTS

A total of 80 patients were included after primary screening. Of these, 13 were younger than 18 years and 7 were older than 60 years, so 60 patients were included in the final analysis: 30 in the LEV group and 30 in the VPA group. The basic demographic profile of the included patients is given in Table 1; there was no significant difference between the two groups in baseline characteristics. Baseline epilepsy characteristics of the included patients (family history, duration of disease, and frequency of seizures) are given in Table 2.

Quality of life was recorded at the start of the study and at the end of 12 weeks using the QOLIE-10 questionnaire; comparisons were made both within and between groups (Tables 3-5). Study subjects were also assessed for voluntarily reported adverse effects (Table 6). Adherence was assessed at 6 and 12 weeks: 6.67% of the LEV group and 10% of the VPA group were found to be non-adherent, and these patients also suffered seizure episodes during the study period. A cost comparison of total monthly therapy gave INR 1394±209.427 for LEV and INR 706.25±152.616 for VPA.

DISCUSSION

The ultimate goal of epilepsy treatment is a life free from seizures combined with optimal quality of life. Including quality-of-life outcomes in the standard management plan, alongside traditional measures such as seizure frequency and adverse effects, needs to be encouraged. To address this objective, the present study compared levetiracetam and valproic acid on the basis of quality of life in newly diagnosed patients with epilepsy.

Demographic details included age, gender, and place of residence. The mean age was 28.05±11.853 years in the LEV group and 23.00±5.279 years in the VPA group, similar to a study in which the mean patient age was 31.8±11.0 years. 7 The male-to-female ratio in the whole study population was 52:48, not markedly different from the 57:44 reported in the same study. 7 A rural-urban divide was also seen among the included patients: 63.33% versus 36.66% in the LEV group and 36.66% versus 63.33% in the VPA group (Table 1). The mean duration of illness was 4.37±2.587 years in the LEV group and 4.12±1.821 years in the VPA group (Table 2), lower than in another study, where the mean duration of disease was 6.62±4.21 years. 8 No episodes of status epilepticus were recorded in either group during the study, as all patients had already completed the titration phase at enrollment. A positive family history was found in 16.66% of the LEV group and 20% of the VPA group (Table 2), higher than in another study. 8

Epilepsy is a medical and social diagnosis, as epileptic individuals face numerous psycho-social problems (anxiety, social stigma, driving troubles, unemployment) that can negatively influence the quality of their lives.
The growing awareness of the importance of the psychosocial effects of epilepsy has led to the need to measure the quality of life of affected individuals. Therefore, the proper use of AEDs, the monitoring of adverse effects as an outcome measure, and the assessment of quality of life are important for the management of epilepsy in addition to optimal seizure control. 9

Standardized QOLIE-10 questionnaires were the main measurement of quality of life in our research. The QOLIE-10 evaluates three elements of the epileptic patient's health: epilepsy effects, mental health, and role functioning; a score was calculated for each scale, together with the complete QOLIE-10 score. 10

At the beginning of the trial, the baseline QOLIE-10 score in the LEV group was 34.58±1.835, which decreased to 16.63±1.832 at the end of 12 weeks (Table 3), a significant mean change of 17.95±2.527 (Table 5). This corresponds to a 35.9% improvement (the mean change expressed as a percentage of the maximum possible score of 50). This was supported by research carried out by SS Hassan et al., which showed a percentage change of 34.82%. 11 Subgroup analysis also showed improvement in all domains, with mean changes of 4.84±1.463 for epilepsy effects, 5.32±1.945 for mental health, and 7.79±1.988 for role functioning (Table 5); role functioning showed the greatest improvement.

In the VPA group, the baseline QOLIE-10 score was 29.00±3.204, which was reduced to 17.44±1.413 at the end of 12 weeks (Table 4), a statistically significant mean change of 11.56±3.540 (p<0.05) (Table 5), corresponding to an improvement of 23.12%. Two different studies support these findings: the SANAD trial comparing VPA with lamotrigine (LTG) and topiramate (TPM), 12 and similar research conducted in Spain comparing VPA and LTG, which showed improvement in quality of life from baseline. 5 Subgroup analysis showed improvement in all spheres, with mean changes of 3.56±1.750 for epilepsy effects, 3.50±1.966 for mental health, and 4.50±2.191 for role functioning (Table 5); role functioning again showed the maximum improvement. We could not find studies comparing these two drugs head to head, even after an extensive literature search. Inter-group comparison showed a statistically significant (p<0.05) difference in the mean change in QOLIE-10 score: 17.95±2.527 for LEV versus 11.56±3.540 for VPA (Table 5).

Seizure freedom is an important parameter for measuring the effectiveness of epilepsy treatment; the duration of treatment is determined by how quickly, and how well, seizure control is accomplished. This was evaluated using the patient-maintained seizure diary. The mean monthly seizure frequency at the start of the trial was 3.37±0.831 in the LEV group and 3.19±0.834 in the VPA group (Table 2). This seizure rate was lower than in other epilepsy studies, possibly because newly diagnosed patients participated in this study. 13 Seizure freedom at 6 weeks was 93.33% in the LEV group and 90% in the VPA group; at 12 weeks, 100% of patients were seizure free. This is consistent with another study in which seizure freedom did not differ between older and newer AEDs. 14

Medication adherence is significant in the treatment of chronic diseases such as epilepsy; it can influence seizure recurrence and affect quality of life. Adherence in our research was evaluated by pill counting.
Adherence at 6 weeks was 93.33% in the LEV group and 90% in the VPA group (the difference was not statistically significant); the slightly lower VPA figure could reflect its greater adverse effects compared with LEV. Enhanced compliance improves quality of life. 15

Adverse drug reactions are an important factor that can demotivate patients from continuing treatment: adverse effects reduce adherence to medication, increasing the likelihood of seizure episodes and lowering quality of life. Adverse events in the present study were recorded against the adverse-effect checklist throughout the study period. A total of 11 adverse events were reported, 8 with VPA and 3 with LEV (Table 6); the two groups did not differ significantly. As we found no head-to-head comparison of our study drugs, we correlated our results with other studies comparing older versus newer AEDs; our findings were in accordance with studies inferring that the two do not differ statistically in terms of adverse events. 16 Adverse events in the LEV group were drowsiness (2) and irritability (1); in the VPA group, anorexia (4), drowsiness (2), irritability (1), and loose stools (1). The most common adverse effect was drowsiness in the LEV group and anorexia in the VPA group (Table 6).

An important part of any study comparing two drugs is the assessment of the cost-benefit ratio in terms of efficacy and safety. In the present study, the average monthly cost of therapy was INR 1394±209.427 for LEV and INR 706.25±152.616 for VPA. Despite this significant difference in monthly cost, adherence was not affected in the way that might be expected with a costlier medication, although cost is an important factor determining continuation of medication, as stated by another study. 17

Anti-epileptic treatment effectively controls seizures in patients with epilepsy, and both drugs in our study provided effective seizure control and a positive influence on quality of life. Quality of life was not affected by gender; seizure type and the treatment administered had a positive influence on quality of life. There were no serious adverse events in either group. The major limitations of our study were its short duration and the inclusion of monotherapy only. The results do not give information about the epilepsy pattern and its effects in patients younger than 18 or older than 60 years, in pregnant women, or in patients with co-morbid conditions. Nevertheless, this study can pave the way for further studies comparing newer with older AEDs on quality of life in epileptic patients, an outcome that is mostly overlooked.

Anti-epileptic drugs are the mainstay of epilepsy treatment. In the present study, LEV was equal to VPA in efficacy in terms of seizure control, had fewer side effects, and showed significantly greater improvement in quality of life in patients with partial seizures.
NICOTINE DEPENDENCE MEDIATES THE RELATIONS BETWEEN INSOMNIA AND BOTH PANIC AND POSTTRAUMATIC STRESS DISORDER IN THE NCS-R SAMPLE

INTRODUCTION

Insomnia is the most common sleep problem (cf. hypersomnia), with prevalence rates ranging from 30 to 35% [Breslau et al., 1996]. Primary insomnia is defined by problems falling and staying asleep or nonrestorative sleep that persists longer than 1 month and results in functional impairment [American Psychiatric Association, 2000; American Sleep Disorders Association, 1990]. Research has demonstrated high comorbidity rates between chronic insomnia (i.e., greater than 6 months) and other types of psychopathology, including major depression, anxiety disorders, and substance use disorders [Eaton and Kessler, 1985; Reiger et al., 1984]. Specifically, of those with chronic insomnia, 40% meet criteria for at least one type of psychiatric comorbidity.

Within the anxiety disorders, panic disorder (PD) and posttraumatic stress disorder (PTSD) both demonstrate prominent symptoms of insomnia [Breslau et al., 1996; Sheikh et al., 2003]. Individuals with PD report higher rates of insomnia symptoms than persons without psychopathology [Overbeek et al., 2005]. Indeed, as many as 67% of those with PD experience clinically significant insomnia [Mellman and Uhde, 1990], with approximately 68% of persons with PD reporting difficulties falling asleep and 77% reporting disturbed and restless sleep [Sheehan et al., 1980]. Moreover, this relation remains after controlling for depression and nocturnal panic, suggesting the relation between PD and sleep problems is not entirely accounted for by these factors [Overbeek et al., 2005]. Sleep difficulties also are related to PTSD [Brewin et al., 1999; Kramer and Kinney, 1987]. For instance, adults with PTSD, compared to those without, report greater levels of sleep complaints, including trouble falling asleep, difficulty maintaining sleep, awakenings during the night, nonrestorative sleep, and daytime fatigue [Lavie et al., 1979; Mellman et al., 1995; Neylan et al., 1998; Pillar et al., 2000; Rosen et al., 1991; Sadavoy, 1997; Silva et al., 1997].

Although the associations among panic, PTSD, and sleep loss are well documented, relatively less is known about the processes that may account for this relation. One promising candidate in this domain is nicotine dependence. Smoking is common among persons with PD and PTSD [Lasser et al., 2000]. For instance, high rates of smoking have been documented among persons with PD [Zvolensky et al., 2005a], with estimates as high as 56% [Amering et al., 1999]. In fact, smoking rates among persons with PD are approximately twice as high as those observed in the general United States population [Lasser et al., 2000]. Moreover, persons with PD are frequently heavy smokers [McCabe et al., 2004]. Similarly, approximately 45% of persons with PTSD are current smokers, approximately 40% of these persons smoke heavily (≥26 cigarettes per day), and a large proportion of these individuals are nicotine dependent [Beckham, 1999; Feldner et al., 2007]. On the basis of this evidence of high rates of smoking and nicotine dependence among persons with PD and PTSD, it is likely that these persons frequently encounter nicotine withdrawal symptoms.
Although nicotine withdrawal is not a necessary criterion for nicotine dependence, research has suggested that those who are nicotine dependent are significantly more likely to experience withdrawal symptoms than those who are nondependent [Shiffman and Paty, 2006; Shiffman et al., 1994]. Furthermore, research has linked increased nicotine withdrawal severity to relatively greater nicotine dependence [r = 0.37; Gritz et al., 1991]. Therefore, nicotine withdrawal among people with PD or PTSD may, in part, be contributing to symptoms of insomnia. Indeed, symptoms of insomnia are common among smokers [Soldatos et al., 1980], and evidence suggests that nicotine-related withdrawal, at least in part, interferes with sleep [Colrain et al., 2004]. For instance, sleep disturbances such as frequent awakenings during the night [Prosise et al., 1994] are often reported during smoking quit attempts [Hughes et al., 1994]. Therefore, it may be nicotine-related withdrawal that, at least in part, interferes with sleep among these groups.

Taken together, there is evidence to suggest insomnia symptoms are related to both PD and PTSD. However, relatively little research has examined theoretically relevant processes through which this relation may be maintained [i.e., mediators; Holmbeck, 1997]. For this reason, examination of likely mediators of this relation is timely and an important next step toward better understanding the comorbidity between insomnia symptoms and both PD and PTSD. Given this backdrop, the present investigation examined the expected mediational role of nicotine dependence, as an index of theoretically relevant nicotine withdrawal, in the relation between PD and insomnia symptoms as well as between PTSD and insomnia symptoms. Specifically, we tested the hypotheses that nicotine dependence significantly influences the relation between PD and insomnia symptoms and between PTSD and insomnia symptoms. These hypotheses were tested among persons with PD and PTSD, as opposed to groups characterized by other types of anxiety-related psychopathology, because both smoking and insomnia are particularly elevated in, and therefore relevant to, PD and PTSD.

Consistent with recommendations [Baron and Kenny, 1986; Holmbeck, 1997], mediation, as opposed to moderation, was hypothesized on the basis of our a priori conceptualization of the role of nicotine withdrawal in the association between insomnia symptoms and both PD and PTSD. Specifically, nicotine withdrawal was conceptualized as a mechanism that at least partially accounts for this relation (a mediator), rather than a factor that affects the magnitude or direction of this relation (a moderator). Moreover, consistent with models of mediation [Kraemer et al., 2001], evidence suggests PTSD and PD frequently precede the onset of nicotine dependence [Feldner et al., 2007; Zvolensky et al., 2005a]. Complete mediation was not expected in either case, as nicotine withdrawal is likely not the only factor underlying the associations between insomnia symptoms and both PD and PTSD, but rather one factor among others (e.g., traumatic event exposure-related nightmares, basal levels of worry) that may account for these relations. Given a documented association between depression and (1) insomnia [Breslau et al., 1996], (2) PD [Gorman and Coplan, 1996], and (3) PTSD [Breslau et al., 2000], histories of major depressive episodes were statistically controlled to increase confidence that any observed findings were not accounted for by this factor.
Finally, to further ensure any observed findings were due to nicotine-dependence-related factors (nicotine withdrawal), diagnoses of drug dependence and alcohol dependence, as well as gender, were also entered as covariates.

METHODS

This study draws on the National Comorbidity Survey-Replication (NCS-R), a nationwide epidemiological study designed to assess the prevalence and psychosocial correlates of Diagnostic and Statistical Manual-Fourth Edition [DSM-IV; American Psychiatric Association, 1994] psychiatric disorders [Kessler et al., 1997]. Methods, weighting, and sampling procedures have been described comprehensively by Kessler and colleagues [Kessler et al., 2005; Kessler and Merikangas, 2004]. Briefly, the study was based on a nationally representative sample of 9,090 respondents, English-speaking adults 18 years and older. Data were collected between February 2001 and April 2003, with interviews conducted face-to-face in respondents' homes. Participants were selected from 48 states, based on a stratified multistage probability sample; the response rate was 73%.

Respondents were interviewed in two parts. All respondents (n = 9,282) received a diagnostic interview (part I) that took an average of about 1 hr to administer. To reduce respondent burden and control study costs, a subgroup (n = 5,692) received an additional assessment (part II) focused on risk factors, consequences, other correlates of psychopathology, and additional disorders. Part II participants included all part I respondents who met criteria for any lifetime core disorder plus a probability subsample of other respondents. The part II sample was used to examine comorbidity with disorders of secondary interest to the survey (e.g., sociodemographic correlates).

SAMPLING AND WEIGHTING

A stratified, multistage probability sample was used. The part I sample was post-stratified to match the 2000 Census population and weighted to adjust for differential probability of selection and for discrepancies between the sample and the United States population on census sociodemographic and geographic variables. The part II sample was also weighted to adjust for differential probability of selection from the part I sample. More details on NCS-R sampling and weights are reported in detail elsewhere.

RESPONDENT RECRUITMENT

Participants received a letter in the mail a few days before they were contacted by the interviewer. Respondents were interviewed in their homes and compensated $50 for participation.

CONSENT

Interviewers explained the study and obtained verbal informed consent before beginning each interview. The human subjects committees of Harvard Medical School (Boston, Massachusetts) and the University of Michigan (Ann Arbor, Michigan) approved the NCS-R recruitment and consent procedures.

INTERVIEWERS

Face-to-face interviews were carried out by professional interviewers from the Institute for Social Research at the University of Michigan, Ann Arbor, who had received extensive training and were closely supervised.

MEASURES

Diagnostic assessment. The NCS-R diagnoses are based on the World Health Organization Composite International Diagnostic Interview, a structured lay-administered diagnostic interview from which DSM-IV diagnoses could be derived.
As described by Kessler and colleagues [Kessler et al., 2005; Kessler and Merikangas, 2004], DSM-IV diagnoses include anxiety disorders (PD, agoraphobia without PD, specific phobia, social phobia, generalized anxiety disorder, PTSD, obsessive-compulsive disorder, separation anxiety disorder), mood disorders (major depressive disorder, dysthymia, bipolar I and II disorders), impulse control disorders (intermittent explosive disorder, oppositional-defiant disorder, conduct disorder, attention-deficit/hyperactivity disorder), and five substance use disorders (alcohol abuse, drug abuse, alcohol dependence, drug dependence, and nicotine dependence). Drug abuse, drug dependence, PTSD, and nicotine dependence were included only in part II because they all required extensive introductory questions. Nicotine dependence was assessed separately from drug dependence, as nicotine dependence was neither necessary nor sufficient for a drug-dependence diagnosis. The four disorders that require onset of symptoms in childhood (separation anxiety disorder, oppositional-defiant disorder, conduct disorder, and attention-deficit/hyperactivity disorder) were also included in part II and limited to respondents aged 18 to 44 years because of concerns about recall bias among older respondents. All other disorders were included in part I. Organic exclusion rules and hierarchy rules were used for all diagnoses other than the substance use disorders, which were diagnosed without hierarchy in recognition that abuse often is a stage in the progression to dependence.

Sleep problems. In a section on health-related behaviors and problems conducted in part II of the NCS-R, separate from the psychopathology (e.g., depressive symptoms) assessment, participants were asked several questions related to sleep problems. Specifically, participants were asked (yes/no) whether they had a period lasting 2 weeks or longer in the past 12 months with any of the following problems: problems getting to sleep, when nearly every night it took 2 hr or longer to fall asleep; problems staying asleep, waking up nearly every night and taking an hour or more to get back to sleep; problems waking too early, waking up nearly every morning at least 2 hr earlier than wanted; and problems feeling sleepy during the day (daytime fatigue). Other aspects of sleep problems were also measured in part II, including life interference related to sleep deprivation (e.g., falling asleep while having a conversation); as this study did not focus on that type of interference, it is not described in detail here. Instead, insomnia symptom levels were indexed by summing the presence (1) versus absence (0) of the above four symptoms of insomnia, excluding hypersomnia, due to the conceptual relevance of insomnia symptoms. Insomnia symptom scores thus ranged from 0 to 4 symptoms. In this study, this index of insomnia symptoms demonstrated adequate internal consistency (Cronbach's α = 0.73).
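As a minimal sketch of the internal-consistency computation for a four-item binary index like the one above (the response matrix is hypothetical, not NCS-R data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 0/1 responses of 8 people to the four insomnia items
# (sleep onset, sleep maintenance, early waking, daytime fatigue).
items = [
    [1, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 0],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.89 for these data
```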
DATA ANALYTIC APPROACH

All analyses were conducted using insomnia symptoms and dichotomous (yes or no) diagnoses of PTSD, PD, major depressive episode (MDE), alcohol dependence, drug dependence, and nicotine dependence present in the 12 months before the interview. First, descriptive analyses were conducted to examine associations among the study variables. Second, mediational hypotheses were tested with a series of three regression (i.e., logistic and linear) models [as recommended by Judd and Kenny, 1981] for each of the two independent variables (i.e., PD and PTSD). Diagnoses of major depressive episodes, alcohol dependence, and drug dependence, as well as gender, were entered as covariates at step 1 in all models to ensure any demonstrated effect was not due to these factors. Then, four primary associations (A, B, C, and C′) were examined (see Fig. 1), based on Baron and Kenny's [1986] recommendations for demonstrating mediation: (1) the predictor variable is significantly related to the mediator (pathway A); (2) the mediator is significantly related to the outcome variable (pathway B); (3) the predictor variable is significantly related to the outcome variable (pathway C); and (4) removing variance associated with the mediator from the association between the predictor and outcome results in a statistically significant reduction in the association (pathway C minus C′).

In the first model, logistic regression analyses were used to examine pathway A, in which PD or PTSD diagnoses were regressed separately on the mediator (nicotine dependence). In the second model, hierarchical linear regression analyses were employed to examine pathway C, in which PD or PTSD diagnoses were regressed on the outcome variable (insomnia symptoms). Finally, in the third model, hierarchical linear regression analyses were used to examine pathways C′ and B, in which PD or PTSD diagnoses and the mediator (nicotine dependence) were regressed simultaneously on insomnia symptoms. Consistent with recommendations for examining the statistical significance of a test of mediation [Holmbeck, 2002], we then conducted a Sobel [1988] test indexing the significance of nicotine dependence as a mediator of the relation between insomnia symptoms and both PD and PTSD diagnoses. The Sobel test is a conservative [MacKinnon et al., 1995] and recommended test of the significance of a mediator [MacKinnon et al., 2002]. Specifically, this test uses standard errors and raw coefficients to calculate a ratio indexing whether the indirect effect of the predictor variable on the outcome variable through the mediator is significantly different from zero. Pathway A represents the relation between the predictor variable and the mediator; pathway B the relation between the mediator and the outcome variable; pathway C the relation between the predictor and outcome variables; and pathway C′ the relation between the predictor and outcome after accounting for variance associated with the mediator.
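A minimal sketch of the Sobel computation described above follows; the coefficients and standard errors are hypothetical placeholders, not the NCS-R estimates:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z for an indirect effect a*b.

    a, se_a: coefficient and SE for predictor -> mediator (pathway A)
    b, se_b: coefficient and SE for mediator -> outcome (pathway B),
             with the predictor included in the model.
    z = a*b / sqrt(b^2 * se_a^2 + a^2 * se_b^2)
    """
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    # two-tailed p from the standard normal: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2))
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical pathway estimates.
z, p = sobel_test(a=0.50, se_a=0.12, b=0.40, se_b=0.10)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.89, p ~ 0.004
```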
DESCRIPTIVE DATA

Independent-samples t tests were conducted to assess differences in insomnia symptoms between those with versus without PTSD, PD, and all covariates (major depressive episodes, drug dependence, alcohol dependence, and gender). χ² analyses were conducted to examine differences in nicotine dependence between those with versus without PTSD, PD, and all covariates. People with PD, compared to those without, endorsed significantly higher levels of insomnia symptoms. In relation to gender, females endorsed significantly higher levels of insomnia symptoms than men (M = 1.17 [SD = 1.32] versus M = .94 [SD = 1.26], respectively; t = 6.65; P < .05), whereas men were significantly more likely to be nicotine dependent than women (6.1 versus 4.9%, respectively; χ² = 4.29; P < .05). Finally, nicotine-dependent persons, relative to those not dependent, reported significantly higher levels of insomnia symptoms (M = 1.47 [SD = 1.37] versus M = 1.03 [SD = 1.28], respectively; t = 7.98; P < .05).

ASSOCIATIONS AMONG PRIMARY VARIABLES

Consistent with recommendations for testing mediation [Baron and Kenny, 1986], the criterion that all factors involved in a mediation model be significantly related to each other was tested. See Table 1 for a summary of the regression analyses (Table 1 notes: n = 5,692; β, standardized beta weight; MDE, PD, PTSD, and ND are 12-month diagnoses of major depressive episodes, panic disorder, posttraumatic stress disorder, and nicotine dependence, respectively, coded 0 = not present and 1 = present; gender coded 1 = male, 2 = female; ns, not significant; logistic regressions are marked in the table).

In terms of PD, logistic regressions demonstrated that the presence of PD, compared to its absence, was significantly related to an increased likelihood of nicotine dependence (odds ratio, OR = 1.65; P < .05) after controlling for variance accounted for by histories of major depressive episodes, drug dependence, alcohol dependence, and gender. Second, regression analyses suggested PD was significantly related to insomnia symptoms (β = .11; P < .01) after controlling for variance accounted for by the covariates, such that the presence of PD was related to higher levels of insomnia symptoms. Finally, analyses suggested that both PD and nicotine dependence, after controlling for depressive episodes, drug dependence, alcohol dependence, and gender, significantly predicted insomnia (ΔR² = .01; adjusted R² = .09; P < .01). However, the relation between PD and insomnia remained significant even after nicotine dependence was included in the model. Thus, as predicted, nicotine dependence did not fully mediate the relation between PD and insomnia.

In terms of PTSD, logistic regressions (see Table 1) demonstrated that the presence of PTSD, compared to its absence, was significantly related to an increased likelihood of nicotine dependence (OR = 2.91; P < .01) after controlling for variance accounted for by the covariates. Analyses also suggested the presence, versus absence, of PTSD was significantly related to increased insomnia symptoms (β = .15; P < .01). Finally, when PTSD and nicotine dependence were entered simultaneously, both significantly predicted insomnia symptoms (ΔR² = .02; adjusted R² = .10; P < .01) after controlling for variance accounted for by the covariates at step 1 in the model. Importantly, the association between PTSD and insomnia remained significant after nicotine dependence was included in the model, supporting the hypothesis that nicotine dependence does not fully mediate the relation between PTSD and insomnia.

ANALYSES OF PARTIAL MEDIATION

The final recommended step for demonstrating partial mediation [Baron and Kenny, 1986; Holmbeck, 2002] was then conducted: examining the significance of the decrease in the relation between the independent variable (PD or PTSD) and the outcome variable (insomnia symptom levels) when both the mediator (nicotine dependence) and the independent variable are included in the regression equation. First, in terms of PD, a Sobel [1988] test suggested that the inclusion of nicotine dependence significantly decreased the strength of the association between PD and insomnia symptoms (z = 2.06; P < .05), such that the associated β weight decreased from .115 to .113.
Similarly, a Sobel test indicated that the inclusion of nicotine dependence significantly (z = 3.31; P < .01) decreased the relation between PTSD and insomnia symptoms, as evidenced by a decrease in the β weight from .15 to .14. Together, these patterns of results suggest the relations between insomnia symptoms and both PD and PTSD are partially mediated by nicotine dependence.

DISCUSSION

Although there is increasing recognition of the relation between insomnia symptoms and both PD and PTSD, additional research is needed on the possible mechanisms underlying this relation. The following three lines of evidence suggest nicotine-dependence-related nicotine withdrawal may be one such mechanism: (1) persons with PD and PTSD often are heavy smokers [Zvolensky et al., 2005a; Feldner et al., 2007]; (2) smokers report elevated levels of sleep problems [Soldatos et al., 1980]; and (3) smoking quit attempts are related to increased sleep disruption [Colrain et al., 2004; Hughes et al., 1994], which may be due to nicotine withdrawal. Building on this backdrop, this study aimed to further evaluate the relations between insomnia symptoms and both PD and PTSD by examining nicotine dependence, as an index of nicotine withdrawal, as a partial mediator of this relation in a nationally representative sample of adults from the NCS-R [Kessler et al., 1997].

Consistent with prediction, nicotine dependence was found to be a significant partial mediator of both the PD-insomnia symptoms and PTSD-insomnia symptoms relations. Specifically, after controlling for variance accounted for by histories of major depressive episodes, drug dependence, alcohol dependence, and gender, the significant relations between insomnia symptoms and both PD and PTSD were significantly attenuated when variance accounted for by nicotine dependence was removed. This pattern suggests symptoms of insomnia reported by individuals with PD or PTSD may be attributable, in part, to nicotine withdrawal.

It is important to consider two points regarding the possible significance of this finding, as the reductions in the relations between insomnia and both PD and PTSD when nicotine dependence was included in the models were relatively small in overall magnitude. First, all criteria for demonstrating mediation were met, even after controlling for multiple theoretically and empirically relevant covariates. Second, the Sobel [1988] test used to assess the statistical significance of a mediator is a particularly conservative test of mediation [MacKinnon et al., 1995]. Given these conservative aspects of the design, it is particularly noteworthy that the results supported the hypothesized mediational models [Abelson, 1985]. Despite the importance of demonstrating statistical significance in this conservative test, future research is needed to examine the clinical significance of these findings. For instance, monitoring the effects of successful smoking cessation on insomnia among persons with PD or PTSD would aid in understanding the clinical significance of these relations. Also, nicotine dependence only partially accounted for the PD/PTSD-insomnia symptom associations, highlighting that there likely are other mechanisms involved in this relation (e.g., psychophysiological hyperarousal).

This pattern of findings coalesces well with theoretical and empirical work suggesting persons with PD and PTSD may be particularly sensitive and reactive to nicotine withdrawal symptoms [Feldner et al., 2007; Zvolensky and Bernstein, 2005].
Specifically, nicotine withdrawal is characterized, in part, by increased bodily arousal [Hughes et al., 1990] and negative affect [e.g., anxiety; Baker et al., 2004; Patten and Martin, 1996]. The complete withdrawal syndrome emerges gradually, with early signs occurring almost immediately after smoking a cigarette [Jarvik et al., 2000; McCarthy et al., 2006; Schuh and Stitzer, 1995]. Therefore, for nicotine-dependent smokers, nicotine withdrawal symptoms begin shortly after the last cigarette smoked during the day (likely before sleep onset) and gradually increase during sleep. Individuals with PD and PTSD likely experience these withdrawal symptoms with more intensity than persons without these conditions, due to elevated trait-like fear of withdrawal-relevant sensations such as increased heart rate and anxiety [Taylor et al., 1992]. That is, a key characteristic of both PD and PTSD is that these individuals react fearfully to anxiety-related sensations [Schmidt et al., 1997, 1999; Taylor, 2003], which characterize, in part, nicotine withdrawal. Also, elevations in this sensitivity to anxiety-related sensations interact with acute nicotine withdrawal to increase reactivity to bodily sensations [Zvolensky et al., 2005b]. Together, persons with PD or PTSD are sensitive and reactive to nicotine-withdrawal-related sensations and nicotine-dependence-related withdrawal, and the current findings are consistent with the hypothesis that nicotine-related withdrawal, at least in part, interferes with sleep among these groups.

Although this study is not a direct test of this proposed theory, it is an early step in this area, and future research is required to substantiate the current findings. For instance, as research emerges on the relations among smoking, insomnia, and other types of psychopathology, particularly those within the anxiety spectrum (e.g., obsessive-compulsive disorder, generalized anxiety disorder), of which there currently is little, it may be beneficial to extend the current test to these areas as well. It also may be beneficial to extend the current model to other types of withdrawal syndromes to broaden our understanding of the role of drug-related withdrawal, generally, in insomnia. Finally, laboratory-based studies that allow more controlled investigation of the role of nicotine withdrawal in insomnia are now needed. For instance, monitoring the effects of experimental manipulations of nicotine withdrawal as a function of PD and PTSD, in terms of sensitivity and reactivity to withdrawal symptoms, would strengthen confidence in the model proposed herein.

The main effects observed in this study also represent important extensions of previous research. First, this study further supports nationally representative [Leskin et al., 2002] and other empirical research suggesting PTSD and PD are linked to elevated symptoms of insomnia [Mellman and Uhde, 1989; Neylan et al., 1998; Stein et al., 1993], extending this work to the NCS-R sample. It is also noteworthy that the increased likelihood of nicotine dependence among those with, versus without, PD and PTSD independently replicates previous findings [Farrell et al., 2001; Hapke et al., 2005]. A direction for future research that would continue to clarify these relations would be to examine the relative contributions of panic versus PTSD in terms of insomnia and nicotine dependence.
At this early stage in this research program, it was not possible to make a priori predictions regarding such relative contributions, given the high degree of overlap between the two conditions [Falsetti and Resnick, 1997]. However, future research examining such relative contributions may highlight specificity between these conditions in terms of nicotine dependence and withdrawal as well as insomnia, thereby suggesting disorder-specific (e.g., PTSD-related nightmares) mechanisms.

There are several caveats to consider when interpreting the current results. First, this study relied on self-reported nicotine dependence as a proxy for the hypothesized mechanism interfering with sleep: nicotine withdrawal. At this early stage of the research program, such a proxy was useful, as nicotine withdrawal is a diagnostic criterion for nicotine dependence. However, the limitations of this approach are important to consider. Nicotine dependence can be diagnosed in the presence of nicotine tolerance without endorsing withdrawal. As there currently is no theoretical rationale for predicting that such tolerance would influence insomnia, it was helpful to adopt nicotine dependence as a proxy for withdrawal. Future studies can improve upon this methodological approach: although nicotine withdrawal symptoms may be difficult to monitor during sleep, it may be possible to directly measure certain aspects of the withdrawal syndrome (e.g., increased bodily arousal) to test the role of nicotine withdrawal in sleep interference directly. Similarly, this study relied on self-reported insomnia symptoms, which may reflect memory biases associated with sleep deprivation-related negative affect. It will be important for future studies in this domain to include direct observation of sleep parameters, such as latency to sleep onset and sleep efficiency. This study also examined relatively broad categories of smoking-related behavior (i.e., nicotine dependence versus nondependence). Although advantageous at this early stage of research, as such an approach may emphasize smoking-related effects, future research may increase our understanding of the role of smoking in insomnia by examining continuous indices of smoking behavior (e.g., cigarettes smoked per day, continuous indices of nicotine dependence, severity of nighttime nicotine withdrawal). Finally, this research is cross-sectional in nature and therefore does not allow causal inferences regarding the role of nicotine withdrawal in insomnia. The cross-sectional design, however, plays an important role in informing the conceptualization and methodology of the longitudinal research required to claim causal relations [Kraemer et al., 2000].

Taken together, this study uniquely contributes to a growing literature focused on understanding processes that may underlie insomnia experienced by persons with PD and PTSD. Importantly, smoking is a malleable factor that could be targeted via therapeutic intervention. Thus, continued research on these types of processes may significantly advance our ability to prevent and treat insomnia among high-risk groups, such as nicotine-dependent persons with PD or PTSD.
Preparation and Application of Environmentally Friendly Electroactivated Solutions for Use in Beekeeping in Disinfection of Hives and Equipment

Analysis of the health status of bee colonies has shown that constant prevention and treatment of bees for concomitant diseases is necessary. The health of bees is directly related to their productivity. Most beekeepers use highly active chemicals to treat beehives and inventory. To change this situation, a search for new solutions for the treatment of bees is needed. One way to solve the problem is the use of electroactivated solutions, environmentally friendly and harmless to bees and beekeepers, obtained using diaphragm water electrolyzers. Studies of the developed model of a diaphragm electrolyzer using the Comsol program provided information about the physicochemical processes occurring in it over time, and comparison of the simulation results with experimental data showed good agreement. Studies on treating bee colonies in the apiary with electroactivated solutions show that the use of chemicals can be halved and the likelihood of drugs getting into bee products reduced.

Introduction

To obtain high-quality beekeeping products in large quantities, it is necessary to carefully monitor the health of the bees themselves. The relevance of this issue stems from the fact that bees are social insects: when one individual falls ill, the disease can be rapidly transmitted to the whole family. In the bee family, both diseases of adult bees and of their brood larvae can be present. The causes of infectious diseases can be bacteria, fungi, viruses, and parasites. The most widespread infectious diseases of bees are hafniasis, nosematosis, paratyphoid, braulosis, and acarapidosis. In addition to infectious diseases, bees also suffer from non-infectious diseases, for example toxicosis.

The strength of the bee family largely depends on its timely replenishment with new bees from the brood. One of the most frequent diseases of the brood is foul brood, the causative agents of which are bacteria of several species. The source of this disease is the bees themselves: they begin to remove the dead larvae and in doing so infect their mouthparts, which spreads the infection throughout the brood nest. Foul brood spores can remain viable for many years and are particularly resistant to chemicals; for this reason, concentrated solutions and mixtures of highly active disinfectants are used for treatment.

A particularly serious disease of bee colonies is varroatosis, which is accompanied by the death of larvae and a strong weakening of the family. The cause of this disease is the ectoparasitic mite Varroa jacobsoni. Female mites lay eggs in cells, and by the time the bees leave the honeycomb, the mites have become adults. Treatment and prevention are carried out with physical, chemical, and biological means.

In Russia, chemical methods of combating bee diseases, most often with antibiotics, have become the most widespread. The use of formic and oxalic acids, sprayed in compliance with precautionary measures (gas masks, special clothing, etc.), has proven effective. However, the use of various chemicals leads to toxic drugs getting into bee products, and over time pathogenic microorganisms adapt to the chemicals, prompting beekeepers to increase the dose.
This situation can be changed by using other methods of combating bacterial diseases of bees, based not on increasing the concentration of chemicals but on blocking individual processes of the vital activity of microorganisms. Ozonation and the use of electroactivated anolyte solutions can be recommended as such methods [1,2].

Materials and methods

Diaphragm electrolysis of water yields solutions (in particular, anolyte) with unique properties compared to traditional disinfectants [3]. From the beekeeper's point of view, the following requirements can be formulated for an antimicrobial agent: a wide spectrum of action; effective action against bacteria, viruses, fungi, and spores; safety for humans during preparation of the product and after its use; minimal corrosion activity toward the materials used in beekeeping; maximum ease of use; and low cost.

The production and subsequent decomposition of H₂O₂ in the process of diaphragm electrolysis of water is accompanied by the formation of compounds with high antimicrobial activity; the resulting radicals and atomic oxygen also take part in the destruction of microorganisms [3]. A solution containing such species thus acquires a universal spectrum of action: it damages pathogenic microorganisms while remaining safe for humans. The main active substances in anolyte are peroxide and chloro-oxygen compounds. Their combination makes it impossible for microorganisms to adapt to the biocidal action of the anolyte, while the low concentrations of these compounds make the solutions safe for use in the apiary.

Household diaphragm electrolyzers are commercially available; they are easy to operate and suitable for use by beekeepers (Figure 1). Studies [4] have shown that bubbling anolyte with ozone strengthens its antimicrobial effect. At the Kuban State Agrarian University, studies have been conducted on combating bee diseases by dissolving ozone in anolyte [5]; the results show the high efficiency of such solutions.

When current passes through the diaphragm electrolyzer, various chemical reactions occur in the anode and cathode chambers. The rate and occurrence of these reactions depend on the initial salt content of the water, the operating mode of the electrolyzer, and the materials used in it, in particular the electrode materials. In the cathode chamber, weakly mineralized water acquires alkaline properties as salts are transformed into hydroxides, and metal reduction processes also proceed in this chamber. Reactions forming chlorine-containing substances occur in the anode chamber. It was found that as a result of electrolysis the anolyte is saturated with oxidizing agents such as HClO, HCl, ClO, ClO₂, ClO₃, and O₃; hypochlorous acid (HClO) is the most powerful oxidizer.

Based on the chemical nature of the disinfecting effect of electroactivated aqueous solutions, the parameters of the electrolyzer and its operating modes must be determined. As our research shows, the solution should be used within 1-2 hours after preparation; this time is limited by the decrease in the concentrations of active compounds after electrolysis. The main chemical reactions were established and included in mathematical models [6]. The Comsol program was used to solve the mathematical models. Initially, a geometric model of the investigated diaphragm electrolyzer was developed (Figure 2).
Interfaces for the study of physical processes were selected in the software product and multiphysics couplings were established. Thermal processes occurring in the diaphragm electrolyzer were modeled with the Heat Transfer interface, in which the heat conduction equations were solved for solids (electrodes, installation housing), liquids (electrolyte) and porous media (diaphragm). Steel and ruthenium were chosen as the cathode and anode materials, respectively (as a rule, in household diaphragm electrolyzers the anode is a steel electrode coated with ruthenium). The diaphragm material is a tarpaulin. The initial temperature was assumed to be 20 °C. Convective heat flow was described by the following equation [7]:

q0 = h (Text − T),

where h is the coefficient of heat transfer through the surface, Text is the external temperature and T is the surface temperature. Using the "Heat Flux" functions in the Comsol environment and assuming natural convection on the outer part of the electrolyzer, the heat transfer coefficients were calculated. The hydrodynamic problem was solved with the "Laminar Flow" interface, since the fluid velocities are low and caused by natural convection. Electrochemical processes were modeled with the "Tertiary Current Distribution" interface. The concentrations of the following chemical species were set as initial conditions: Na, Cl, Mg, SO4, HCO3, K, Ca, Cl2, O2, H2, Fe, HClO, HCl, ClO2, CaCO3, O3, NaOH, ClO3, ClO. The kinetics of the electrode processes was implemented with the "Electrode Surface" interface. The modeled electrode reactions were the anode reactions forming O2 and Cl2 and the cathode reactions reducing H2 and Fe. The rates of the main chemical reactions occurring in the electrolyte were calculated in the "Chemistry" block.

As a result of solving the described physico-chemical problems, fields of fluid velocity, concentrations of the resulting chemical compounds, temperature and pH were obtained. Images of the fluid velocities during thermal convection inside the electrolyzer were analyzed. It can be seen from Figure 3 that the highest velocities, up to 4×10-3 m/s, occur in the anode chamber; liquid circulation zones are also observed near and behind the electrodes. Of particular importance is the formation of Cl2 in the anolyte. Figure 5 shows that after six minutes the chlorine gas is concentrated mainly behind the anode, with a maximum value of about 0.075 mol/m3, and after sixteen minutes the gas fills the entire anode chamber of the electrolyzer and its dissolution is underway.

Experimental studies were conducted to confirm the theoretical results. For the experiments, a laboratory installation was assembled using the Iva ionizer. To saturate the anolyte with ozone, a plate-type ozone generator with a capacity of 600 mg/h was additionally used. Ion concentrations were measured with an Expert-001 water analyzer with the following electrodes: ESC-10603/7 pH, ALICE-121 Ca, ALICE-121 K, ALICE-112 Na, HC-Mg-001, ALICE-131 Cl, ESr-10103/3.5. The conductivity of the water was measured with a TDS meter. The disinfecting effect of the obtained anolyte on beekeeping objects (beehives, bee inventory) was assessed as follows: 1 ml of microbial suspension was added to 9 ml of anolyte, and after 3 minutes a Petri dish was inoculated. Quantitative analysis of viable microorganisms was carried out using the Koch plate-count method.
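As a hedged illustration of how such plate counts translate into an efficacy figure, the sketch below computes colony-forming units (CFU) per ml and the log-reduction relative to an untreated control; all counts, the plated volume and the tenfold dilution factor are hypothetical placeholders, not measured values from this study.

```python
import math

# Hypothetical plate counts (colonies per plate), for illustration only.
colonies_control = 412   # untreated microbial suspension
colonies_treated = 3     # suspension after 3 min contact with anolyte
dilution_factor = 10     # 1 ml of suspension into 9 ml of anolyte
plated_volume_ml = 0.1   # assumed volume spread on each Petri dish

def cfu_per_ml(colonies: int) -> float:
    """Back-calculate CFU/ml from a single plate count."""
    return colonies * dilution_factor / plated_volume_ml

log_reduction = math.log10(cfu_per_ml(colonies_control) /
                           cfu_per_ml(colonies_treated))
print(f"log10 reduction: {log_reduction:.1f}")  # ~2.1 for these numbers
```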
Results and discussion

The experimental data obtained were compared with the simulation results. Figure 7 shows the change in the concentration of Cl in the anolyte over time for different values of the exchange current density i0 at the anode-electrolyte interface. The value of this current was found to have only a slight effect on the change in chlorine concentration in the anolyte. By the 14th minute, the Cl concentration had stabilized. The slight difference between the theoretical and experimental curves in Figure 7 can be explained as follows. Initially, the Cl concentration in the anolyte decreases because of the high rate of the Cl2 formation reaction. During this period, Cl ions flow in from the cathode chamber, but this process is slower than the formation of Cl2. At the same time, chlorine-containing species (HCl, HClO, ClO, ClO2, ClO3) are formed in the anode chamber and the Cl concentration levels off. Chlorine-containing species accumulate and convert from one compound to another. Part of the Cl2 released at the anode also dissolves, which increases the Cl ion concentration and gives the curve its rising form. Experimental and model data were also compared for all of the chemical elements involved [4,8].

Experimental studies were conducted on the effect of ozone-bubbled anolyte on a microbial suspension. Analysis of these experiments showed that the maximum effect is observed for electrolysis lasting 12.5 minutes followed by bubbling of the resulting anolyte with ozone for 2.2 minutes. A regression model was obtained for the effectiveness of the resulting solution against pathogenic microorganisms as a function of the electrolysis time and the ozone-bubbling time (see the sketch below). Figure 8 shows the boundaries of the rational parameters of electrolysis x1 (11 minutes) and ozonation x2 (1.4 minutes).

At an apiary in the Krasnodar Territory of the Russian Federation, a field experiment was conducted to study the effectiveness of electroactivated solutions in the maintenance of bee colonies. Using the resulting disinfectant solution, 20 hives and the bee equipment were periodically treated after wintering. There were also 5 control hives that were periodically treated with Nosemat (containing Metronidazole and Oxytetracycline). These comparative experiments showed the following: the amount of antibiotics (such as Oxytetracycline) used for the treatment of bees decreased by a factor of 2, the number of diseases decreased, and the probability of antibiotics getting into bee products decreased.
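A minimal sketch of the kind of two-factor regression described above, assuming a quadratic response surface in electrolysis time x1 and ozonation time x2; the data points are hypothetical placeholders chosen to peak near the reported optimum, not the experimental counts.

```python
import numpy as np

# Hypothetical (x1 [min], x2 [min], effectiveness) observations.
obs = np.array([
    [8.0, 0.5, 0.62], [9.0, 0.8, 0.70], [10.0, 1.0, 0.78],
    [11.0, 1.4, 0.85], [12.5, 2.2, 0.92], [14.0, 3.0, 0.88],
    [16.0, 3.5, 0.80],
])
x1, x2, y = obs[:, 0], obs[:, 1], obs[:, 2]

# Design matrix for y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 4))

# Evaluate the surface at the rational parameters reported in the text.
x_opt = np.array([1.0, 11.0, 1.4, 11.0**2, 1.4**2, 11.0 * 1.4])
print("predicted effectiveness at x1=11, x2=1.4:", round(float(x_opt @ coef), 3))
```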
Conclusions

The use of the Comsol environment made it possible to implement mathematical models of the main physico-chemical processes occurring in the diaphragm electrolyzer and to couple them through the appropriate interfaces. Solving the resulting models showed that the HCl and ClO concentrations in the anolyte increase from 0 to 0.001 mol/m3. The effective operating time of the considered diaphragm electrolyzer, in terms of the amount of Cl compounds in solution, is from 10 to 14 minutes at a current of 0.5 A and an initial Cl ion content of 0.03 mol/m3. Experimental studies confirmed the rates of change and the values of the concentrations of the main chemical compounds obtained in the simulation. By the end of 12 minutes, the relative errors between the experimental and model concentrations were in the range of 2 to 6%. Experiments on the effectiveness of the anolyte against pathogens showed that the greatest effect is achieved with an electrolysis time of 12 to 14 minutes followed by bubbling with ozone for 2 to 3 minutes.
2023-01-17T17:07:34.884Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "99a6cb70ea796a9441c9facc842ef449d66cf0fa", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2023/02/bioconf_itsm2023_07001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5d18ba1c7f06a7b0db0b80d76c82edf513561074", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
54500116
pes2o/s2orc
v3-fos-license
Quantitative sampling and analysis of trace elements in atmospheric aerosols: impactor characterization and Synchrotron-XRF mass calibration

Identification of trace elements in ambient air can add substantial information to pollution source apportionment studies, although they do not contribute significantly to emissions in terms of mass. A method for quantitative size- and time-resolved trace element evaluation in ambient aerosols with a rotating drum impactor and synchrotron radiation based X-ray fluorescence is presented. The impactor collection efficiency curves and size segregation characteristics were investigated in an experiment with oil and salt particles. Cutoff diameters were determined through the ratio of size distributions measured with two particle sizers. Furthermore, an external calibration technique to empirically link fluorescence intensities to ambient concentrations was developed. Solutions of elemental standards were applied with an ink-jet printer on thin films, and area concentrations were subsequently evaluated with external wet chemical methods. These customized and reusable reference standards enable quantification of different data sets analyzed under varying experimental conditions.

Introduction

Information on the temporal variation and size segregation of trace elements in ambient air greatly facilitates the identification of pollution sources. Particulate matter (PM) emissions caused by traffic exhaust and combustion processes are predominant in the fine size range (particle diameter < 1 µm). In contrast, mechanically produced particles and mineral or resuspended road dust particles are mainly found in the coarse size fraction (particle diameter > 1 µm). Most emissions, such as traffic and industry related emissions or atmospheric dilution processes, vary within a few hours. Conventional 24-h filter measurements do not detect these rapid changes because they lack the time resolution. Therefore, cascade impactors with shorter measurement intervals (in the order of hours), as developed by Lundgren (1967), are very valuable in combining size segregation with high time resolution. A variety of impactors exist with different size cuts, numbers of impaction stages, numbers of nozzles and substrates for various analysis methods (Hering, 1979; Raabe et al., 1988; Berner et al., 1980; D'Alessandro et al., 2003; Marple et al., 1991; Marple, 2004). Cahill et al. (1985) and Cliff et al. (2003) designed a three-stage rotating drum impactor. Bukowiecki et al. (2005) presented a modified design to obtain particle cutoff sizes of 2.5 µm, 1 µm, and approximately 0.1 µm, and included a first characterization of the impactor by determining the collection efficiency curves.

The highly time resolved measurements of trace elements in ambient air result in low amounts of sample material, in the range of a few µg per analyzed area. This demands a highly sensitive detection method such as synchrotron radiation based X-ray fluorescence spectrometry (SR-XRF), which provides high sensitivity on small analysis areas. Bukowiecki et al. (2008) established an automated procedure to analyze many spectra in a reasonable time. Since deviations relative to the desired size cuts would result in an incorrect size attribution of the particulate matter, knowledge of the size segregation characteristics of the impactor is crucial for data quality.
This paper describes the determination of the inverse collection efficiency curves for the three size ranges by use of an artificial aerosol generator. The rotating drum impactor is introduced in Sect. 2.1, followed by the results of the characterization study in Sect. 2.2. Trace elements sampled with the described impactor were analyzed with SR-XRF (see Sect. 3.1). For a quantitative analysis, raw spectral count rates have to be linked to ambient concentrations. The production of adequate reference standards for a consistent elemental mass calibration under different experimental conditions is the main focus of this paper, discussed in Sects. 3.2 and 3.3. Mass calibrated time resolved and size-segregated impactor data were finally compared to 24-h filter data in Sect. 4.

Impactor characterization

The rotating drum impactor (RDI; Bukowiecki et al., 2005, 2009) is a modification of the 3-stage UC Davis Rotating Drum Unit for Monitoring (3DRUM; Cahill et al., 1985). The 3DRUM was designed for continuous sampling of ambient aerosols in three ranges of aerodynamic diameter: 2.5-1.15 µm, 1.15-0.34 µm, and 0.34 to approximately 0.1 µm, with a sample flow of 22.7 l min−1. The objective of the modification was to design a new impactor with step-wise rotation for a volumetric flow of 16.6 l min−1 (corresponding to 1 m3 h−1) and particle size segregation in the ranges 10-2.5 µm, 2.5-1 µm, and 1 to approximately 0.1 µm. The three impactor stages will be referred to as stage 10 (PM10−2.5), stage 2.5 (PM2.5−1) and stage 1 (PM1−0.1) in the following. Particulate matter ≥ 10 µm is removed by the PM10 inlet (Digitel AG, DPM10/2,3/01) on top of the instrument. Due to the stepwise movement of the drum, aerosol particles are deposited in a bar-code-like structure on the film, as illustrated in Fig. 1. Modified RDI drums were designed to be used for sampling as well as for subsequent SR-XRF analysis, see Bukowiecki et al. (2008) and Sect. 3.1. These notched aluminum wheels allow the beam to pass through without interaction with the metallic wheel material. They are covered with a 6-µm polypropylene (PP) film coated with Apiezon M (M & I Materials Ltd.), a silicon-free hydrocarbon grease, to reduce sampling losses due to bouncing effects. One wheel has a capacity of 96 sample bars.

Fig. 1. RDI sampling drum: a notched aluminum wheel, coated with a 6-µm PP film, used for sampling and subsequent SR-XRF analysis as well as for the calibration. The bar-code-like structure of deposited particulate matter is visible on the film. The black color on the depicted stage 1 bars, with a width of 300 µm and a height of 10 mm, is mainly caused by soot deposition.

An advantage of the combination of RDI sampling on customized wheels with subsequent SR-XRF analysis is that measurements can take place without further sample treatment, unlike most conventional techniques such as inductively coupled plasma optical emission spectrometry (ICP-OES) and mass spectrometry (ICP-MS), reducing the risks of contamination and loss of analyte.

The impaction parameter, or Stokes number, is defined for an impactor as the ratio of the particle stopping distance at the average nozzle exit velocity U and the nozzle half-width Dj/2:

Stk = τU / (Dj/2) = ρp d50² Cc U / (9 η Dj),   (1)

with τ being the relaxation time, ρp the particle density, d50 the cutoff diameter, Cc the Cunningham slip correction factor and η the viscosity of air.
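A hedged numerical illustration of Eq. (1): the sketch below evaluates the Stokes number for stage 2.5 using the nozzle width given below, the jet velocity quoted in Sect. 2.2, unit particle density and an assumed slip correction factor; the exact values in the paper's Table 1 may differ.

```python
# Evaluate Eq. (1): Stk = rho_p * d50^2 * Cc * U / (9 * eta * Dj).
# Inputs for stage 2.5; Cc and rho_p are illustrative assumptions.
rho_p = 1000.0      # particle density, kg/m^3 (unit density assumed)
d50 = 1.0e-6        # cutoff diameter, m (stage 2.5 target: 1 um)
Cc = 1.17           # assumed Cunningham slip correction at 1 um, ~1 atm
U = 42.0            # average nozzle exit velocity, m/s (from Sect. 2.2)
eta = 1.81e-5       # dynamic viscosity of air, Pa*s
Dj = 0.68e-3        # nozzle width for stage 2.5, m

stk = rho_p * d50**2 * Cc * U / (9 * eta * Dj)
print(f"Stk(stage 2.5) ~ {stk:.2f}")   # ~0.44 with these inputs
```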
Under the assumption that the Stokes number remains the same for a given impactor design, the nozzle dimensions for the new RDI were derived through Eq. (1): first, Stokes numbers were calculated for the setup of the original 3DRUM. Based on these, the dimensions for the rectangular nozzles of the RDI with the new cutoff sizes were determined as (1.52 × 10) mm, (0.68 × 10) mm and (0.3 × 10) mm, see Table 1 for details. Since the cutoff diameter d50 of the lowest stage of the 3DRUM is not known precisely, it was estimated to lie between 0.06 and 0.12 µm. Due to this uncertainty, the d50 of the RDI for the lowest stage is expected to lie between 0.1 and 0.2 µm. Hinds (1982) suggested an ideal Stokes number of Stk50 = 0.59 for 50% collection efficiency for impactors with rectangular nozzles. To obtain a similar Stokes number for stage 1 as well, either a pressure of approximately 10 kPa, and with it an unrealistically high jet velocity that could provoke significant particle bouncing, would be necessary, or the nozzle size would have to be reduced to ≤ 0.1 mm for operation of the impactor at realistic pressure conditions (around 80 kPa). Small nozzles were not practical because of the risk of blockage when the instrument is employed in longer field campaigns. Moreover, the use of multiple small-area nozzles per impactor stage to achieve a more ideal Stokes number under low pressure conditions (as used, e.g., in the electrical low-pressure impactor, ELPI; Marjamäki et al., 2000) was not favorable, as the small deposition areas would add additional uncertainty to the applied analysis, which produces the best results for large and homogeneous deposition areas. Thus, the low Stokes number for stage 1 (0.06-0.14) is a result of the constraints of the coupled sampling and analysis method considered here, at the expense of a reduced impaction efficiency.

Determination of cutoff sizes

RDI characterization studies were previously conducted using laboratory room air as a quasi-stable proxy for urban ambient air (Bukowiecki et al., 2009). This experimental approach was suitable for the scope of that study, but the corrections necessitated by the use of room air restrict a general application. For this study, the cutoff diameters for stages 10 and 2.5 were determined with a condensation monodisperse aerosol generator (CMAG, TSI Inc., Model 3475), as in Kwon et al. (2003), and an aerodynamic particle sizer (APS, TSI Inc., Model 3321). Particles produced by the CMAG were quasi-monodisperse dioctyl sebacate (DEHS, C26H50O4) droplets in the size range from approx. 0.3 to 5 µm (geometric standard deviation σg = 1.4). Settings for the CMAG varied within the following values: saturator flow 2.25-3 l min−1, saturator temperature 235-240 °C, while the reheater temperature remained constant at 100 °C. The average particle concentration produced by the CMAG was 400 particles cm−3 after a dilution stage, avoiding too high a concentration in the APS. For the cutoff determination of stage 1, particles in the order of 0.1 µm were required, which are difficult to produce with the CMAG. For this purpose, polydisperse NaCl particles were produced with a nebulizer, directed through a dryer and measured with a scanning mobility particle sizer (SMPS) consisting of a differential mobility analyzer (DMA, TSI Inc., Model 3081) and a condensation particle counter (CPC, TSI Inc., Model 3025, high flow).
However, higher particle bouncing is expected for NaCl particles because they do not stick to the substrate as well as the oil droplets. While the APS is a suitable device to measure coarse particles (APS size interval: 0.542-19.8 µm, aerodynamic diameter), the SMPS is the more adequate choice for particles in the fine fraction (size interval of the employed SMPS: 7-300 nm, mobility diameter). The mobility diameter dmob measured with the SMPS was transformed into an aerodynamic diameter dp using the following recursive equation:

dp = dmob · sqrt( ρp Cc(dmob) / (ρ0 Cc(dp)) ),   (2)

taking into account the respective Cunningham slip correction factors Cc(dp) and Cc(dmob), the unit density ρ0 (1 g cm−3) and the density of DEHS (ρp = 0.91 g cm−3) and NaCl particles (ρp = 2.16 g cm−3), respectively.
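A minimal sketch of this fixed-point conversion, assuming the standard three-coefficient Cunningham slip correction with a mean free path of 65 nm at ambient conditions; the coefficients and the convergence tolerance are illustrative choices, not values from the text.

```python
import math

LAMBDA = 65e-9  # assumed mean free path of air, m

def cunningham(d: float) -> float:
    """Cunningham slip correction, standard 3-coefficient form (assumed)."""
    kn = 2 * LAMBDA / d
    return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def aerodynamic_diameter(d_mob: float, rho_p: float, rho_0: float = 1000.0) -> float:
    """Solve Eq. (2) by fixed-point iteration: dp depends on Cc(dp)."""
    dp = d_mob  # initial guess
    for _ in range(50):
        dp_new = d_mob * math.sqrt(rho_p * cunningham(d_mob) /
                                   (rho_0 * cunningham(dp)))
        if abs(dp_new - dp) < 1e-12:
            break
        dp = dp_new
    return dp

# NaCl particle (rho_p = 2.16 g/cm^3) with 100 nm mobility diameter:
print(f"{aerodynamic_diameter(100e-9, 2160.0)*1e9:.0f} nm")
```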
Figure 2 displays the schematic experimental setup, which is essentially the same as in Bukowiecki et al. (2009). For the average particle concentration (reference measurement), the APS measured directly after the CMAG (subsequently referred to as setup A). In setup B, the APS measured the particle size distribution sequentially after stages 10 or 2.5, and the SMPS then measured after stage 1. Switching the APS and SMPS back and forth between setup A and setup B eliminated differences previously encountered when using two different APS instruments (both APS Model 3321). Measurements in each setup were repeated at least 6 times, where a single measurement lasted about 600 s. Time intervals were 5 s for each APS sample, and the SMPS scanning interval lasted about 300 s. Instruments were connected with conductive tubing (TSI Inc.) and attached to specially manufactured RDI stage covers. Experiments were performed at ambient temperature (≈ 25 °C). The flow was measured with a primary flow calibrator (A. P. Buck Inc.) and regulated with a mass flow controller (red-y, Vögtlin Ltd.) before each experiment to assure the necessary flow and pressure conditions for correct operation of the RDI (i.e. 16.6 l min−1). This is especially important for stage 1 because of the low cutoff diameter (0.1-0.2 µm). Here, the jet velocity is much higher (106 m s−1, compared to 18 m s−1 for stage 10 and 42 m s−1 for stage 2.5) and the pressure drops from about 101 to 88 kPa. Stage 1 is close to the transition from an impaction to a diffusion controlled deposition regime. To ensure that identical pressure and flow conditions are maintained at the respective nozzles in both setups, the inlet flow rate and pressure were monitored and adjusted accordingly. To accommodate the additional flow rates when the particle sizers are connected in setup B (5 and 1 l min−1 for the APS and SMPS, respectively), the RDI was connected in setup A as well.

Size separation characteristics were obtained from collection efficiency curves plotted versus aerodynamic diameter. The cutoff diameter and the stage penetration midpoint diameter d50 both denote the diameter where 50% of particles are collected and 50% pass through (Hinds, 1982). Inverse efficiency curves Ei(dp) were computed as the ratio of the averaged particle size counts measured in setup B and setup A:

Ei(dp) = NB,i(dp) / NA(dp),   (3)

with i being either stage 10, 2.5 or 1. Following the concept introduced previously by Bukowiecki et al. (2009), Ei(dp) is also referred to as the stage penetration at a given particle size dp, i.e. the non-deposited particle fraction. Kwon et al. (2003) suggested a sigmoidal fit for the inverse efficiency curve, here written in logistic form as

Ei(dp) = ai / (1 + exp((dp − ci)/bi)),   (4)

with parameters ai, bi, ci tuned to provide the best fit to the experimental data. In this approach the d50,i are obtained directly from the parameter ci, the inflection point. This is equivalent to calculating the inflection point as the zero of the second derivative E″i(dp). All calculated cutoff diameters are compiled in Table 1. Stage penetration curves Ei(dp) and second derivatives E″i(dp) of the sigmoidal fits are displayed in Figs. 3 and 4. Even with little noise in the data, pronounced peaks occur in the derivative; the concept is therefore only applicable to parts of the curve with very little noise. The fit in the region of diameters smaller than 100 nm for stage 1 is not as satisfactory as for the other stages, because variations in the size distribution in the Aitken mode (particles < 100 nm) can have considerable effects on the ratio.

If all particles larger than the cutoff diameter were impacted and all smaller particles passed through, the impactor would have perfectly sharp size cuts and a step-function efficiency curve. Deviations from this ideal step function result in a number of oversized particles that pass through and a number of undersized particles that are collected, quantities which ideally should be symmetric. Bukowiecki et al. (2009) introduced the first derivative of Ei(dp) as a measure of the cutoff sharpness, which is plotted in Fig. 5 versus the normalized diameter dp/d50,i. For better readability, all first derivatives E′i(dp) were normalized to 1. The curves suggest that the cutoff sharpness achieved with the tested RDI is rather broad, especially for stage 1, as already observed by Bukowiecki et al. (2009). An implication is that the size limits of individual RDI stages might be slightly smeared out and that smaller particles could be deposited on a higher stage and vice versa. Several comparisons to concentrations of 24-h filters obtained with high-volume samplers for PM10, PM2.5 and PM1 showed that this effect is not significant on a mass basis, see Sect. 4 and Bukowiecki et al. (2005).

The values derived for the cutoff diameters of stage 10 (2.4 ± 0.2 µm) and stage 2.5 (1.03 ± 0.02 µm) correspond well with the theoretical values of 2.5 and 1.0 µm, respectively. They confirm the values obtained in the previous study: 2.4 ± 0.2 µm and 1.0 ± 0.02 µm. For stage 1 a value of 0.20 ± 0.02 µm was found, which lies within the expected range of 0.1-0.2 µm. Through the use of two aerosol generators, the respective aerosol concentrations were high enough to obtain cutoff diameters directly from the data without further corrections. The presented results verify a correct size segregation within the RDI.
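A hedged sketch of the fitting step behind Eq. (4) and the extraction of d50 as the inflection point ci; the penetration data below are synthetic placeholders, and the logistic form follows the reconstruction above rather than the exact parametrization of Kwon et al. (2003).

```python
import numpy as np
from scipy.optimize import curve_fit

def penetration(dp, a, b, c):
    # Logistic stage penetration, Eq. (4): inflection at dp = c (= d50).
    return a / (1.0 + np.exp((dp - c) / b))

# Synthetic stage 2.5 penetration data (dp in um), for illustration only.
dp = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5])
E = np.array([0.97, 0.93, 0.78, 0.52, 0.30, 0.12, 0.04, 0.02])

popt, pcov = curve_fit(penetration, dp, E, p0=[1.0, 0.2, 1.0])
a, b, c = popt
print(f"d50 = c = {c:.2f} um (+/- {np.sqrt(pcov[2, 2]):.2f})")
```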
Synchrotron radiation based X-ray fluorescence spectrometry (SR-XRF)

The low aerosol mass on individual RDI bars demands a highly sensitive detection method. Additionally, a method for high-throughput analysis was required for RDI sampling in field campaigns, where the typical number of individual samples can easily reach about 5000. SR-XRF provides a sufficiently high sensitivity and easy sample handling, but it depends on many parameters and therefore requires external calibration when not all of them are known (Rousseau et al., 1996). Calibration through model calculations of mass absorption coefficients, excitation factors and instrumental characteristics (fundamental parameter analysis) was not applicable due to the high variability in the elemental composition of the sampled particles. Adequate reference materials, in terms of similar elemental composition, particle size, sample homogeneity and substrate thickness, for ambient aerosol analysis on PP films are still scarce today. Thus a general, reusable reference, similar to the sample in terms of matrix composition and sample thickness, was developed for the calibration of each experimental setup and session to obtain a correct quantification. This provides comparability between different analyses, since conditions during experimental sessions at different synchrotron radiation facilities, such as excitation energy, photon flux and detector efficiency, can change significantly. This section explains the basic aspects of the applied SR-XRF setup, followed by a description of the development of a technique for external and internal reference element quantification (Sects. 3.2 and 3.3).

Kα lines of a wide range of elements (Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Zr, Mo, Cd, Sn, Sb and Ba) as well as the Lα line of Pb were detected with XRF at two synchrotron facilities. Elements with atomic number Z = 13-24 (Al-Cr) were measured with a silicon drift detector (Roentec Xflash 2001 type 1102, Bruker AXS) with a nominal resolution of 155 eV (Mn Kα at 5.9 keV) at the X05DA beamline of the Swiss Light Source (SLS) at the Paul Scherrer Institute, Switzerland. The total available photon energy ranges from 5.5 to 22.5 keV at a bending magnet with a cryogenically cooled Si(111) channel-cut monochromator (Flechsig et al., 2009). Trace elements were examined in the focused monochromatic mode, with an energy of 11.5 keV and a usable photon flux of 2×10^11 photons s−1 within a 70 × 140 µm (h×v, full width at half maximum, FWHM) focus area. A helium atmosphere was applied to reduce absorption effects in air and to eliminate the Ar fluorescence line at 2.9 keV (occurring with measurements in air).

Elements with atomic number Z = 25-82 (Mn-Pb) were measured at beamline L, HASYLAB, DORIS III storage ring at DESY, where a bending magnet provides a polychromatic spectrum with usable photon statistics up to 80 keV. Al and Cu absorbers of various thicknesses (high-pass filters) can be used to shift the energy maximum towards higher energies, thus reducing background effects from lower energies. For measurements of the data set presented in Sect. 4, an 8-mm Al absorber was used. A polychromatic 100 × 200 µm (h×v) wide beam irradiated the sample, and a nitrogen cooled Si(Li) detector (Sirius 80, Gresham) with a nominal resolution of 144 eV (Mn Kα) measured the fluorescence counts. This detector is suitable for measuring Kα lines of heavier elements (up to the Ba Kα line at 32 keV) due to its large active volume (Si crystal depth: 4 mm, area: 80 mm2) and a Be window thickness of 12.5 µm. Since the detection efficiency of Si(Li) detectors decreases for lighter elements as electronic noise increases, a Si drift detector with a 3-µm polymer window (plus a 0.5-µm aluminum layer) and a smaller active volume (depth: 450 µm, area: 10 mm2), able to process higher count rates, was employed for the measurements at the SLS. Low peaking times (in the order of τp = 1 µs) are suitable for this detector, but would lead to too low an energy resolution in the Si(Li) detector, which was operated with τp = 12 µs. The efficiency of Si drift detectors decreases rapidly at higher energies because the thin Si crystal absorbs less than 30% above 11 keV. Thus, each detector is matched to the chosen excitation energies.
The detector dead time was kept below 30% by reducing the size of the photon beam. Output and input count rates (OCR, ICR) were measured while varying the opening of the exit slits to investigate the detector saturation regime. Determining the relationship between OCR and ICR for a given detector under given experimental conditions enabled correcting for potential dead time effects in the detector. Sample wheels were rotated with a goniometer in steps of 3.51°, corresponding to the separation of individual RDI bars, and each bar was typically irradiated for 20-30 s. Since synchrotron radiation is linearly polarized, positioning the fluorescence detector in the polarization plane at an angle of 90° with respect to the incident beam substantially reduced the spectral background due to coherent and incoherent scattering.
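Returning to the dead-time correction mentioned above: the text does not state the functional OCR-ICR model, so the sketch below uses the common paralyzable-detector form OCR = ICR·exp(−ICR·τ) as an assumption, inverting it numerically to recover the true input rate; the dead-time constant is an illustrative value.

```python
import math
from scipy.optimize import brentq

TAU = 2.0e-6  # assumed effective dead time per event, s (illustrative)

def ocr(icr: float) -> float:
    """Paralyzable detector model (assumed): output rate vs input rate."""
    return icr * math.exp(-icr * TAU)

def icr_from_ocr(measured_ocr: float) -> float:
    """Invert the model on the rising branch (ICR < 1/TAU)."""
    return brentq(lambda icr: ocr(icr) - measured_ocr, 1.0, 1.0 / TAU)

measured = 8.0e4  # observed output count rate, counts/s
true_rate = icr_from_ocr(measured)
print(f"ICR ~ {true_rate:.3e} counts/s; "
      f"dead-time loss {100 * (1 - measured / true_rate):.1f}%")
```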
Absolute mass calibration

Fluorescence counts of sample elements can be linked to the area concentration (µg cm−2) if the deposited mass of one sample, chosen as reference, is determined externally by wet-chemical methods in a subsequent step. An example is ICP-OES, but it requires more analyte mass than is deposited on a single RDI bar, because its sensitivity is not high enough for these minute masses. Furthermore, the sample material digestion and possible contamination error demanded a new, non-destructive method without structural modification. Since readily available calibration films of similar composition and thickness as the sample do not exist, producing a customized calibration film became necessary. Fittschen et al. (2006, 2010) introduced a concept for applying picoliter droplets via an ink-jet printer on different reflector substrates for TXRF. Transferring this approach to the substrates used in SR-XRF by Bukowiecki et al. (2008) gave rise to a procedure of applying standard solutions on thin films with an ink-jet printer. Calibration films were fixed on RDI sample wheels to ensure that the experimental conditions for the SR-XRF analysis are the same. The use of a Compact Disc-label printer (HP Photosmart D5160), along with the film structure itself, ensured optimized adherence of the solution on the substrate, because the film is not bent and transported by small brushes as in conventional printers. Clean printer cartridges (type HP 339, completely cleaned by Pelikan Ltd.) with a 15-pl ink drop volume were filled with customized solutions.

A precondition for a correct calibration is the similarity of reference and aerosol specimen in terms of homogeneity (grain size and shape), chemical composition and concentration. This would imply using standards with a limited concentration range adapted to the concentration range of the samples. In contrast, a decreasing uncertainty of slope and intercept is obtained with more measurement points over a wider concentration range (Van Grieken, 2002). Therefore, the concentration range was chosen as a compromise between similarity to the sample and reduction of the uncertainty. In this range the relation between fluorescence intensity and mass per printed area is expected to behave linearly. Two techniques for obtaining several increasing coating densities of standard solutions on the substrate were tested: printing the same amount of solution one to five times, or printing five areas with different transparency (i.e. "color" saturation in the printer settings). Only the first technique yielded the requested linear increase in mass per coated area. For each of the five coatings, pieces of 9 cm2 from the same printing process were analyzed by ICP-OES. The linear relationship of fluorescence intensity (count rate normalized to photon flux and detector dead time) for the reference element Cd versus the obtained mass per analyzed area, ranging from 0.11 to 0.33 µg cm−2, is plotted in Fig. 6. The slope of the fitted linear curve of reference element count rate (Nst/tc) versus mass per area in µg cm−2 (Cst) is inserted into the calibration formula introduced below (Eq. 7).

When primary X-rays interact with material, scattering and secondary fluorescence excitation occur besides primary absorption (Giauque et al., 1979). This implies that the fluorescence intensity of the aerosol sample is not only based on the absolute concentration but can also depend on the total chemical composition (i.e. the matrix, seen as sample plus substrate). Owing to the thin sample layer and low mass density of the collected particles, the samples are irradiated completely, and concentrations are too low to cause secondary X-ray absorption by lighter elements (neglecting the effects of particle size). In the investigated ambient trace element determination, matrix effects were therefore neglected, as discussed already by Bukowiecki et al. (2008). Scattering leads to low peak-to-background ratios. To keep this scattering as low as possible, it is advisable to use the thinnest substrate possible. However, the thinner the substrate, the more difficult the handling of the film, so printing inhomogeneities can occur. The chosen substrates represent a compromise between a thin film and satisfactory printing results. Commercially available films of different thickness were tested: a 100-µm ink-jet transparency film (3M, CG3420) and a 100-µm PET ink-jet film (Folex, BG-32.5 RS plus), both coated, as well as a 50-µm PET film (Folex, X-131) and a 25-µm self-adherent polypropylene film, both uncoated. Some films contain interfering elements like Si (adhesive of the self-adherent film), S, Al and Ca, which are also commonly found in ambient aerosol samples. Coated films contain substances for optimal homogeneous drying, and the coating density increased linearly when the solution was printed several times on the same area. For the uncoated film, activation in a plasma chamber before applying the solution resulted in faster drying and better adhesion to the substrate. For this purpose, an evacuated plasma chamber was filled with oxygen molecules ionized by a high-frequency microwave.

Extensive tests of different substrates, solutions and printing processes led to three applicable reference samples, which are discussed in the following. First, a multielement solution (Merck standard IV, containing Ag, Al, B, Ba, Bi, Ca, Cd, Co, Cr, Cu, Fe, Ga, In, K, Li, Mg, Mn, Na, Ni, Pb, Sr, Tl, Zn, plus the single elements P, Rb, S, Sb, Se, Sn, Ti, Zr) was applied on the 100-µm PET film with a high-resolution printing process (1200 dpi). Next, a self-adherent 25-µm PP film was tested. Again, the Merck standard solution IV was the basis, and Rb and Se plasma standards were added (10 g l−1 in HNO3 solution) with equal concentration for each element. Through the addition of 0.5% of Triton X-100, a tenside that decreases surface tension, more homogeneous wetting and improved drying speed were achieved. The buffered solution had a pH of 2 as a compromise between the risk of corrosion of the cartridges caused by a low pH and elemental precipitation/separation caused by a high pH.
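A minimal sketch of the slope extraction for the absolute mass calibration described above (Fig. 6); the five count-rate/mass-per-area pairs are hypothetical placeholders spanning roughly the reported 0.11-0.33 µg cm−2 range.

```python
import numpy as np

# Hypothetical normalized Cd count rates vs. ICP-OES mass per area.
mass_per_area = np.array([0.11, 0.17, 0.22, 0.28, 0.33])   # ug/cm^2
count_rate = np.array([260.0, 405.0, 530.0, 668.0, 790.0])  # counts/s

slope, intercept = np.polyfit(mass_per_area, count_rate, 1)
print(f"slope (Nst/tc per Cst): {slope:.0f} counts s^-1 / (ug cm^-2), "
      f"intercept: {intercept:.0f}")
# The slope enters Eq. (7); a near-zero intercept supports linearity.
```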
Count rates of both reference materials (on the 100- and 25-µm films) are illustrated in Fig. 7, indicating a much more articulated response curve for the thinner PP film due to reduced scattering. Also, the high Sb peak caused by the 100-µm PET substrate vanished for the thinner film. For measurements of the 100-µm film, the beam exit slit size had to be reduced significantly to avoid saturation of the detector by high count rates. For the further analysis, these count rates were extrapolated according to the determined non-linear relationship between slit size and count rates due to dead time effects. The chosen standard solution led to satisfactory results for the calibration of heavier elements. However, to calibrate the lighter elements more precisely, a solution containing fewer elements was prepared, avoiding the interference of Lα lines with the Kα lines in focus. Again, clean printer cartridges were filled with a customized solution containing K, Ca and Ti standards (10 g l−1 in HNO3 solution), Si (1 g l−1 in HNO3/HF solution) and Al (1 g l−1 in HNO3 solution). Building on the good results obtained before, this solution was applied on the self-adherent 25-µm film. The resulting calibration curve is discussed in the next section (Figs. 8 and 9). This reference led to a satisfactory result for lighter elements and showed the high potential of customized calibration solutions for specialized purposes.

Relative calibration based on external standardization

In addition to the absolute mass calibration, a relative calibration is necessary because the fluorescence yield increases with increasing atomic number Z. The main reason is the Auger effect, a process competing with fluorescence in which the photon is absorbed within the atom and the released energy is emitted through an Auger electron. Absorption of the photon within the atom is most pronounced for lighter atoms and significantly limits the yield of secondary X-rays from the lighter elements. The fluorescence yield ω is the relative frequency of photon emission (in competition with the relative frequency of Auger electron emission χ; Bambynek et al., 1972; Burhop et al., 1955). An approximation of ω is given by

ω = Z^4 / (A + Z^4),   (5)

with a constant A = 9×10^5 for the K series and A = 7×10^7 for the L series. Since Auger electron and fluorescence photon emission are two complementary processes, ω + χ = 1. Because the calibration films (with similar substrate thickness and matrix compared to the samples) contain a series of elements with the same concentration, it is possible to determine the response curve experimentally and calibrate the count rates with a relative factor (Srel). The empirically determined Srel comprises all influences on the effective fluorescence intensity, such as the theoretical fluorescence yields, the mass attenuation caused by sample elements and the detector sensitivity.
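To make the steepness of Eq. (5) concrete, a small hedged illustration evaluating the K-series approximation for a few of the elements analyzed here; tabulated experimental yields differ somewhat, especially for the lightest elements.

```python
A_K = 9e5  # constant for the K series in Eq. (5)

for symbol, Z in [("Al", 13), ("Ca", 20), ("Fe", 26), ("Zn", 30), ("Cd", 48)]:
    omega = Z**4 / (A_K + Z**4)
    print(f"{symbol} (Z={Z}): omega_K ~ {omega:.3f}")
# Output rises from ~0.03 for Al to ~0.85 for Cd, illustrating why a
# relative calibration factor S_rel is needed across the element range.
```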
The absorbed photon fraction Pabs was calculated based on the source parameters of the DORIS III storage ring plus the 8-mm Al absorber inserted into the beam path (Sánchez del Río et al., 2004) and the mass attenuation coefficients of X-rays for chosen elements in the sample (Berger et al., 2007):

Pabs = Σ_E Φ0(E) [1 − exp(−µ(E) ρ xs)].   (6)

In this approximation for infinitely thin films, the contributions of absorption caused by the relevant polychromatic photon intensities Φ0 for typical elements were summed up. The other variables in the equation are the sample thickness xs, the total absorption mass attenuation coefficient µ and the material density ρ. The calculated absorbed photon fraction reflects the overall trend of the empirical curve. All three calculated curves were added to Fig. 8, which shows the empirical relative calibration curve of Kα lines for measurements at HASYLAB. A sigmoidal or exponential fit to the data points turned out to provide the best extrapolation of the relative curve for those sample elements that are not contained in the calibration solution. The energy where the detector efficiency drops to zero (Z0) is included as the lower limit of the fit. Since Cd was chosen as the reference element in the absolute mass calibration, the count rates for Srel in Fig. 8 are normalized to the count rate of Cd.

Figure 9 displays the relative calibration curve Srel obtained by analyzing the calibration reference customized for lighter elements with SR-XRF at the SLS. Here, a curve similar to the previous experiments was found, and through the fit to the data all elements in the sample can be analyzed.

Fig. 9. Relative calibration curve Srel of Kα lines for measurements at SLS: count rates are normalized to detector dead time and plotted relative to the count rate of Ti; blank values are always subtracted. An exponential fit was applied to the data and shows the extrapolated relative calibration curve Srel for all elements in the sample. The lower end of the detector efficiency is depicted by Z0 in the plot.

Coating homogeneity, reproducibility of the printed areas and stability of the printed surface are challenges in the printing process. Scanning electron microscope (SEM) images of printed calibration and blank films revealed a sufficiently good homogeneity of the applied droplets. The presented results show that customized calibration films for different experimental SR-XRF conditions can be produced with the ink-jet printer method.

Calibration formula

Spectra were fitted with the WinAxil software package (Canberra Inc.; Van Espen et al., 1986; Vekemans et al., 2004) using a least squares fitting algorithm with a Bremsstrahlung background. Although the Bremsstrahlung background originally stems from the description of electron induced X-rays, where retardation of the electrons is almost completely responsible for the continuum, this model reconstructed the background curvature best (visual inspection). Spectral counts obtained by WinAxil are calibrated with the absolute mass calibration factor Cst/(Nst/tc) and the relative calibration factor Srel. The ambient concentration C of one element is deduced from the fluorescence intensity Ns/tm by a calibration formula of the following form:

C = (Ns/tm) · (1/Srel) · Cst/(Nst/tc) · Ai/(tRDI · QRDI) · 1/(1 − td) · Im/ID,   (7)

with the RDI bar area (A10 = 15.2 mm2, A2.5 = 6.8 mm2 and A1 = 3.0 mm2), the total calibration film area analyzed by ICP-OES (Ac = 9 cm2), over which Cst was determined, the respective irradiation times for aerosol (tm) and calibration spectra (tc), the RDI sampling interval (tRDI), the RDI flow (QRDI), the dead time caused in the detector (td), the actual beam current (ID) and the maximum beam current directly after injection (Im). The last term, Im/ID, is only applied to measurements performed at HASYLAB, where the raw count rates have to be normalized to the photon flux. No correction is necessary for measurements at the SLS because of the constant beam current due to top-up injection. As mentioned before, this calibration technique is only applicable for references with a similar elemental matrix and similar film thickness.
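A hedged sketch implementing the calibration formula as reconstructed in Eq. (7) above; all numerical inputs besides the bar areas are illustrative placeholders, and the reconstruction itself is an assumption about the exact form of the original equation.

```python
def ambient_concentration(ns, tm, s_rel, slope, area_bar_mm2,
                          t_rdi_s, q_rdi_m3_s, dead_time=0.0,
                          i_m=1.0, i_d=1.0):
    """Eq. (7) as reconstructed above.

    ns/tm: sample counts per second; s_rel: relative calibration factor;
    slope: reference count rate per (ug/cm^2) from the Fig. 6 fit.
    Returns ug/m^3. The HASYLAB flux term i_m/i_d defaults to 1 (SLS case).
    """
    area_cm2 = area_bar_mm2 / 100.0
    area_conc = (ns / tm) / (s_rel * slope)   # ug/cm^2 in the beam spot
    mass_ug = area_conc * area_cm2            # extrapolated to the full bar
    volume_m3 = t_rdi_s * q_rdi_m3_s          # sampled air volume
    return mass_ug / volume_m3 / (1.0 - dead_time) * (i_m / i_d)

# Illustrative stage 1 example: 2-h sampling interval at 16.6 l/min.
c = ambient_concentration(ns=1.2e4, tm=25.0, s_rel=0.8, slope=2400.0,
                          area_bar_mm2=3.0, t_rdi_s=2 * 3600,
                          q_rdi_m3_s=16.6e-3 / 60, dead_time=0.1)
print(f"C ~ {c*1e3:.1f} ng/m^3")
```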
Measurement uncertainties were calculated by uncertainty propagation of the three terms in Eq. (7) that carry an uncertainty. The extrapolation from the rather small beam spot to the RDI bar area (Abeam ≈ 1% of the total area Ai) contributes 20% of the area to the total measurement uncertainty. Further uncertainty is introduced by possible slight variations in the RDI flow, which is estimated to contribute a relative uncertainty of 5%. The third term is the uncertainty of the linear regression used to calculate the absolute mass calibration factor. Minimal detection limits (MDL) were determined as a means for the qualitative evaluation of every individual data point. Only elements exceeding the MDL for > 50% of the values remained in the final data set. These detection limits are calculated as

MDL = 3 √B / S,   (8)

with B being the elemental continuum counts obtained by fitting a Bremsstrahlung background and S the elemental sensitivity (counts per unit concentration) obtained from the calibration. Thus, a longer counting time improves the MDL for a fixed setup.

Discussion and selected results

The segregation into three size ranges in the RDI enables a more detailed interpretation of the relative elemental composition, and quantitative ambient aerosol measurements can be obtained by applying the calibration for the SR-XRF method described in Sect. 3. To support and consolidate these two findings, exemplary results are presented here, while the full data set will be presented in an upcoming paper. The RDI was deployed in a field campaign at Zürich Kaserne, Switzerland, for time and size resolved trace element sampling from 28 November 2008 to 5 January 2009 (with a short break from 26 to 28 December) at a time resolution of 2 h. The relative elemental composition for the three size ranges is illustrated in Fig. 10. Note that for these pie charts the period from 31 December 2008 15:00 LT to 1 January 2009 05:00 LT is excluded, so as not to distort the picture through the unusually high emissions of some elements, e.g. S, K, Ti, Cu, Sr, Ba (and Sb), during the fireworks on New Year's Eve. Sulfur and potassium account for the highest contributions in the fine size range. Secondary sulfate and biomass combustion emissions are assumed to contribute primarily to the high sulfur concentration. Ammonium sulfate is formed by conversion of SO2 to sulfuric acid, either via heterogeneous reactions in droplets (with ozone, NO2, H2O2) or photochemically via OH radicals (Seinfeld, 1998), followed by neutralization by ammonia. Fine potassium also originates mainly from combustion processes. In the coarse size range, Cl (presumably from de-icing salt) and Fe contribute the highest amounts to the mass. This gain in information through the size resolution of the RDI significantly enhances the potential of source apportionment studies (Han et al., 2005; Ondov et al., 2006; Karanasiou et al., 2009). The trace element percentage of the average PM10 mass concentration during the campaign is obtained from the sum PM10−2.5 + PM2.5−1 + PM1−0.1, and similarly for PM2.5. The average mass contributed by all detected elements (Al, Si, P, S, Cl, K, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Zr, Mo, Cd, Sn, Sb, Ba and Pb, not extrapolated to the corresponding oxides) to the average PM10 mass concentration of 24.6 µg m−3, quasi-continuously monitored (every 10 min) by a tapered element oscillating microbalance (TEOM 8500), amounts to 4.6 µg m−3, i.e. about 20%.
The average mass concentration of PM2.5 was retrieved from daily filters of high-volume samplers (Digitel AG, Aerosol Sampler DHA-80) to be 21 µg m−3, and the detected elements summed up to 3 µg m−3, corresponding to about 15% of the total PM2.5 mass concentration. Again, the period during New Year's Eve is excluded; for more details see Table 1 in the Supplement.

In order to validate the obtained mass concentrations, results from the RDI measurements were compared to independent 24-h trace element filter data from the same campaign. Two high-volume samplers (Digitel AG, DHA-80) with PM1 and PM10 inlets were used to collect 24-h samples on 1, 3, 5, 7, 9, 11, 13, 15 and 17 December 2008. A fraction of the quartz micro-fiber filter was acid digested and subsequently analyzed by ICP-OES and ICP-MS for the determination of major and minor elements (see details of the method in Querol et al., 2008). A comparison of PM10 data from the filter and the RDI analysis for characteristic elements (S, K, Fe, Cu, Sn and Sb) is shown in Fig. 11. PM1 data did not reach the detection limit for a number of elements. Nonetheless, the averaged concentrations of elements detected by both methods on days of simultaneous measurements compared very well: the average concentration of the PM1 filter data was 0.64 µg m−3, while the average concentration of the PM1−0.1 RDI data summed up to 0.68 µg m−3 (this comparison includes the following elements: Al, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Zr, Cd, Sn, Sb, Ba and Pb). Average concentrations of the PM10 filter data amounted to 2.7 µg m−3, and the average concentration of the PM10−0.1 RDI data was 2.55 µg m−3, including the same elements as for PM1 as well as Cl and Mo. For more details see Table 1 in the Supplement. RDI data were binned into 24-h intervals by calculating the mean of 12 data points, and all three RDI stages were summed for the comparison to PM10. Time series for the chosen elements are visualized in Fig. 11. The measurement uncertainty bars in Fig. 11 are calculated as described above for the RDI data and propagated for the calculation of the mean value, while the uncertainty for the filter data was calculated following the method of Escrig et al. (2009), corresponding to a standard deviation σ in the order of 30-40%. Pearson correlation coefficients lie in the order of 0.9 if one outlier data point is excluded, see also Fig. 1 in the Supplement. Some days (e.g. 9 December) show a higher variability than others, which may be due to increased sample inhomogeneity or other unknown experimental issues. Sample inhomogeneity is a critical issue if only a fraction of the total sample area is analyzed using focused SR-XRF beams. The low number of sampled particles in the RDI (estimated to be around 10 000 particles per analyzed area for stage 1 and 100 particles per analyzed area for stage 10, see Bukowiecki et al., 2008, 2009) raises the question of sufficient uniformity of the deposited material. In both beamline setups the maximum possible beam sizes were chosen. Due to the constraints given by the beamline optics as well as the geometry of the sample holder, it was never possible to match the beam cross section exactly to the RDI bars.
In earlier work (Bukowiecki et al., 2009) specifically addressing the uniformity, a 2-D high-resolution scan was performed with a considerably smaller micro-focus beam (beam size 4 µm2, step width 7 µm) at the LUCIA beamline at SLS, PSI. There it could be seen that some elements deposit in distinct spots. For a 1-h stage 10 RDI sample of urban ambient air, an average Fe particle-to-particle distance of about 70 µm was estimated, corresponding to a particle area density of 213 Fe-containing particles mm−2. The FWHM of the area of Fe-containing particles on the film was found to be 1.4 mm, which is reasonably close to the nozzle width and thus the RDI bar width of 1.52 mm. However, no sharp edges were found. For stage 2.5, an Fe particle-to-particle distance of about 30-40 µm and an overall particle-to-particle distance of less than 2 µm were estimated. These values lie within the dimensions of the beam sizes (100 × 200 µm and 70 × 140 µm). Furthermore, TEM images taken in that same study show the uniformity of the sampled particles on the film. The total number of particles in the coarse size range was found to be around 800 particles mm−2 for a 1-h aerosol sample. The density of particles is lowest for the coarse size fraction and increasingly higher for the smaller size ranges. On the basis of this information, it is assumed that the chosen beam sizes for analysis at the SLS X05DA beamline and the HASYLAB L beamline allow for an extrapolation from the measured to the total sample area if an adequate uncertainty estimate is assigned to the values. As mentioned above, the RDI bar area contributes an error of ±20% to the uncertainty calculation. In the case of the filter analysis, outliers due to sample inhomogeneity are rather improbable, since the analyzed area amounts to 12.5% of the total sample area (1/8 of the filter) and is thus significantly larger than for the RDI analysis. Since inhomogeneities in the distribution of the material on the filters are a function of the distance to the center, portions of the filter were taken like pieces of a round pie to avoid this influence (Brown et al., 2009). But since 12 RDI values are averaged for comparison to one filter value, it is difficult to attribute the discrepancy to just one of the two methods. Also, correlations of RDI values versus filter values show no clear trend towards a general over- or under-estimation by the RDI analysis, and it is thus assumed that the deviations lie within the uncertainties of both methods and atmospheric variability. The gain in information obtained by a higher time resolution, which can lead to the identification of diurnal variations, is a major advantage of the RDI method. Ultimately, the overall comparison of 24-h concentration values shows reasonable agreement within the limits of both methods and therefore confirms the applicability of the presented calibration methodology.

Conclusions

The cutoff diameters of a rotating drum impactor were measured using quasi-monodisperse DEHS particles and polydisperse NaCl particles to consolidate previous studies carried out in laboratory room air. The results show good agreement with the theoretical values and confirm the validity of the earlier experiment (Bukowiecki et al., 2009). However, the cutoff sharpness of the analyzed impactor is broader than that found for other cascade impactors.
The developed calibration method linking SR-XRF count rates to ambient concentrations properly matched the requirements (similar elemental composition, particle size, sample homogeneity and substrate thickness for ambient aerosol analysis on PP films) and provided an external, reusable reference material for the presented studies. The aim of obtaining consistent data sets from different experimental sessions for subsequent source apportionment studies was reached. The procedure is advantageous over existing standards, as the elemental composition and the thickness of the substrate are comparable to the sample composition and the film used for sampling. However, when using a calibration film that does not contain sufficient elements in the region of interest, adaptation to additional data, such as filter values, may be necessary. Printing standard solutions on a simple transparency film with a conventional Compact Disc-label printer is an easy-to-handle approach for both relative and absolute mass calibration. Averaging a series of spectra leads to a satisfactory result, although the printing homogeneity might be improved in the future by using a professional large-scale printer. While the substrate film thickness was appropriate in the presented case, a professional printer with the possibility of fixing the film through electrostatic adhesion, or a vacuum sample holder, could allow the use of a thinner film with lower expected background effects. The comparison of highly time resolved and size-segregated RDI data to 24-h filter data shows generally good agreement, although outliers due to the variability in both methods were observed. Despite the need for access to a synchrotron facility to conduct SR-XRF analysis for high time resolution trace element data, the results demonstrate that the gain in information compared to conventional low time resolution wet-chemical evaluation justifies these efforts. High time resolution data are expected to enable a better identification of sources, because daily patterns such as peak-traffic hours and other day-night differences in anthropogenic activities can be observed. This will be exploited for the current data set to perform source apportionment with positive matrix factorization (PMF) in an upcoming paper.
2018-12-02T12:54:05.207Z
2010-10-20T00:00:00.000
{ "year": 2010, "sha1": "38887688b0c59caba9c471f37f9ea0ded265f64b", "oa_license": "CCBY", "oa_url": "https://amt.copernicus.org/articles/3/1473/2010/amt-3-1473-2010.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7f564b99a981dc50d20b0c4abdb595e206964008", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Chemistry" ] }
18891483
pes2o/s2orc
v3-fos-license
Conceptual and institutional gaps: understanding how the WHO can become a more effective cross-sectoral collaborator

Background: Two themes consistently emerge from the broad range of academics, policymakers and opinion leaders who have proposed changes to the World Health Organization (WHO): that reform efforts are too slow, and that they do too little to strengthen WHO's capacity to facilitate cross-sectoral collaboration. This study seeks to identify possible explanations for the challenges WHO faces in addressing the broader determinants of health, and the potential opportunities for working across sectors.

Methods: This qualitative study used a mixed methods approach of semi-structured interviews and document review. Five interviewees were selected by stratified purposive sampling within a sampling frame of approximately 45 potential interviewees, and a targeted document review was conducted. All interviewees were senior WHO staff at the department director level or above. Thematic analysis was used to analyze data from interview transcripts, field notes, and the document review, and the data coded during the analysis were analyzed against three central research questions. First, how does WHO conceptualize its mandate in global health? Second, what are the barriers and enablers to enhancing cross-sectoral collaboration between WHO and other intergovernmental organizations? Third, how do the dominant conceptual frames and the identified barriers and enablers to cross-sectoral collaboration interact?

Results: Analysis of the interviews and documents revealed three main themes: 1) WHO's role must evolve to meet the global challenges and societal changes of the 21st century; 2) WHO's cross-sectoral engagement is hampered internally by a dominant biomedical view of health, and by the prevailing institutions and incentives that entrench this view; and 3) WHO's cross-sectoral engagement is hampered externally by the siloed areas of focus of each intergovernmental organization, and by the lack of adequate conceptual frameworks and institutional mechanisms to facilitate engagement across siloes.

Conclusion: A number of external and internal pressures on WHO have created an organizational culture and operational structure that favor a narrow, technical approach to global health, prioritizing disease-based, siloed interventions over more complex approaches that span sectors. The broader approach to promoting human health and wellbeing conceptualized in WHO's constitution requires cultural and institutional changes to be fully implemented.

Electronic supplementary material: The online version of this article (doi:10.1186/s12992-015-0128-6) contains supplementary material, which is available to authorized users.

Background

As the world grows increasingly interdependent, the importance of global governance to advancing public health has intensified and become more complex. Among the many different definitions of global governance, a well-known and comprehensive one is that of Weiss and Thakur [1], who describe global governance as "the complex of formal and informal institutions, mechanisms, relationships, and processes between and among states, markets, citizens, and organisations, both intergovernmental and non-governmental, through which collective interests on the global plane are articulated, rights and obligations are established, and differences are mediated".
The complexity in global health is exacerbated by the fact that decisions made outside of the health sector have profound impacts on global health. The interaction between trade liberalization and the global rise of non-communicable diseases [2], the public health effects of climate change [3], the health impacts of global migration [4], and the impact of social determinants on individual health [5] all illustrate the broader determinants of health that must be addressed beyond the confines of the health sector. This realization has prompted the development of concepts such as "global health diplomacy" [6] and "global governance for health" [7], which describe why and how global governance systems outside the global health system should protect and promote people's health. This study, building on previously described definitions [8][9][10], distinguishes between "global governance for health" and "global health governance". Global health governance mainly refers to the collaboration between and the coordination of international institutions, bilateral aid agencies, non-governmental organizations, philanthropic organizations and public-private partnerships (such as the Global Alliance for Vaccination and Immunization and the Global Fund to Fight AIDS, Tuberculosis and Malaria) whose processes and activities primarily aim to improve global health. In comparison, global governance for health is about "institutions and processes of global governance which do not necessarily have explicit health mandates, but have a direct and indirect health impact" [8], and how these global institutions and processes can better work to improve global health. In this context, the World Health Organization (WHO), with its mandate as the directing and coordinating authority for health within the United Nations system [11], is the natural starting point for efforts to strengthen global governance for health.

The WHO has undergone multiple reform attempts, with the most recent process starting in 2010 and continuing to this day [12,13]. One important aspect of WHO's reform agenda is moving the organization beyond its traditional technical focus to a more proactive role in which it more effectively addresses the broader determinants of health through cross-sectoral collaboration. However, over the years, two themes have consistently emerged from the broad range of academics, policymakers and opinion leaders who have analyzed WHO reforms: that the reforms are too slow, and that they do too little to strengthen WHO's capacity to facilitate cross-sectoral collaboration [14][15][16]. To inform this part of WHO's reform agenda, there is a need for a clear understanding of how the organization perceives and interacts with other sectors at the intergovernmental level. The main challenges of translating theoretical frameworks into real-world cross-sectoral collaboration need to be identified and well understood. This qualitative mixed methods study aims to identify and explain the challenges WHO faces in addressing the broader determinants of health, and the potential opportunities for enhancing cross-sectoral collaboration. Specifically, it investigates how WHO conceptualizes its mandate on global health, what factors enable or impede WHO's efforts to engage with intergovernmental organizations (IGOs) across sectors, and how the dominant conceptual frame interacts with these factors.
Methods
This qualitative study used a mixed methods approach of semi-structured interviews and document review to explore three research questions:
1. How does WHO conceptualize its mandate on global health?
2. What are the barriers and enablers to enhancing cross-sectoral collaboration between WHO and other IGOs?
3. How do the dominant conceptual frames and the identified barriers and enablers to cross-sectoral collaboration interact?

Semi-structured interviews
Sampling methodology for interviewee selection sought to balance the need for data saturation with feasibility. Five interviewees were selected by stratified purposive sampling within a sampling frame of 45 potential interviewees at the level of WHO department director or above. General management, partnerships and regional office staff were excluded. No interviewees declined to participate, and informed consent was collected prior to conducting the interviews. The interviews were conducted between October 2012 and February 2013. Prior to collecting the data, approval was received from the Data Protection Official for Research under the Norwegian Social Science Data Services (project no. 31093). An interview schedule (Table 1) was developed for this study and used for the semi-structured interviews. Relative to the three research questions, the questions in the interview schedule were phrased as more open-ended topics, with the aim of exploring a broader range of issues and avoiding leading questions. All interviews were recorded, transcribed, and then anonymized to enable interviewees to express the fullest range of views and opinions without constraint. Field notes taken by the interviewers were similarly anonymized.

Document review
A document review was conducted to triangulate information and provide a point of reference for claims made during the interviews. A targeted review of strategic plans, financial statements, internal reform documents and external organization evaluations was conducted (see Additional file 1 for a complete list of documents reviewed). Information was extracted from these documents using a data collection matrix designed to identify the dominant frames used to conceptualize global health, existing forms of collaboration with other sectors, and to corroborate the themes that emerged from the interviews (see Additional file 2 for the matrix). The development of the data collection matrix was informed by a literature review on framing in global health [17][18][19][20][21][22][23][24][25]. Frames used in the various papers were compared, and overlapping frames were defined under a common concept. The final data collection matrix contained eight frames (Table 2).

Thematic analysis
The interview transcripts, field notes, and public documents were qualitatively analyzed in NVivo 10 using thematic analysis as described by Yin [26]. Thematic analysis organizes and encodes the data according to themes emerging from the data set. Identified themes capture important aspects of the data in relation to the research question and facilitate the interpretation of the data set [26,27]. Specifically, this study used free coding, iterative categorization of text fragments, and reciprocal translational analysis from meta-ethnography, integrating these techniques with grounded theory's inductive approach and constant comparison method [28,29].
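The frame matrix was applied manually by coders in NVivo rather than computationally, but a toy sketch can make the idea of a frame-count matrix concrete. The keyword lists below are invented for illustration (the study's actual matrix is in Additional file 2), and only a subset of the eight frames is shown:

```python
from collections import Counter

# Hypothetical keyword lists standing in for a few of the study's eight
# frames; the real matrix (Additional file 2) was applied by human coders.
FRAME_KEYWORDS = {
    "biomedicine": ["disease", "treatment", "diagnosis"],
    "security": ["outbreak", "security", "emergency"],
    "human right": ["human right", "equity"],
    "trade": ["trade", "commodity"],
}

def count_frames(text: str) -> Counter:
    """Count keyword hits per frame in one document (case-insensitive)."""
    lowered = text.lower()
    return Counter({frame: sum(lowered.count(kw) for kw in kws)
                    for frame, kws in FRAME_KEYWORDS.items()})

sample = ("WHO coordinates the international response to disease outbreaks, "
          "framing health security as central to its mandate.")
print(count_frames(sample).most_common())
# e.g. [('security', 2), ('biomedicine', 1), ('human right', 0), ('trade', 0)]
```

A real document review would, of course, code passages in context rather than count keywords; the sketch only illustrates how a per-document frame tally can be tabulated and compared across a document set.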
To improve inter-rater reliability, both the analysis of semi-structured interviews and the review of WHO's public documentation were performed by two independent investigators, who then compared their coding and discussed any reasons for variation. Interviews continued until data saturation was reached, as determined by two investigators who discussed the themes emerging from the interviews and whether new themes that addressed the research questions were still emerging. The data coded during the thematic analysis was analyzed against the three research questions.

Table 1 Interview schedule (excerpt): Advancing Global Health through Global Governance
3. What steps are taken by your organization to ensure that health is protected and promoted within its deliberations, policies and activities? Where and when in the planning, deliberation, and implementation processes of your organization is health considered?
4. What are the major barriers to collaboration between your organization and a global governance institution from a different sector, towards the advancement of global health?
5. What are the major enablers to collaboration between your organization and a global governance institution from a different sector, towards the advancement of global health?
6. Given the major barriers and enablers, and the conceptual (2) and procedural (3) contexts discussed, are there any proposals or solutions you would like to see?

Table 2 Global health frames included in the data collection matrix for the document review
• Global health as biomedicine
• Global health as a commodity/trade issue
• Global health as foreign policy
• Global health as a global public good
• Global health as a human right
• Global health as investing in economic growth
• Global health as a means to reduce poverty
• Global health as security

Results
Three major themes emerged from the interviews of senior WHO officials and the document review. The first theme is that WHO's role must evolve to meet global challenges and societal changes. The second and third themes concern the barriers to cross-sectoral collaboration. Common pressures experienced by organizations are often divided into factors external or internal to the organization, and the analysis found that this division also applies to the barriers faced by the WHO. The second major theme is that WHO's cross-sectoral engagement is hampered internally by the dominant biomedical view of health, and the prevailing institutions and incentives that entrench this view. The third major theme is that WHO's cross-sectoral engagement is hampered externally by siloed areas of focus for each IGO, and the lack of adequate conceptual frameworks and institutional mechanisms to facilitate engagement across siloes.

WHO's role must evolve to meet the global challenges and societal changes of the 21st century
It was expressed that "WHO's role as a convenor around global health issues" has undergone "a significant change", and that now, "increasingly, everybody recognizes that health is part of a nexus of policy issues which affect trade, IP, the environment and etc.…" (Interviewee 3). As suggested by another interviewee below, there is an increasing need to consider the activities of a range of other sectors. It was furthermore raised that discussions on global health increasingly take the form of "political negotiations, rather than technical experts getting together" (Interviewee 3), and that health "is not any more negotiated by health officials mainly" (Interviewee 2).
Overall, it was broadly agreed that addressing the various determinants of health requires the inclusion of a broader range of actors within a more complicated policy framework.

WHO's cross-sectoral engagement is hampered internally by the dominant biomedical view of health, and the prevailing institutions and incentives that entrench this view
The analysis of the interviews suggested that the operations of WHO and the global health system at large are characterized by a narrow, technical approach focusing on how the health sector can deal with diseases. Analysis of WHO's strategic documents reiterated the dominance of this way of viewing global health challenges through a biomedical frame. We also identified three secondary frames that provide additional justification for WHO's programmatic activities: global health as a human right, global health as a security issue, and global health as a means to reduce poverty (see Table 3). Finally, the strategic documents emphasized at different points the importance of social determinants of health, indicating the organization's desire to view global health more broadly (Table 4).

Two interviewees emphasized the importance of cross-sectoral engagement and negotiation in pursuit of public health benefits, and contrasted the biomedical frame with the importance of understanding the perspective of other sectors.

"Well, I think that it's important to listen to what the other sectors have to say and not try to apply pre-cooked recipes, and then you have to see what flexibility there is. You cannot pretend that you will have ideal situations, sometimes you have to find compromise and the compromise needs to be good enough, so it's a question of negotiation." Interviewee 1

"[Lack of] capacity building in negotiation [is a barrier]. Or perhaps even, and that's linked to attitude, realising that you need that capacity. You can't take just a prescription and write on it, and [expect] the others are going to do [it]. Many health actors unfortunately are still in that illusion." Interviewee 2

Table 3 Use of global health frames in WHO's strategic documents
Global health as biomedicine: The biomedical frame appears to be the dominant frame throughout almost all of WHO's documents, and presents a disease-based conceptualisation of global health, with a focus on interventions within the health sector to reduce the burden of specific diseases. Often, the structure of priorities or budget items is based almost exclusively on a biomedical frame, which extends to the very organization of WHO's departments, which are dominantly structured according to specific disease groups.
Global health as a human rights issue: Health as a human right features prominently in the WHO constitution [11], which states that "the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition". Global health as a human right appears to be used as an additional justification for programmatic activities, almost always supplementing the biomedical frame. However, the programme budget for 2014-15 allocates very limited resources to the programme area on human rights [45].
Global health as a security issue: Global health as a security issue is a central and recurrent frame throughout many of WHO's public documents, with an emphasis on WHO's role in mitigating and coordinating international responses to disease outbreaks. This frame appears to be of particular importance to re-asserting the uniqueness and added value of WHO. However, it is notable that the security frame is less utilised in the GPW12, apart from references made to the International Health Regulations, and that the programme budget 2014-15 for outbreak and crisis response was cut by 51.4 % compared to the level in 2012-13 [45].
Global health as a means to reduce poverty: In the GPW11, eradicating extreme poverty is mentioned together with eradicating hunger as "the first and most important Millennium Development Goal". While poverty reduction's role in improving public health is acknowledged, the programme documents primarily frame the issue by discussing the role of health policies in contributing to poverty reduction. Indeed, 'Investing in health to reduce poverty' was one of the seven priorities for GPW11. The concept of 'poverty reduction' appears largely connected to the broader discussion on 'sustainable development' in the GPW12.
Rarely used frames: Global health as a global public good, global health as investing in economic growth, global health as foreign policy and global health as a commodity/trade issue were rarely used frames in the reviewed strategic documents.

Interviewees noted that the health sector (including WHO) is often unable to speak effectively in the language and with the perspectives of other sectors, lacking the political savvy, communication skills, and relevant evidence to communicate the importance of health for the priorities and interests of other actors.

"The problem is that the health sector is very strong in convincing itself that other sectors should do something. And it is very weak in speaking the language of the other sectors… we work a lot on evidence, but as I said earlier, we usually frame it in a way to convince the health sector people, we don't gather enough the evidence that is necessary to convince prime ministers, finance ministers, foreign ministers." Interviewee 2

"You need to gear your stuff to the audience that you're talking about. And again we're not necessarily that adept at doing that. [There are] colleagues who kind of say, 'We must do this purely from a public health point of view; they must realise that what we're saying is important.'" Interviewee 3

Table 4 Examples of statements in strategic documents describing WHO's role and barriers to addressing the broader determinants of health
Medium-term strategic plan 2008-2013:
"Although essential for achieving lasting health improvements across populations, the underlying determinants of health have received relatively little attention at WHO, necessitating a substantial increase from the baseline."
"lack of effective consensus among partners, including organizations of the United Nations system, other international bodies and nongovernmental organizations on policies and framework for action; insufficient investment by national governments for building and deploying adequate skills to ensure that tools to analyse human rights, ethical, economic, gender and poverty aspects are widely and effectively implemented."
"The health sector is only poorly able to influence policies in other sectors to promote occupational and environmental health and lacks the tools, knowledge and skills to engage other sectors."
"Health systems are on the whole not even identifying the environmental determinants of health as part of their remit, let alone as a priority for improving public health. The few existing data indicate that only about 2 % of a typical national health budget is currently invested in preventive health strategies. Clearly, health institutions face both the challenge of controlling health costs and the opportunity to do so through more effective environmental health strategies and interventions."
"The mandate for WHO's action in this area is firmly anchored in the Constitution and the history of public health practice and achievements. In the framework of United Nations reform, WHO has an opportunity to show a more global leadership in public health and the environment, linking health explicitly to the goals of sustainable development."
Eleventh General Programme of Work 2006-2015:
"Many of the determinants of health are outside WHO's direct sphere of influence, but WHO will work with ministries of health to build their understanding of what can realistically be done by working with other sectors. WHO will monitor global trends that are of significance to health in areas such as trade and agriculture, and keep ministries of health informed."
"More research is required for a better understanding of the links between determinants and their consequences, and for how governments, in particular ministries of health, can best influence other government sectors."
"At the international level, governments will need to engage effectively with negotiated agreements such as TRIPS and the General Agreement on Trade in Services, given their increasing importance for health goods and services. Engagement with industry in general, covering areas such as food, pharmaceuticals and insurance, should continue, focusing on commonly agreed public health agendas. WHO has a responsibility to keep governments informed and engaged in the process."
"The action required to tackle most of these determinants goes beyond the influence of ministries of health, and involves a large number of government and commercial responsibilities. If these determinants are to be dealt with effectively, therefore, the boundaries of public health action have to change. Governments, especially health ministries, must play a bigger role in formulating public policies to improve health, through collective action across many sectors. It is the responsibility of WHO to keep governments informed of the situation, raise awareness, and advocate policies to tackle the determinants when opportunities arise."
Twelfth General Programme of Work 2014-2019:
"The concept of social determinants of health constitutes an approach and a way of thinking about health that requires explicit recognition of the wide range of social, economic and other determinants associated with ill health, as well as with inequitable health outcomes. Its purpose is to improve health outcomes and increase healthy life expectancy. The wider application of this approach, in line with the title of the Twelfth General Programme of Work and in a range of different domains across the whole of WHO, is therefore a leadership priority for the next six years in its own right."
"As a public health agency, WHO continues to be concerned not only with the purely medical aspects of illness, but with the determinants of ill health and the promotion of health as a positive outcome of policies in other sectors"
"One consequence of the growing political interest in health and the recognition of the connection between health and many other areas of social and economic policy, is a growing demand for intergovernmental, rather than purely technical processes, in order to reach durable and inclusive agreements. In the general programme of work it is foreseen that this demand is unlikely to decrease. As a consequence, WHO will put in place the requisite capacities to prepare for meetings, brief participants and manage these processes as effectively as possible."

The analysis of interviews indicates that WHO has ambitions to engage across sectors. However, staffing, organisational structure and financing factors serve to entrench the biomedical view of health despite widespread awareness of the linkages between health and its social, environmental and political determinants. Firstly, it was noted that the current staffing of the organization, dominated by medical specialists, reinforces the dominant biomedical frame. The last report released by WHO on its staffing indicated that 47.8 % of WHO staff had a health professional background, of which 90.7 % were medical specialists, with 49.8 % of these being public health specialists. In comparison, only 0.1 % were economists and 1.6 % were lawyers and social scientists [15,30].

"I would say, you know, we are an organization, particularly at headquarters, that was set up to do technical work and we hire specialists. And we hire people with the right shaped heads to do detailed, technical work, but not with the right shaped heads to do policy and negotiation. I think that's beginning to shift, but it's a barrier" Interviewee 3

The second barrier is the structure of the organization and the resultant internal tensions. The specialized nature of WHO's programmes and departments, and their often independent responsibility in fundraising and in implementing their mandate, leads to an overly narrow focus on specific health issues, priorities and interests. To this end, a range of conceptualizations of health exist between the different components, each of which seeks to improve public health in very different ways.

"I think one of WHO's strengths, in a sense, lies much more in its programmatic structure and that dominates very much. We're good at TB, we're good at malaria, we're good at health systems to some extent, we're okay with all of that stuff. Where we are weak, if those are the pillars, the roof is very thin. The parts of the organisation that deal with health as a broader issue, that can have the breadth to look at a range of health issues, are few and far between…We somehow aren't as good as we could be in terms of being greater than the sum of our parts." Interviewee 3

"…as you go towards the more technical areas, then of course people who are working on diabetes are going to see things a little bit differently from people working on ebola…I think depending on the office that you're operating in, what the predominant factors or pressures are, are going to differ a little bit.
So there's inevitably going to be a lot of different perspectives and views about what's most important, what is the most pressing, but I think when you put it all together, probably the biggest thing is that there will be a fair amount of consistency in terms of what we are struggling with. I think the real difficulties are going to be over, what do you do about them?" Interviewee 5

It was argued that achieving alignment among different parts of the organization prior to engagement with other actors required negotiations between the various interests and perspectives of each department. Ultimately, this means that WHO often expresses views that are of the lowest common denominator (opinions that everyone can agree with) and is unable to prioritize between issues. As expressed by one interviewee:

"Trying to get a core script that people would be okay with, speaking in the language of the G8 rather than speaking in the language of having to cover every single WHO department was enormously hard. And usually you're at greater risk, like in most wars, of being shot by your own side, of not including somebody's pet priority, however irrelevant it might be." Interviewee 3

A third reason for WHO remaining within a biomedical-oriented frame appears to be that the current financing of the organization does not incentivize WHO to staff and structure itself to engage with other sectors, and that member states are essentially compelling the agency to limit itself to being a technical agency through their financing power over individual programs focusing on diseases and specific health issues. It was expressed that member states are currently providing WHO with insufficient financial support for the organization to take on a coordinating role in global health.

"One of the barriers is also financial capacity…core funding has been capped since the 1980s and has therefore in real terms diminished. So at the same time, there is an expectation that we will do things [coordination], but we are actually not funded to do it… It's difficult to coordinate without the funding for it." Interviewee 1

"If WHO's future is as a highly technical normative agency producing standards etc., then you need people with the in-depth specialist expertise to do that well. If WHO is going to be a political actor in the interests of health in other sectors, in international fora, you need people to do that. If it's going to be a development player, you need to have people who can handle that stuff, but particularly at the developing country level. And at the minute, we talk about all 3, but at headquarters we're still very much in Mode 1 because that's where the money comes from, that's what people want us to do." Interviewee 3

It was explained that there is a split within the organization as well as a split in views among member states on whether to stay within the technical space or to aspire to a more proactive, political role where health is also advanced through intergovernmental negotiations.

"The question however becomes whether that trend of increasingly moving from technical strategies into intergovernmental negotiation continues and whether we put in place the capacity to manage it better or in effect devote more resources to it. And here I think within the organisation views are split.
There are some that kind of [miss] the old days when it used to be a completely technical organization, and there are others that realised that this change is inevitable, and we're one of the only organisations that can really put ourselves at the centre of these negotiations and we should therefore be quite happy to do more of them. And that split is reflected in member states as well." Interviewee 3

It was suggested that even though ongoing WHO reforms mostly deal with managerial issues, they provide an opportunity for the organization to articulate how it could more effectively communicate with and engage other sectors to achieve improved public health outcomes. However, instead of expanding its mandate, the organization had largely limited its operational focus to the health sector:

"Last year, we defined these six categories [for organizing WHO's future work]. If you want my personal opinion, we missed a good opportunity to better express the importance of influencing other sectors…You will get something which is very medicalised and very health-care oriented, treating diseases, non-communicable, communicable diseases, life course, emergencies and health systems. Everything is very much around health care." Interviewee 4

"…we need to convince our own peers as well, that universal health coverage is a beautiful thing, but that it is a health-related goal, where the health sector is responsible for the achievement. I will be very pleased if there are responsibilities all over the place [in the post-2015 agenda] for achieving health instead of just one." Interviewee 4

This tension is reflected in WHO's General Programme of Work. The previous programme of work (the 11th General Programme of Work (GPW11) [31] and the Medium-Term Strategic Plan 2009-2013 [32]), despite recognizing WHO's inability to adequately address the broader determinants of health, only vaguely discussed how WHO should engage in cross-sectoral collaboration, largely limiting the organization to informing governments rather than demonstrating leadership (Table 4). There is however a visible shift in language between the 11th and 12th General Programmes of Work (GPW), with the latter more explicitly recognizing that the organization needs to put in place greater capacity to manage inter-governmental processes as well as promote "health in a range of intergovernmental forums (foreign policy, trade negotiations, human rights, climate change agreements, and others) that do not have health as their prime concern, but whose decisions can have impact on health outcomes" [33]. However, it is noted that the largest emphasis in GPW12 is on universal health coverage as WHO's most ambitious contribution (an issue that falls squarely within the health sector), with limited discussion about cross-cutting, underlying issues where sectors outside the health sector may participate in improving public health.

WHO's cross-sectoral engagement is hampered externally by siloed areas of focus for each IGO, and the lack of adequate conceptual frameworks and institutional mechanisms to facilitate engagement across siloes
While a broader approach to health requires collaboration and shared responsibility, interviewees recognized an inherent barrier in the way IGOs currently interact between sectors. Broadening WHO's mandate to include and share leadership roles with other sectors was thought to risk diluting its mandate, potentially resulting in resources being shared or diverted away to other non-health actors.
Similarly, two interviewees noted that WHO's involvement in other sectors may be resisted by other IGOs that want to control their policymaking domains:

"…to be very honest, I think [barriers to collaboration between IGOs] are mandate, power, budget and fear. Fear in the sense that, well, 'I don't want to give [away] part of my organization's mandate or power.'" Interviewee 4

"Another level of issue of course is, who has the mandate to do something in an area? And particularly difficult is if it appears that you have overlapping mandates, so you have different organizations saying, 'Well, we should be doing that.' And another organisation saying exactly the same thing. Then do you both repeat and do duplicative work, or do you try to reconcile somehow?" Interviewee 5

Despite expressing that the importance of cross-sectoral engagement has long been recognized, one interviewee argued that WHO, and the global health system more broadly, still mainly viewed health as an issue for the health sector.

"Unfortunately, the major actors in global governance might still feel when you talk about global governance for health, that this is the business of health actors. This is one of the major mistakes, in my opinion, to consider that health is the business of the health sector exclusively. This looks very old fashioned today, after years of talking about health in all policies and inter-sectoral work. I'm afraid it is one of the main mistakes, considering health to be about the health sector, about hospitals and health care." Interviewee 4

The confluence of biomedical framing within the health sector and the siloing of actors across sectors appeared to have implications for the type of collaborative partnerships that deal with health in the global governance system. Here, the H8 group was used as an example of an initiative which has evolved around a narrowly conceived mandate.

"If you look at all of them [the members of the H8 group], they are very much health-related. They consider themselves as very much the core of the health business... They think that they are doing a lot in terms of primary prevention because they go as far as vaccination. With all due respect, the H8 is a very important and influential group, but the mistake is to keep the boundaries on that group around health-specific issues, with health actors and ministers of health essentially as the main interlocutors….So I think this is the major issue of the moment, that health remains very much around health issues, in the old fashioned way of considering health of people." Interviewee 4

The recent UN reform effort ("Delivering as One") was presented as a way of dealing with multiple mandates, but it was felt that in reality, it only added to the complexity, ultimately encouraging individual UN agencies to maximize their access to resources.

"It (UN Delivering as One) is sort of pious words and an awful lot of process, whilst everybody is elbowing others out of the way to get hold of the resources." Interviewee 3

"If you would allow me to present a negative example. The UN decided to look at 'Delivering as One', as part of its strategic thinking. Instead of having so many agencies, it decided to have a culture of 'delivering as one'. What has happened is now we have all the specialist agencies, plus one called 'UN Delivering as One'. This is the proof that we already have failed." Interviewee 4
A number of examples of previous and existing collaborations between WHO and other IGOs and non-health groups were identified through the document review, demonstrating that WHO does indeed seek engagement with other sectors (Table 5). It was beyond the scope of this study to evaluate the effectiveness of the inter-institutional mechanisms that have been implemented, and the cross-sectoral work performed by the WHO. However, three important features were noted. One was that the agreements between WHO and other IGOs appeared, where possible, to demarcate areas of primary responsibility. Second, these collaborations appear not to have resulted in formal platforms or institutional mechanisms for cross-sectoral dialogue, but rather relied on the governing bodies and formal meetings of each respective organization. Finally, the annual updates on global collaboration with organizations in other sectors appeared to indicate a reduction in the number of collaborative activities during the latter part of the recent decade. However, the review of WHO's annual updates to the governing bodies gives an incomplete picture, since individual WHO departments also pursue cross-sectoral collaborations beyond those formally reported to the governing bodies.

A longstanding collaboration between the WHO and the FAO is the Codex Alimentarius Commission, which issues standards and codes of practice related to foods, food production, and food safety, and which is recognized by the World Trade Organization (WTO) as an international reference point for the resolution of disputes concerning food safety and consumer protection. Other examples of cross-sectoral collaboration include the tri-lateral cooperation on intellectual property and public health between WHO, the World Intellectual Property Organization and the WTO [34], the recently established WMO-WHO joint office for climate and health [35], and the planned collaboration between the WHO, the UNDP, the World Bank and other IGOs on the prevention and control of non-communicable diseases [36,37]. The WHO has also, over the past 15 years, at different times attempted to indirectly engage other sectors in the global health agenda by establishing commissions addressing specific issues which require cross-sectoral collaboration, namely the Commission on Macroeconomics and Health [38], the Commission on Intellectual Property Rights, Innovation and Public Health [39], the Commission on Social Determinants of Health [5], and more recently the Commission on Ending Childhood Obesity [40], which at the time of writing had yet to release its final report.

Table 5 Examples of collaboration between WHO and organizations in other sectors (excerpt; some row labels were lost in extraction)
World Trade Organization (WTO)
• WTO contributed with research to a report prepared for the WHO on the links between tobacco consumption and trade liberalization in 2001 [47]
• Joint WHO/WTO study on the implications of international trade and multilateral trade agreements for health systems and health service provision [48]
International Labour Organization (ILO)
1998 • Collaboration sought within the framework of the WHO Global Strategy for Occupational Health for All, to increase priority to occupational health and safety on national and international agendas
2000 • Agreed in 1999 to establish an inter-secretariat working group in order to promote cooperation in areas such as poverty alleviation, gender issues in workers' health, prevention and control of HIV/AIDS among workers, and health financing and health insurance coverage for workers.
2010 • WHO and ILO were designated lead agencies for a UN system initiative on social protection (one of nine initiatives to help member states respond to the economic crisis)
International
• The World Bank, as part of its initiative to accelerate progress towards the health-related goals, convened organizations in the United Nations system, including WHO, and donors to examine approaches to scaling up activities. WHO's main role was reported to be addressing cross-cutting issues influencing achievement of goals, such as those related to human resources, governance and human rights.
2004 • WHO collaborated with the "anchor unit" of the World Bank's Human Development Network to promote deworming activities in the FRESH Start initiative (Focusing Resources on Effective School Health).
• Technical consultations conducted to streamline collaboration, covering areas such as radiotherapy, diagnostic procedures, molecular biology, communicable diseases, food safety and nutrition, and health-related aspects of radiation protection.
2003 • Started collaboration on building human resource and institutional capacity for the application of telecommunications to the maintenance of nuclear medicine equipment in developing countries.
World Organisation for Animal Health (OIE)
2007 • The Global Early Warning and Response System launched by the FAO, OIE and the WHO, as the first joint early warning and response system for animal diseases, including zoonoses, to enhance global capacity to detect and control diseases of animal origin at their source.
UNCTAD
1998 • Explored the issue of trade in health services, and issued a joint publication which examined trade and health implications, especially from the developing country perspective [51].
• Collaborated on building up country capacity to analyse and respond to the effects of globalization and trade on health, and on a framework for integrating health protection into UNCTAD's plan of action.

Several interviewees suggested that very few effective institutional mechanisms exist for improving the interaction between IGOs of different sectors. A number of policy frameworks seek to frame health broadly and integrate a concern for health within other sectors. One such prominent platform is the social determinants of health framework, which provides a foundation for understanding the relationship between individual health and broader social and environmental conditions. The 'Health in All Policies' approach is a similar concept, which aligns naturally with this framework. However, some interviewees believed that these approaches, despite being talked about for many years, have largely remained 'health-centric' and have been insufficient for engaging other sectors.

Expanding an IGO's operations and achieving coherence with other sectors can be trying, as it brings with it a number of complex and poorly understood organizational management challenges. As suggested by one interviewee, these challenges should be recognized as common to all UN agencies and government departments, and not unique to WHO.

"So WHO is a complexity in itself, which struggles like any other organisation, not more than others, but like any other organisation with policy coherence. It's the same question if you ask, what's a country's opinion on this, and you ask the same question to different ministries, the normal is that you get different answers." Interviewee 2
Discussion
The findings from this study suggest that WHO's operations currently centre upon a narrow, biomedical view of health rather than a view which incorporates the broader determinants of human wellbeing. There is a shared understanding that the dominance of the biomedical approach is an impediment to WHO playing an effective cross-sectoral coordinating role for health. The interviewees and documents reviewed suggest a number of reasons for this approach, which can be divided into internal and external pressures. The language used both by interviewees and in WHO's public documents portrays the sense of a constrained organization which is forced to discuss health as though it is a series of discrete issues which can be dealt with independently, even though the organization's leadership is aware that the reality is far more nuanced. The biomedical approach is reflected in WHO's programme of work, and underpinned by budgetary allocations which are primarily structured around discrete disease and treatment groups.

Importantly, this study suggests that external political pressures and the lack of financial flexibility afforded by member states constrain WHO, and contribute to maintaining a narrow technical focus which impedes cross-sectoral collaboration. A number of global health experts have drawn attention to the state of WHO's budget, 80 % of which consists of voluntary contributions earmarked for a specific purpose, and to how this lack of discretionary budget constrains the organization's ability to implement the programme of work approved by the World Health Assembly [16, 41-44]. The programme budget for 2014-15 allocates more resources to specific disease-based efforts than to more broadly defined programmatic areas addressing the social and environmental determinants of health ($841 million allocated to communicable diseases and $318 million to non-communicable diseases, compared with $28 million and $91 million, respectively, for social determinants of health, and health and the environment) [45].
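The scale of this imbalance is easy to make concrete; the short Python check below simply restates the 2014-15 allocation figures quoted above:

```python
# Programme budget 2014-15 allocations quoted above, in US$ millions [45].
communicable, noncommunicable = 841, 318
social_determinants, health_environment = 28, 91

disease_based = communicable + noncommunicable                 # 1159
determinant_based = social_determinants + health_environment   # 119

print(f"disease-based programmes:     ${disease_based} million")
print(f"determinant-based programmes: ${determinant_based} million")
print(f"ratio: {disease_based / determinant_based:.1f} to 1")  # ~9.7 to 1
```

In other words, on these figures the disease-based programme areas received roughly ten times the funding of the determinant-focused ones.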
The organization's dominant biomedical approach to its mandate is also likely influenced by the dominance of medical professionals [30], who may have excellent technical knowledge of health issues, but not the necessary expertise to support political negotiations between member states, which require skills in convening negotiations and facilitating consensus-building [6,15]. The lack of staff with other backgrounds may also limit the organization's capacity to advocate for health in other sectors such as trade, environment, or labour and migration. The need to review WHO's staffing was noted in a recent analysis of the organization by Chatham House (the Royal Institute of International Affairs), which argued that "addressing the social, economic and environmental determinants of health and non-communicable disease, and advising countries on the attainment of universal health coverage and financial protection would seem to demand a very different distribution of skills from that which exists currently" [14]. There is likely also a complex interaction between the external image portrayed by WHO as a 'doctor of the world' (which is undoubtedly influenced by the external pressures noted above) and the medicalization of its internal organizational culture.

Externally, the lack of effective institutional mechanisms to facilitate cross-sectoral collaboration is a critical barrier. Interviewees suggested that existing frameworks for understanding the interaction between public health and other sectors (such as the social determinants of health) and for designing interventions that engage other sectors in improving public health (such as the Health-in-All-Policies approach) are currently insufficient on their own for enabling dialogue and collaboration between the different sectors affecting global health. Furthermore, interviewees noted pressures between different UN bodies which ensure they stick firmly to their 'space', even when they recognize the need for shared responsibility to fulfill their mandates. One such pressure, in simplified terms, may follow the logic that 'if WHO is telling us that major determinants of global health are sustainability and stable employment, perhaps we should fund UNEP and ILO to achieve improved health'. There is hence a perceived danger in stepping too far away from the biomedical understanding of health, in that WHO may cede power and resources to other IGOs. This concern extends internally to relationships between clusters and departments within WHO due to competition for funds and resources, and indeed functions at a national level between government departments. It is important to note that these pressures are not unique to WHO. Crucially, these pressures and challenges, of trying to work across sectors to enhance human wellbeing in an increasingly complex world, are in no way unique to multilateral organizations. They are common issues that national governments, civil society groups, and businesses face on a regular basis. Indeed, despite the predominance of the biomedical frame throughout WHO, it appears that the organization's senior leadership has a firm understanding and vision of how the agency ought to collaborate and engage in cross-sectoral collaboration, within the boundaries set by its constitution. Given the pressures outlined above, a disconnect between the culture, competence, activities and public perception of WHO and its senior leaders is to be expected.

There are three main limitations to this study. First, its conclusions are based on a limited number of interviews, and the possibility exists that important perspectives may have been missed which could have been captured by a larger sample. This is partially addressed by the purposive sampling of interviewees with a high degree of seniority and leadership experience in the WHO, giving them a firm understanding of the organization's strategic direction, organizational culture, and the various challenges experienced in managing its programming. Second, it is well known that global governance for health involves the interaction of many actors, including governments, civil society, businesses, and public-private partnerships; bias may result from seeking perspectives from only one of these potential stakeholders. Third, as the study is specific to WHO and its current situation, generalisation of its results to other organizations or sectors may be inappropriate.

Conclusion
A changing global environment is placing new and complex demands on the UN System, including WHO. These require a broader approach to promoting human health and wellbeing, which is conceptualized in WHO's constitution and well understood by senior WHO officials, but has yet to be sufficiently implemented.
In contrast, there are a number of external and internal pressures on the organization which have created an organizational culture and operational structure which focuses on a narrow, technical approach to global health, prioritizing disease-based, siloed interventions over cross-sectoral collaboration. Furthermore, conceptual frameworks such as the social determinants of health framework and Health-in-All-Policies are not enough. Member states must incentivize and support inter-governmental organizations in pursuing collaboration. There is also a need for fora, platforms and institutional mechanisms to facilitate cross-institutional discussions and negotiations. New forms of operationalizing and enhancing cross-sectoral partnerships must become a priority for everybody seeking to improve global governance for health.
Effects of Alpha Lipoic Acid Supplementation on Serum Levels of Oxidative Stress, Inflammatory Markers and Clinical Prognosis among Acute Ischemic Stroke Patients: A Randomized, Double Blind, TNS Trial

Purpose: Stroke is one of the most common conditions causing death. There have been few studies examining the effects of alpha lipoic acid (ALA) on stroke patients. In this regard, the present randomized controlled clinical trial was conducted to examine the effects of ALA supplementation on serum albumin, and inflammatory and oxidative stress markers, in stroke patients. Methods: The present parallel randomized controlled clinical trial involved 42 stroke patients who were over 40 years old and receiving enteral feeding. The participants were randomly assigned into two groups, and 40 patients ultimately completed the study. Patients in the alpha lipoic acid group (n=19) took a 1200 mg ALA supplement daily along with their meal, and participants in the control group underwent the routine hospital diet for 3 weeks. Fasting blood samples were obtained, and albumin, oxidative stress, and inflammatory indices were assessed at baseline as well as at the end of the trial. Results: After 3 weeks, treatment of patients with ALA led to a significant decrease in tumor necrosis factor alpha (TNF-α) and interleukin 6 (IL-6) levels (P=0.01) compared to baseline, but serum levels of albumin, total antioxidant capacity (TAC), malondialdehyde (MDA), high-sensitivity C-reactive protein (hs-CRP), IL-6 and TNF-α did not change significantly versus the control group (P>0.05). Conclusion: ALA did not significantly change the serum levels of albumin, inflammatory markers, or antioxidant capacity indices in stroke patients compared with the control group. More clinical trials with large sample sizes and long duration are needed to clarify the effects of ALA in these patients.

Introduction
Stroke is the second leading cause of death across the world. 1 It has various clinical, social, and economic burdens, and its management demands a comprehensive effort by both basic scientists and clinicians. 2 Ischemic stroke, which is caused by either a sudden reduction in cerebral blood flow or its complete blockage, accounts for 85% of all strokes. 3 Despite sophisticated medical management and neurosurgical techniques, the mortality rate of stroke remains remarkable. 4 Wide lines of studies have focused on well-defined risk factors of stroke and its management strategies. Both lifestyle and diet-related factors contribute to its high mortality rate. 1 A better understanding of risk factors would improve treatment outcomes and the prognosis of patients. It is well established that oxidative stress is a fundamental mechanism causing brain injury in stroke patients. 5 In fact, increased levels of reactive oxygen species (ROS) (due to lower antioxidant defenses and higher oxygen levels after reperfusion), lipid oxidation, and iron induce oxidative stress. Therefore, the increase in ROS production and consequently in oxidative stress, as well as the increase in inflammatory mediators, are among the earliest and most potent mechanisms involved in brain damage during stroke. 1,5 Modulation of free radicals derived from metabolic processes is one of the most important strategies in preventing reperfusion injury during stroke. This process is mainly mediated by the antioxidants of the body. 6
Alpha lipoic acid (ALA, 1,2-dithiolane-3-pentanoic acid), a neuro-protective and neuro-restorative antioxidant with the ability to cross the blood-brain barrier, is a strong free radical scavenger in the brain. 7,8 There is growing interest in assessing the effects of ALA in central nervous system diseases such as Parkinson's disease, multiple sclerosis, Alzheimer's disease, and spinal cord injury. [9][10][11] Lipoic acid protects the brain against reperfusion injury in the early stages of cerebral ischemia. 12 Primary experimental evidence has revealed that treating rats subjected to cerebral artery occlusion with lipoic acid protects the brain against ischemic stroke by modulating several inflammatory pathways. 3,4,[12][13][14] Despite the positive results from a clinical trial on the protective effects of lipoic acid on clinical outcomes after acute ischemic stroke, 6 there is no evidence confirming its role in modulating the inflammatory process and oxidative stress in stroke patients. It seems that clinical trials building on these basic findings are required to clarify the effects of lipoic acid on the modulation of free radical levels and oxidative stress as main mediators in stroke. This is especially important in Iran, where the prevalence of stroke is rising; the disease severely affects patients' quality of life, making control of its complications a priority. Therefore, this randomized controlled clinical trial was conducted to examine the effects of ALA supplementation on serum albumin, and inflammatory and oxidative stress markers, in stroke patients.

Participants
The target population of the present randomized clinical trial (RCT) was patients with acute ischemic stroke of the cerebral hemisphere who had been referred to Imam Reza hospital, Tabriz University of Medical Sciences, Tabriz, Iran (Tabriz stroke registry project). The patients were screened according to the inclusion and exclusion criteria, and 42 of them were enrolled in this study, among whom 40 completed the study (Figure 1). Patients who met the following criteria were eligible for inclusion: admitted for the first time or diagnosed with acute ischemic stroke; over 40 years of age; a National Institutes of Health Stroke Scale (NIHSS) score of less than 20; and enteral or oral feeding started within the first 48 hours of admission. The exclusion criteria were: being in a vegetative state; having chronic conditions (including kidney failure, undergoing dialysis, liver failure and cirrhosis at admission, seizure, cancer, gastrointestinal bleeding, uncontrolled diabetes, any history of heart attack during the past three months, respiratory disease, heart failure, and hematologic disease); any changes in diagnosis or feeding method; receiving enteral feeding for less than 10 days; taking immunosuppressive medications; alcohol and drug abuse; death in the first 10 days of admission; and lactose intolerance. Patients were assigned to intervention and control groups by a block randomization method. Randomization was done with computer-generated random numbers. Allocation of participants into the groups was done by a third person who was not directly involved in the project. The intervention group took a 1200 mg ALA supplement daily along with the ordinary meal, and participants in the control group underwent the routine hospital diet for 3 weeks.
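The paper states only that allocation used block randomization with computer-generated random numbers; it does not report the block size or the software used. As a minimal sketch of the technique, assuming a hypothetical block size of four:

```python
import random

def block_randomize(n_patients: int, block_size: int = 4, seed: int = 42):
    """Assign patients to 'ALA' or 'control' in balanced blocks.

    Illustrative sketch only: the block size and seed are assumptions,
    not details reported in the trial.
    """
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_patients:
        # Each block contains equal numbers of both arms, then is shuffled,
        # keeping group sizes approximately balanced throughout enrolment.
        block = ["ALA"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        arms.extend(block)
    return arms[:n_patients]

print(block_randomize(42)[:8])  # allocation sequence for the first 8 patients
```

In practice the sequence would be generated once, concealed from the recruiting clinicians, and applied by a third party, as the trial describes.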
All participants received kitchen-made feeding with a composition of 1.5 g/day protein and 25 kcal/kg energy per day. The ALA supplement was produced by Caren co. We performed all measurements both at baseline and at the end of the trial. We calculated the required sample size based on data from a previous study 15 by considering serum interleukin 6 (IL-6) as a key dependent variable, a type I error of 0.05, and a study power of 80%. Based on the suggested formula for clinical trials, and taking into account a possible dropout of 20%, we came up with a sample size of 20 stroke patients for each group.

Study design
This project was done in accordance with the guidelines laid down in the Declaration of Helsinki (1964). Individual questionnaires covering demographic characteristics, past medical and drug history, as well as socio-economic status were completed for each patient by their family through comprehensive face-to-face interviews.

Biochemical measurements
Prior to the intervention and at the end of the trial, after at least 8 hours of overnight fasting, 7 mL venous blood samples were obtained from each patient. Blood samples were immediately centrifuged at 4000 rpm for 15 minutes under aseptic conditions. Serum samples were separated into 1 mL micro-tubes and immediately stored at -80°C. Serum levels of albumin, total antioxidant capacity (TAC), malondialdehyde (MDA), IL-6, high-sensitivity C-reactive protein (hs-CRP), and tumor necrosis factor alpha (TNF-α) were measured using enzyme-linked immunosorbent assay (ELISA) kits at the baseline and endpoint of the trial.

Clinical prognosis measurements
The NIHSS is a systematic method that provides a quantitative measure of stroke-induced neurological damage and is designed for patients with acute stroke. The scale comprises 15 items covering the results of the stroke patient's neurological examination, and assesses the effect of acute stroke on the level of consciousness, language, eye movements, motor force, ataxia, and sensorium. Each item is scored on a 3- or 5-point scale, where zero represents the normal state; the maximum total score is 42. The modified Rankin Scale (mRS) evaluates independence rather than the performance of specific tasks. It consists of seven grades, ranging from 0 to 6, where zero indicates no symptoms, five severe disability, and six death.

Statistical analyses
Statistical analyses of all data were performed using SPSS software version 21. Results were expressed as mean ± standard deviation (±SD). We used the one-sample Kolmogorov-Smirnov test to assess the normality of the data distribution. Baseline characteristics of the patients in the alpha lipoic acid and control groups were compared using the independent sample t test and the chi-square test for quantitative and qualitative variables, respectively. Paired t test analysis (following the intention-to-treat principle) was used to identify changes within each group after the intervention. A P value <0.05 was considered statistically significant.
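The sample-size calculation above cites a standard two-group formula with α = 0.05, power = 80%, IL-6 as the key outcome, and 20% dropout, but does not report the IL-6 difference or standard deviation taken from the earlier study. The sketch below uses the usual normal-approximation formula for comparing two means, with placeholder effect values chosen only so the arithmetic reproduces the reported 20 patients per group:

```python
from math import ceil
from scipy.stats import norm

alpha, power, dropout = 0.05, 0.80, 0.20
# Placeholder IL-6 values (pg/mL); the true figures from the cited prior
# study [15] are not reported in the paper.
delta, sd = 4.0, 4.0

z_alpha = norm.ppf(1 - alpha / 2)  # 1.96
z_beta = norm.ppf(power)           # 0.84

# Per-group n for a two-sample comparison of means (normal approximation):
# n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2  # about 15.7

n_per_group = ceil(ceil(n) / (1 - dropout))  # inflate for 20% dropout
print(round(n, 1), "->", n_per_group)        # 15.7 -> 20 per group
```

With these placeholders, a detectable difference of one standard deviation gives 16 evaluable patients per group, which the dropout adjustment inflates to the 20 per group reported by the trial.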
Results
All 40 patients (19 in the alpha lipoic acid group and 21 in the control group) completed the 3-week trial. No patient reported adverse effects or symptoms with ALA supplementation during the trial. In the lipoic acid group, 68.4% of the participants were men and 31.6% were women (13 males, 6 females); in the control group, 81.0% were men and 19.0% were women (17 males, 4 females). The demographic characteristics and anthropometric measures of the participants are shown in Table 1. There were no significant differences between the two groups in baseline variables, including age, sex, marital status, education, smoking, co-morbidities, and lesion location (P > 0.05). The serum levels of albumin, TAC, MDA, TNF-α, and hs-CRP before and after the intervention, and their changes, are shown in Table 2. The only significant between-group differences were in MDA, at both baseline (P = 0.01) and endpoint (P = 0.03). Supplementation with lipoic acid produced no significant change in antioxidant capacity compared with the control group, and for the other inflammatory factors the changes after 3 weeks did not differ significantly between the study groups (P > 0.05). Within the intervention group, 3 weeks of ALA treatment led to a significant decrease in serum TNF-α and IL-6 (P = 0.01) and a non-significant increase in TAC (P = 0.07). There were no significant differences between baseline and endpoint concentrations of albumin, MDA, and hs-CRP in the alpha lipoic acid group (P > 0.05). Table 3 provides the mean changes in clinical measurements from baseline to endpoint. During the study, NIHSS and mRS decreased significantly in both the lipoic acid group (7.10 ± 4.34 vs. 5.52 ± 3.58, P = 0.01; 1.73 ± 1.09 vs. 1.47 ± 0.96, P = 0.02) and the control group (5.28 ± 4.57 vs. 3.42 ± 1.07, P = 0.02; 1.85 ± 1.49 vs. 1.47 ± 1.28, P = 0.04). However, there were no between-group differences in the clinical prognosis indices.

Discussion
Our study demonstrated that ALA supplementation for 3 weeks led to a significant decrease in serum TNF-α and IL-6 in the intervention group, but no significant differences were found between the intervention and control groups in the changes of inflammatory and antioxidant biomarkers. To the best of our knowledge, this is the first clinical trial of the effects of ALA supplementation on inflammatory and oxidative stress biomarkers in stroke patients. The neuro-protective effects of ALA have been studied frequently in central nervous system diseases. 10 Its reactive oxygen scavenging capacity and its ability to regenerate endogenous antioxidants necessary for repair systems are thought to underlie the neuro-protective and neuro-restorative roles of lipoic acid, respectively. 10 Our results, from the first study to examine the effects of ALA supplementation on the antioxidant profile of stroke patients, are inconsistent with previous studies on related conditions. We did not observe significant effects of ALA supplementation on serum concentrations of TAC and MDA. It has been reported that TAC levels improved in multiple sclerosis patients with ALA supplementation, although, in line with our results, no significant changes in MDA levels were observed. 16 Ischemic reperfusion in stroke patients appears to lead to overproduction of ROS and a consequent increase in oxidative stress mediators lasting several months after stroke. 17,18 Short-term supplementation with ALA may therefore be unable to affect the body's antioxidant profile, and it may be advisable to continue treatment for at least 3 months. Three-week supplementation with ALA did lead to a significant decrease in serum TNF-α and IL-6 concentrations in the intervention group.
However, there were no significant between-group differences in TNF-α, IL-6, or hs-CRP, nor in serum albumin (a negative acute phase protein). Our results are not consistent with the findings of a previous trial of ALA supplementation that showed a significant decrease in serum inflammatory cytokines in MS patients. 19 Serum inflammatory markers usually remain high for months after stroke. 20,21 Because of the acute inflammation, levels of negative acute phase proteins, including albumin, also remain low in stroke patients for over a month. Hypoalbuminemia is frequently reported among stroke patients and correlates with a pro-inflammatory pattern on serum protein electrophoresis. 22,23 Examining changes in serum inflammatory indicators therefore seems to require relatively long-term trials. It should also be mentioned that stroke patients suffer from various comorbidities, including hypertension and diabetes, which are typically accompanied by chronic inflammation and increased levels of pro-inflammatory cytokines that complicate treatment. Comprehensive therapeutic strategies with long-term interventions should therefore be developed.

Numerous studies in stroke models have examined the potential mechanisms of action of ALA. 24 As ROS generated during ischemic reperfusion cause tissue injury in stroke, ROS scavengers are considered in CNS-protective therapies. 25 It has been shown that the protective effects of ALA against cell death following stroke are mediated by increasing antioxidant enzyme levels and scavenging ROS. 12,13 Immediate injection of ALA decreases infarct size in the rat, partially via insulin receptor activation. 4,26 Experimental evidence suggests that inhibition of peripheral TNF-α and down-regulation of brain microglial activation by ALA protect the rat brain against ischemic stroke. 14 ALA treatment appears to induce expression of nuclear factor erythroid 2-related factor 2 and heme oxygenase-1, which consequently decreases intracellular ROS, infarct volume, brain edema, and oxidative damage, and promotes neurologic recovery in rats. 3 ALA had a significant positive effect on NIHSS and mRS values within the group, but no statistically significant between-group differences were found at baseline or after adjustment for potential confounders. No side effects of ALA supplementation were reported in the present study.

Some limitations should be considered in interpreting our results. Although all patients were on the same hospital diet, individual intakes were not recorded; the dietary intakes and nutritional status of the patients, important confounders of inflammatory and oxidative stress indicators, were therefore not evaluated. Moreover, the duration of supplementation may not have been long enough to detect the changes examined here, and the small sample size was another factor affecting the findings. More clinical trials with larger sample sizes are therefore required.

Conclusion
In conclusion, 3 weeks of supplementation with ALA had no significant effect on serum albumin or on oxidative stress and inflammatory biomarkers compared with the control group in stroke patients. However, more clinical trials with larger sample sizes and longer durations are required to clarify the effects of ALA in stroke patients.

Results are described as mean ± standard deviation (SD). a MD (95% CI) and P value based on the independent-sample t test.
b MD (95% CI) and P value based on the paired-sample t test.
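As a minimal sketch of the within-group analysis behind footnote b (the values are hypothetical; only the procedure mirrors the tables):

```python
import numpy as np
from scipy import stats

def paired_md_ci(before, after, alpha=0.05):
    """Mean difference (after - before) with a 95% CI and the
    paired t-test P value, as reported in the tables."""
    diff = np.asarray(after, float) - np.asarray(before, float)
    md, sem = diff.mean(), stats.sem(diff)
    ci = stats.t.interval(1 - alpha, len(diff) - 1, loc=md, scale=sem)
    _, p = stats.ttest_rel(after, before)
    return md, ci, p

# Hypothetical TNF-alpha values (pg/mL), for illustration only
print(paired_md_ci([12.1, 10.4, 15.3, 11.8, 13.0],
                   [9.8, 9.9, 12.6, 10.2, 11.1]))
```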
Ocular Manifestations of Stevens-Johnson Syndrome and Toxic Epidermal Necrolysis

Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are rare and sometimes life-threatening hypersensitivity mucocutaneous diseases triggered mostly by medications and infections. The major involved tissues are the mucous membranes of the oral, gastrointestinal, respiratory, integumentary, and gynecologic tracts. Even after recovering from the skin disease without sequelae, survivors can develop serious ocular complications leading to blindness despite local and systemic therapy. There is no definitively effective systemic or local treatment for SJS/TEN, so early detection and aggressive treatment are important for the long-term prognosis of the eye. The eyelid margin, palpebral conjunctiva, and fornix should be checked thoroughly to detect the cicatricial changes that cause chronic ocular surface failure, such as limbal stem cell deficiency and complete ocular surface keratinization. Amniotic membrane transplantation and cultivated oral mucosal grafts are beneficial in reducing the risk of ocular surface failure.

INTRODUCTION

The ocular surface is mainly composed of the conjunctiva and corneal epithelium covered by a thin layer of tear film. Proper eyelid function, lacrimal gland tear production, meibomian gland function, and sensorineural factors are essential for homeostasis [1]. Even one dysfunction can cause permanent ocular surface problems, which lead to blindness, or ocular surface failure. Ocular surface failure is classified into two major types: the first is limbal stem cell deficiency (LSCD), in which the corneal epithelium is replaced by conjunctival epithelium, and the second is squamous metaplasia, in which the corneal or conjunctival epithelium exhibits keratinization and loss of mucosal epithelial characteristics, including the expression of goblet cells. Although there are some options to treat ocular surface failure, both types eventually lead to the dreadful result, blindness.

Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are hypersensitivity mucocutaneous diseases triggered mostly by medication and infection. They have a direct effect on the skin and on no fewer than two mucous membranes, including the eye, and can sometimes be life-threatening [2]. SJS and TEN are variants belonging to the same class and are defined by the amount of epidermal detachment: SJS, 10% or less of total body surface area involvement; TEN, 30% or greater involvement; and SJS/TEN overlap, involvement between 10% and 30% [3]. Clinical findings include a prodrome of fever and malaise, followed by the development of a generalized, tender cutaneous eruption consisting of a variety of morphologic macules, papules, atypical target lesions, and vesicles or bullae [4]. Serious dermatologic manifestations may lead a physician to overlook the ocular sequelae, which are irreversible and fatal to visual acuity through destruction of the ocular surface. The incidences of SJS and TEN are 9.2 and 1.9 per million person-years, respectively [5]. The rarity of SJS/TEN is an obstacle to well-controlled clinical trials and evidence-based treatment. In this review of the published literature, the discussion focuses on ocular manifestations, prognostic factors, and treatments to provide a basic understanding for ophthalmic and non-ophthalmic physicians.
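The SJS/TEN spectrum defined above reduces to a simple threshold rule; a minimal sketch follows (the boundary handling at exactly 10% and 30% follows the wording quoted from reference [3]):

```python
def classify_epidermal_detachment(tbsa_percent: float) -> str:
    """Classify by percent of total body surface area (TBSA) detached."""
    if tbsa_percent < 0 or tbsa_percent > 100:
        raise ValueError("TBSA must be between 0 and 100")
    if tbsa_percent <= 10:
        return "SJS"              # 10% or less
    if tbsa_percent < 30:
        return "SJS/TEN overlap"  # between 10% and 30%
    return "TEN"                  # 30% or greater

for tbsa in (5, 15, 35):
    print(tbsa, classify_epidermal_detachment(tbsa))
```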
PATHOPHYSIOLOGY

The pathogenesis of SJS/TEN is controversial. The genetic risk factors are drug-specific and vary among populations and/or ethnic groups. Genetic testing for human leukocyte antigen (HLA)-B*1502 is available and is recommended by the U.S. Food and Drug Administration for one drug, carbamazepine, in at-risk (Asian) populations; HLA-B*1502 was strongly associated with carbamazepine-induced SJS/TEN in Han Chinese [6]. Allopurinol, a xanthine oxidase inhibitor commonly prescribed to treat gout, is known to be one of the drugs most frequently associated with SJS and TEN [7], and there is a strong association between HLA-B*58:01 and allopurinol-induced SJS/TEN among Koreans [8,9]. Genetic associations have also been demonstrated for compound medications, such as over-the-counter cold medications [10].

The molecular pathogenesis of SJS/TEN is also not clear. A cytotoxic T lymphocyte (CTL) immune-mediated reaction is known as the major immunologic component of SJS/TEN [11,12]. Immunological memory may cause the early manifestation of recurrent SJS/TEN within 48 hours of repeated provocation. The blister fluid of SJS/TEN patients shows a predominance of activated CD8+ T lymphocytes, but natural killer (NK) cells and other cytotoxic molecules have also been implicated [13].
Dysregulation of the Fas pathway has been implicated in the pathogenesis of a variety of tissue-destructive processes, including graft-versus-host disease, multiple sclerosis, stroke, and TEN [14]. Viard and colleagues showed that Fas may play a key role in inducing apoptosis of keratinocytes in TEN [15]. They reported that keratinocyte death in TEN is mediated through activation of Fas by elevated expression of Fas ligand (FasL) on the keratinocyte cell surface; high levels of soluble FasL (sFasL) in TEN serum and in frozen skin sections of TEN patients induced apoptosis in a Fas-sensitive cell line, while apoptosis was blocked by an anti-FasL monoclonal antibody. However, Chang and colleagues found that sFasL levels peaked 24-48 hours after the onset of significant skin damage, suggesting that sFasL may be a byproduct of FasL expressed on epidermal cells rather than a direct inducer of apoptosis [16]. Other immunologic components have been suggested, such as gelatinase A (MMP2) and B (MMP9) [17], TNF-R1 and TNF-related apoptosis-inducing ligand (TRAIL) [18], IFN-γ, TNF-α, sFasL, IL-18, and IL-10 [19], and perforin/granzyme B [20].

OCULAR MANIFESTATIONS

The ocular manifestations of SJS/TEN change according to the clinical stage: acute, subacute, and chronic (Table 1). The acute stage is usually within 2 weeks after the onset of symptoms. The most common ocular condition observed at this stage is bilateral conjunctivitis, which occurs in 15-75% of patients [21,22]. Another 25% of hospitalized patients develop conjunctival and/or corneal ulcerations. Careful lid eversion with fluorescein staining is mandatory to inspect the tarsal and bulbar conjunctiva.

ACUTE PHASE

The pathogenic features of the acute phase of SJS/TEN are keratinocyte apoptosis and the secondary effects of inflammation and loss of the ocular surface epithelium. The rate of acute ocular involvement is 50% to 88% in SJS/TEN [23][24][25]. Epithelial loss of the tarsal conjunctiva and eyelid margin, with or without pseudomembrane or true membrane formation, can cause early symblepharon formation and fornix foreshortening. Corneal epithelial defects can lead to corneal ulceration and perforation [26,27]. Meibomianitis has a high prevalence, affecting more than half of patients [23].

Power and colleagues proposed a schema for grading the severity of ocular involvement during the acute phase of the disease [24]. In summary, mild involvement was defined as ocular signs such as lid edema and conjunctival injection that require only prophylactic antibiotics or lubricants and routine eye care and that resolve completely before discharge from hospital. Moderate involvement was defined as requiring specific treatment; conjunctival membranes and corneal epithelial loss of more than 30% were typical complications, but vision usually was not affected and the ocular complications were nearly resolved before discharge. Severe involvement meant a sight-threatening status in which the patient needed continued specific ophthalmic treatment after discharge; symblepharon indicated active corneal disease. Severity was determined by the more severely affected eye.
SUBACUTE PHASE

Even though the skin lesions mostly resolve, chronic cicatrizing conjunctivitis with trichiasis and irregular eyelid margins may persist because of ongoing inflammation and ulceration of the ocular surface. Lid margin inflammation targets the meibomian glands in particular and causes their widespread destruction, in addition to distichiasis. The abnormal eyelid with misdirected and/or distichiatic lashes mechanically abrades the corneal epithelium, leading to corneal epithelial defects, infection, and stromal scarring (Fig. 1). Severe inflammation and persistent ulceration of the tarsal conjunctiva and lid margins lead to lid margin keratinization and tarsal scarring [28,29].

CHRONIC PHASE

Sotozono and colleagues developed a grading system for the chronic ocular manifestations in patients with SJS/TEN comprising corneal, conjunctival, and eyelid complications; they assessed 13 components and scored each on a scale from 0 to 3 according to severity [30]. Cicatricial changes such as symblepharon can interfere with lid closure and blinking, and sometimes restrict ocular motility [33]. Contracture of the palpebral conjunctiva leads to cicatricial entropion and trichiasis [34]. Tarsal conjunctival scarring can be associated with eyelid malpositions and other disorders, including ectropion, entropion, trichiasis, distichiasis, meibomian gland atrophy and inspissation, punctal occlusion, and keratinization of the eyelid margin and the tarsal and bulbar conjunctival surfaces [28,33] (Fig. 2). A vicious cycle of ocular surface inflammation and scarring disrupts the delicate architecture and function of the eyelids and tear film, which leads to further progression of ocular surface damage and increased inflammation.

Chronic ocular sequelae lead to ocular surface failure, such as chronic LSCD or squamous metaplasia of the ocular surface, and correlate with the development of late corneal blindness [28]. Symblepharon and eyelid malposition often worsen over time. Lipid tear deficiency, in which conjunctival cytology shows a marked decrease in goblet cell density [23], and trauma from lid margin abnormalities are the main pathogenic factors. A keratinized inner eyelid surface can lead directly to chronic corneal inflammation, neovascularization, scarring, and LSCD [28,29]. Scarring in the fornices and in the lacrimal gland ducts causes severe aqueous tear deficiency and xerosis [35].

PROGNOSTIC FACTORS

The prevalence of specific ocular abnormalities after SJS/TEN varies widely across published reports; the lack of standardized criteria for grading the severity of acute ocular involvement may yield variable complication rates across different studies [23,28,36]. Reported mortality rates vary between 1-5% for SJS and 25-35% for patients with TEN. Predictors of mortality include increasing age, an increasing number of chronic conditions, infection (septicemia, pneumonia, and tuberculosis), hematologic malignancy (non-Hodgkin's lymphoma, leukemia), and renal failure (P≤0.03 for all) [5].
Physicians can miss the best opportunity to treat the eye through unawareness of its early signs and symptoms, because they are apt to focus on the patient's life-threatening condition [9]. Compared with Sjögren syndrome (SS), another severe chronic ocular surface inflammatory disease, SJS/TEN is worse in terms of visual acuity, clinical ocular surface scores, and subjective scores [37]. Acute-stage severity and etiology did not correlate with ocular involvement; in general, however, chronic ocular complications were related to the severity of ocular involvement in the acute phase. Early ophthalmic assessment and frequent follow-up are helpful because ocular involvement represents the first long-term complication in patients with TEN [23].

Predictive factors associated with acute ocular involvement in SJS/TEN are young age and a history of taking NSAIDs or cold remedies [38]. Case series suggest that mortality in children is lower, ranging between 0% and 17%; antibiotics, anticonvulsants, and nonsteroidal anti-inflammatory drugs are the most commonly implicated etiologies in children [39]. The degree of ocular complication increased the prevalence of visual disturbance and eye dryness [38].

The SCORTEN is a severity-of-illness score for SJS and TEN based on a minimal set of well-defined variables derived from the Simplified Acute Physiology Score (SAPS II) and is calculated within 24 hours of admission [2]. In the acute stage, patients with epidermal detachment of more than 10% of the total body surface showed more frequent ocular damage, but the SCORTEN value did not correlate with the severity of eye involvement in the acute setting [41]. The severity of the acute ocular disease and abnormal laboratory tests were not significant risk factors for late complications [42].
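For orientation, here is a minimal sketch of a SCORTEN-style tally (one point per risk factor). The seven items and cutoffs below are the commonly cited ones and are assumptions here, since this review does not enumerate them:

```python
def scorten(age_years: int, heart_rate_bpm: int, has_malignancy: bool,
            detached_tbsa_percent: float, urea_mmol_l: float,
            glucose_mmol_l: float, bicarbonate_mmol_l: float) -> int:
    """Sum of seven binary risk factors, each worth one point."""
    return sum([
        age_years > 40,
        heart_rate_bpm > 120,
        has_malignancy,
        detached_tbsa_percent > 10,
        urea_mmol_l > 10,
        glucose_mmol_l > 14,
        bicarbonate_mmol_l < 20,
    ])

print(scorten(55, 130, False, 25, 12.0, 9.0, 18.0))  # -> 5
```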
Conjunctivalization of the cornea and cicatricial eyelid and conjunctival complications correlate with poor vision. Appropriate early treatment cannot prevent slight cicatricial changes in the eyelid, and even slight cicatricial changes can cause corneal complications. The lid margin is a commonly affected site.

TREATMENT

The goal of treatment of SJS/TEN is recovery of the systemic condition and prevention of cicatricial ocular complications [43]. Patterns of chronic ocular disease after the acute episode have been classified as mild/moderate SJS, severe SJS, ocular surface failure, recurrent episodic inflammation, scleritis, and progressive conjunctival cicatrization resembling mucous membrane pemphigoid [44,45]. The severity of ocular involvement is classified as mild, moderate, or severe. To date, there is no evidence that systemic immunomodulatory therapy improves the final visual outcome or chronic ocular complications in SJS/TEN [40,46]. Ophthalmic treatment depends on severity, and topical eye drops cannot substitute for surgical treatment.

In acute SJS/TEN, topical medications for severe conjunctivitis with meibomianitis include a topical broad-spectrum antibiotic, a topical corticosteroid, and a preservative-free lubricant to protect the ocular surface. Early topical steroid treatment is important for improving the visual prognosis [31]. Amniotic membrane transplantation (AMT) at the acute stage of SJS/TEN, with a conformer, symblepharon ring, or ProKera, should cover the whole ocular surface [47]. Cryopreserved amniotic membrane is applied over the lid margins, palpebral conjunctiva, and ocular surface, anchored in place with bolstered fornix sutures, perilimbal sutures, and a conformer [32,48,49]. AMT is usually performed in the acute stage, within the first 2 weeks after the onset of ocular involvement, to encourage rapid epithelial healing and to reduce inflammation and scarring of the ocular surface.

In the subacute or chronic stage, the ocular surface can be compromised. Severe dry eye occurs as the result of abnormalities of the tear film components and extensive ocular surface scarring, which lead to a combination of symblepharon formation, limbal stem cell deficiency, and recurrent or persistent corneal epithelial defects [46]. Keratinization of the posterior lid margin and the subsequent repeated microtrauma of the cornea with every blink cause the cornea to become damaged, vascularized, and inflamed. Chronic conjunctival cicatrization deforms the lid margins, causing entropion, trichiasis, and distichiasis, which further exacerbates the corneal damage. Di Pascuale and colleagues also reported that the extent of eyelid and tarsal pathology had a significant impact on the occurrence of corneal complications [28]. The treatment strategy during the chronic stage of SJS/TEN revolves around preventing continual ocular surface damage, managing SJS/TEN sequelae, and visual rehabilitation.

Although large-diameter scleral contact lenses offer substantial benefits to patients with ocular involvement of SJS/TEN, lens fitting can be a problem in eyes with symblepharon [29,50]. In addition, patient compliance may be less than optimal after SJS, and the high cost is a barrier to use [29]. Long-term use of bandage contact lenses and scleral lenses can lead to complications, especially in chronically dry eyes [51]. Mucous membrane grafting (MMG) is an efficient treatment for reducing keratinization of the palpebral conjunctiva and eyelid margin [48,52]. McCord and colleagues reported the earliest clinical series of buccal MMG in SJS, and Iyer and colleagues supported the efficacy of MMG in preventing lid margin keratinization [29,48,52].

In end-stage disease, patients have corneal blindness with severe dry eye. Generally, penetrating keratoplasty (PK) should not be performed in the setting of severe dry eye with SJS or TEN because PK does not supply the limbal region of the eye with corneal epithelial stem cells [31,32,48,49]. Limbal stem cell transplantation (LSCT) and cultivated oral mucosal epithelial transplantation (COMET) can be optimal choices to promote epithelial regeneration on a relatively wet ocular surface [53][54][55][56].
In general, patients with SJS/TEN have poorer graft survival rates than patients with LSCD from chemical burns or thermal corneal damage [32,55,56], because they present with serious preoperative conditions (e.g., persistent inflammation of the ocular surface, abnormal epithelial differentiation of the ocular surface, severe dry eye, and lid-related abnormalities). Additionally, patients with allogeneic LSCT inevitably need immunosuppressive treatment, and both their immunological defenses and the distribution of commensal bacteria on the ocular surface may become modified to some extent, resulting in postoperative bacterial infections, immunological rejection, or sustained ocular surface inflammation [57]. Thus, the long-term prognosis in these patients is poor. In 2002, Nakamura and colleagues first reported ocular surface reconstruction using tissue-engineered autologous oral mucosal epithelial sheets [53,54]. Since then, autologous COMET has been used around the world to treat eyes with severe ocular surface disorders, including SJS [58]. A recent summary report of 242 patients showed that 72% of eyes (126/175) were classified as successfully treated and 68% of eyes (142/210) experienced visual improvement in bilateral LSCD [58]. Naturally, patients who undergo COMET do not need immunosuppression, unlike those receiving LSCT.

Keratoprosthesis is the replacement of a damaged and opaque cornea with an artificial implant. The clinically used keratoprostheses are the Boston keratoprosthesis and the osteo-odonto-keratoprosthesis. The Boston keratoprosthesis has a collar-button design and consists of three components: a front plate with an optical stem, a back plate, and a titanium locking C-ring [59]. Corneal tissue sandwiched between the two plates is used to suture the device to the eye. The Boston keratoprosthesis is made in type I and type II formats, and type II is used for severe end-stage ocular surface diseases, including ocular cicatricial pemphigoid and SJS. The osteo-odonto-keratoprosthesis uses a rooted tooth and the surrounding intact alveolar bone as a plate, which carries a polymethyl methacrylate optical cylinder [60][61][62].

Fig. 1. Subacute/chronic phase Stevens-Johnson syndrome. Both eyes show superficial punctate erosion of more than half of the ocular surface. Lid margins show irregularity of the mucocutaneous junction and keratin deposition (A, C: right eye; B, D: left eye).

Fig. 2. Infrared photography of the meibomian gland. In the chronic phase of Stevens-Johnson syndrome, both eyelids show diffuse shortening and total loss of meibomian gland acini and atrophy of the ductal openings (white dotted lines show the borderlines of the meibomian gland; A, C: right eye; B, D: left eye).
CONCLUSION

The acute stage may present with bilateral conjunctivitis and loss of the ocular surface epithelium. The chronic stage manifests with cicatricial sequelae, persistent ocular surface inflammation, and a chronically dry surface. Although the use of immunosuppressive and immunomodulatory therapies is controversial, the role of AMT in the acute stage is well established, and it should be performed at the earliest possible opportunity. It is better to prevent symblepharon, eyelid malposition, dry eye, and corneal disease than to try to reverse the damage later. The chronic stage of SJS is usually characterized by severe dry eye, lid margin abnormalities, and LSCD. Some patients may benefit from systemic immunosuppressive therapy in the chronic stage, but overall, topical and systemic drugs have a limited role late in the disease. The most important factors in reducing chronic ocular surface failure are correcting lid margin deformities and protecting the ocular surface from lid margin keratinization. Oral MMG stabilizes the ocular surface and reduces the damage caused by the keratinized lid margin.

Table 1. Ocular manifestations and treatments according to the stage of Stevens-Johnson syndrome
Glioblastoma upregulates SUMOylation of hnRNP A2/B1 to eliminate the tumor suppressor miR-204-3p, accelerating angiogenesis under hypoxia

Glioma is the most common malignant tumor of the central nervous system in adults, and its tumor microenvironment (TME) is related to poor prognosis in glioma patients. Glioma cells can sort miRNAs into exosomes to modify the TME, and hypoxia plays an important role in this sorting process, but the mechanism is not yet clear. This study aimed to identify miRNAs sorted into glioma exosomes and to reveal the sorting process. Sequencing of glioma patients' cerebrospinal fluid (CSF) and tumor tissue showed that miR-204-3p tends to be sorted into exosomes. miR-204-3p suppressed glioma proliferation through the CACNA1C/MAPK pathway, and hnRNP A2/B1 accelerated exosome sorting of miR-204-3p by binding a specific sequence. Hypoxia plays an important role in this sorting: it upregulates miR-204-3p by upregulating the transcription factor SOX9, and it promotes the transfer of hnRNP A2/B1 to the cytoplasm by upregulating SUMOylation of hnRNP A2/B1, thereby eliminating miR-204-3p. Exosomal miR-204-3p promoted tube formation by vascular endothelial cells through the ATXN1/STAT3 pathway. The SUMOylation inhibitor TAK-981 blocked the exosome sorting of miR-204-3p, inhibiting tumor growth and angiogenesis. This study revealed that, under hypoxia, glioma cells eliminate the suppressor miR-204-3p by upregulating SUMOylation to accelerate angiogenesis; TAK-981 could be a potential drug for glioma.

INTRODUCTION

Glioblastoma is the most common primary malignancy of the adult central nervous system and is resistant to radiotherapy and chemotherapy [1]. The median survival of glioblastoma patients is approximately 15 months [1,2]. Many studies have revealed that the glioma immunosuppressive microenvironment promotes malignant behavior of glioma, but the formation of this microenvironment and the interactions between glioma cells, immune cells, and stromal cells within it are still not clear [3]. Exosomes are extracellular vesicles with diameters of 30-150 nm [4] that can transport cargo, including nucleic acids and proteins, from donor cells to target cells [5]. Exosomes mediate crosstalk between tumor cells and other cells in many kinds of tumors, such as colon tumors, breast tumors, and GBM [6][7][8]. Our previous studies showed that GBM cells and non-GBM cells in the TME can interact through exosomes, and that the noncoding RNAs (miRNAs, lncRNAs, and circRNAs) transported by exosomes play important roles in tumor growth, immune escape, and vasculogenic mimicry [9][10][11][12]. Recently, some studies found that certain noncoding RNAs are selectively loaded rather than randomly packaged into exosomes, with the sorting process depending on specific RNA-binding proteins (RBPs) [13,14]. Small ubiquitin-like modifier (SUMO) modification is a kind of posttranslational modification: SUMOylation is the process by which a SUMO protein (SUMO1, SUMO2, SUMO3, etc.) is conjugated to a specific lysine residue of the target protein with the assistance of SAE1, SAE2, and UBC9 [15][16][17].
The deSUMOylation process is mainly regulated by the sentrin-specific protease (SENP) family in mammals [17,18]. SUMOylation participates in many important biological processes, including protein degradation, DNA repair, and proliferation [19][20][21][22]. The members of the heterogeneous nuclear ribonucleoprotein (hnRNP) family are RBPs [23]. Studies have shown that some members of the hnRNP family, such as hnRNP A2/B1, hnRNP Q, and hnRNP A1, are involved in the exosome sorting of noncoding RNAs and that SUMOylation plays an important role in this sorting process, but the mechanism is still unclear [14,24,25].

To study the sorting mechanism of noncoding RNAs in exosomes and how these noncoding RNAs affect the GBM tumor microenvironment, we collected glioma tissue, normal tissue, and pre- and post-surgery cerebrospinal fluid (CSF) from 47 glioma patients at Qilu Hospital of Shandong University. We observed that miR-204-3p tended to be sorted into exosomes and that it suppressed tumors via the CACNA1C/MAPK pathway in GBM cells. We confirmed by pull-down and RIP assays that hnRNP A2/B1 mediates the exosome sorting of miR-204-3p, a process dependent on a specific motif and domain. Further experiments showed that hypoxia upregulates the transcription factor SOX9 to promote the synthesis of miR-204-3p and also upregulates UBC9 expression to promote SUMOylation of hnRNP A2/B1; SUMOylation-mediated cytoplasmic localization of hnRNP A2/B1 then promotes exosome sorting of miR-204-3p. miR-204-3p transported by exosomes promoted angiogenesis and migration of vascular endothelial cells through the ATXN1/STAT3 pathway. The SUMOylation inhibitor TAK-981 inhibited glioblastoma growth and angiogenesis by arresting the SUMOylation of hnRNP A2/B1.

miR-204-3p tends to be sorted into exosomes of GBM cells
We first performed whole-transcriptome sequencing of CSF exosomes (before and after surgery) and tumor tissue collected from 44 patients who underwent surgery for GBM at Qilu Hospital of Shandong University from November 2017 to October 2019. We also collected 12 nontumorous brain tissue samples from craniocerebral trauma patients at the Department of Neurosurgery of Qilu Hospital of Shandong University, the Second Hospital of Shandong University, and the 5th People's Hospital of Jinan Shandong University as a control group, and three control CSF samples from normal pressure hydrocephalus patients. Exosomes isolated from the CSF samples by ultracentrifugation were identified by Western blotting, Zeta View, and transmission electron microscopy (Supplementary Fig. 1a-c). The DESeq2 package in R was used to normalize the data and identify differentially expressed miRNAs in tissue and in CSF exosomes. We found 271 significantly differentially expressed miRNAs (P < 0.05 and absolute fold change >1.5) in glioma tissue and 433 in CSF exosomes. To identify miRNAs with a pronounced exosome-sorting pattern, we took the intersection of miRNAs upregulated in CSF exosomes before surgery and miRNAs downregulated in glioma tissue and found 11 candidate miRNAs (Fig. 1a-c).
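A minimal sketch of this intersection step, assuming DESeq2-style result tables with "log2FoldChange" and "pvalue" columns indexed by miRNA (the column names and cutoff handling are assumptions, not taken from the paper's code):

```python
import numpy as np
import pandas as pd

def candidate_mirnas(tissue_de: pd.DataFrame, csf_de: pd.DataFrame,
                     p_cut: float = 0.05, fc_cut: float = 1.5) -> set:
    """miRNAs downregulated in tumor tissue AND upregulated in
    pre-surgery CSF exosomes (|fold change| > fc_cut, p < p_cut)."""
    lfc = np.log2(fc_cut)
    down_in_tissue = tissue_de.index[(tissue_de["pvalue"] < p_cut) &
                                     (tissue_de["log2FoldChange"] < -lfc)]
    up_in_csf = csf_de.index[(csf_de["pvalue"] < p_cut) &
                             (csf_de["log2FoldChange"] > lfc)]
    return set(down_in_tissue) & set(up_in_csf)
```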
MiR-204-3p was selected for further research because of its high fold change and expression level in exosomes (Supplementary Fig. 1d, e). Because CSF exosomes can be secreted by several cell types in the central nervous system, we first wanted to confirm whether miR-204-3p was present in glioma cell exosomes. We found that miR-204-3p was present in the culture media of LN229 and U251 cells and was tolerant to RNase digestion (Fig. 1d). GW4869, a widely used exosome inhibitor, significantly reduced miR-204-3p expression in LN229 and U251 exosomes (Fig. 1e). Exosomes isolated from the GBM cell lines LN229 and U251 by ultracentrifugation were identified by Western blotting, Zeta View, and transmission electron microscopy (Supplementary Fig. 1f-h). These results showed that miR-204-3p tended to be sorted into GBM exosomes.

Fig. 1 miR-204-3p tended to be sorted into exosomes of GBM cells. a Volcano plot of miRNAs downregulated in glioma compared with normal tissue. b Volcano plot of miRNAs upregulated in CSF exosomes before surgery (pre-CSF) compared with CSF exosomes after surgery (post-CSF). c Venn diagram showing miRNAs downregulated in glioma and upregulated in pre-CSF. d RNase treatment was used to evaluate the stability of miR-204-3p in the medium supernatant of LN229 and U251 cells. e Treatment with the exosome inhibitor GW4869 was used to verify that miR-204-3p exists in exosomes.

miR-204-3p inhibits GBM growth through the CACNA1C/MAPK pathway
We first studied the function of miR-204-3p in GBM cell lines. miR-204-3p mimics and inhibitor were transfected into LN229 and U251 cells, respectively, and the transfection efficiency was confirmed by qRT-PCR (Supplementary Fig. 2a). CCK8, colony-forming assays, and EdU fluorescence staining showed that the miR-204-3p mimics inhibited the proliferation of GBM cell lines, while the miR-204-3p inhibitor showed no significant effect (Fig. 2a-d and Supplementary Fig. 2b, c); the ineffectiveness of the inhibitor may be due to the low expression level of miR-204-3p in GBM cell lines. Flow cytometry showed that miR-204-3p inhibited LN229 and U251 proliferation by arresting the cell cycle (Fig. 2b). To further confirm the function of miR-204-3p in vivo, a miR-204-3p-overexpressing LN229 cell line was constructed via lentiviral transfection, and the overexpression efficiency was confirmed by qRT-PCR (Supplementary Fig. 2d). An orthotopic murine GBM model showed that the miR-204-3p overexpression group exhibited reduced tumor volume and a longer survival period (Fig. 2e). HE staining showed that tumor volume decreased significantly in the miR-204-3p overexpression group (Fig. 2f), and immunohistochemistry of mouse brains showed that expression of the proliferation marker Ki67 decreased significantly in the overexpression group (Supplementary Fig. 2d).

miRDB, TargetScan, and TarBase were used to predict the target mRNAs of miR-204-3p (Fig. 3a). qRT-PCR was used to quantify the candidate mRNAs in LN229 and U251 cells after transfection with miR-204-3p mimics and inhibitor, respectively; the results indicated that CACNA1C may be a target of miR-204-3p in LN229 and U251 cells (Fig. 3b). Dual-luciferase reporter assays verified that the target of miR-204-3p in GBM is CACNA1C (Fig. 3c). KEGG pathway analysis showed that CACNA1C is upstream of the MAPK pathway (not shown here). Western blot analysis showed that the miR-204-3p mimics downregulated CACNA1C, p-AKT, and p-ERK expression, with no obvious change in total AKT or ERK (Fig. 3d). We further constructed CACNA1C-overexpressing cells via lentiviral transfection. CCK8 and colony-forming assays showed that the antitumor effect of the miR-204-3p mimics was rescued by overexpression of CACNA1C (Fig. 3e, f).
CACNA1C overexpression also rescued the miR-204-3p mimics-induced inhibition of the MAPK pathway (Fig. 3g). siRNA was used to knock down CACNA1C in GBM cell lines; CCK8, colony-forming, flow cytometry, and Western blot assays showed that knockdown of CACNA1C inhibited the proliferation of LN229 and U251 cells (Supplementary Fig. 3a-d). The orthotopic murine GBM model showed that CACNA1C overexpression rescued the tumor growth inhibition caused by miR-204-3p overexpression and significantly shortened the survival period (Fig. 3h, i). These results showed that miR-204-3p inhibits GBM growth through the CACNA1C/MAPK pathway.

hnRNP A2/B1 mediated exosome sorting of miR-204-3p
It has been reported that hypoxia plays an important role in the release of exosomes and affects exosome composition [9,26]. We found that hypoxia increased the miR-204-3p content of exosomes while reducing the miR-204-3p content of cells (Fig. 4a). These results verified that miR-204-3p tends to be sorted into exosomes and that this sorting process is influenced by hypoxia. It has been reported that the hnRNP family can mediate the exosome sorting of noncoding RNAs by binding specific RNA sequences. An hnRNP A2/B1 binding sequence at the 3′ end of miR-204-3p was revealed by StarBase, a website widely used to predict the binding motifs of RBPs (Fig. 4b). qRT-PCR was used to analyze the miR-204-3p content of LN229 and U251 cells and exosomes after transfection with small interfering RNA against hnRNP A2/B1; the knockdown efficiency was verified by Western blotting (Supplementary Fig. 4a). We found that knockdown of hnRNP A2/B1 did not significantly change the miR-204-3p content of cells or exosomes under normoxia but reversed the hypoxia-mediated exosome sorting of miR-204-3p (Fig. 4c, d). To further verify whether hnRNP A2/B1 binds miR-204-3p at a specific motif, biotin-labeled wild-type and mutant sequences of miR-204-3p were designed (Supplementary Fig. 4b). In vitro RNA pull-down assays revealed that the mutant sequence eliminated the interaction between miR-204-3p and hnRNP A2/B1 (Fig. 4e). To determine whether a specific domain of hnRNP A2/B1 is involved in binding miR-204-3p, Flag-labeled hnRNP A2/B1 deletion segments were constructed according to the hnRNP A2/B1 structure (Fig. 4f). RNA immunoprecipitation assays showed that miR-204-3p bound full-length hnRNP A2/B1 and the RRM1 motif segment (Fig. 4g). These results showed that hnRNP A2/B1 directly interacts with miR-204-3p and mediates exosome sorting of miR-204-3p under hypoxia. We also found that, after eliminating the effect of the exosome-sorting process on intracellular miR-204-3p by hnRNP A2/B1 knockdown, the miR-204-3p content was higher in the hypoxia group than in the normoxic group (Fig. 4c). This result suggested that hypoxia may promote miR-204-3p synthesis.
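A minimal sketch of scanning a miRNA sequence for a putative hnRNP A2/B1 binding motif, as in the StarBase analysis above. "GGAG" is the EXO-motif reported for hnRNP A2/B1 in earlier exosome-sorting studies; whether it matches the StarBase prediction used here is an assumption, as is the dummy sequence:

```python
def motif_positions(seq: str, motif: str = "GGAG"):
    """Return 0-based start positions where the motif occurs."""
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

# Dummy RNA sequence for illustration; not the real miR-204-3p sequence
print(motif_positions("ACGUCCGGAGAUUGCGGAGC"))  # -> [6, 15]
```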
Hypoxia promoted the localization of hnRNP A2/B1 to the cytoplasm through upregulation of SUMOylation
It has been reported that exosome formation occurs in the cytoplasm, whereas hnRNP A2/B1 is mainly located in the nucleus [23,27]. If hnRNP A2/B1 mediates the packaging of miRNA into exosomes, this packaging process may depend on the transport of hnRNP A2/B1 from the nucleus into the cytoplasm. In addition, hypoxia increased hnRNP A2/B1-mediated exosome sorting of miR-204-3p. We therefore hypothesized that hypoxia mediates the localization of hnRNP A2/B1 to the cytoplasm to promote exosome sorting of miR-204-3p.

Cellular immunofluorescence staining of LN229 and U251 cells showed that the proportion of hnRNP A2/B1 in the cytoplasm increased under hypoxia (Fig. 5a and Supplementary Fig. 5a). It has been reported that SUMOylation plays an important role in hnRNP A2/B1-mediated exosome sorting of noncoding RNAs, but the mechanism is not clear. Interestingly, we found that anacardic acid (AA), an inhibitor of SUMOylation, inhibited the hypoxia-mediated cytoplasmic transport of hnRNP A2/B1 (Fig. 5a and Supplementary Fig. 5a). Considering that SUMOylation mediates biological processes including the nucleocytoplasmic localization of proteins, hypoxia may drive the movement of hnRNP A2/B1 into the cytoplasm through SUMOylation. Hypoxia is one characteristic of the glioma tumor microenvironment. To assess the SUMOylation level of hnRNP A2/B1 in glioma tissue, we collected glioblastoma tumor tissue and adjacent normal tissue. Western blotting showed that the SUMOylation level of hnRNP A2/B1 was increased in tumor tissue (Supplementary Fig. 5b), and miR-204-3p was downregulated in tumor tissue (Supplementary Fig. 5c).

qRT-PCR and Western blotting showed that the SUMOylation-associated gene UBC9 was upregulated under hypoxia (Fig. 5b). To verify whether upregulation of UBC9 promotes SUMOylation of hnRNP A2/B1, we overexpressed UBC9 from an overexpression plasmid in 293T cells; immunoprecipitation assays showed that the SUMOylation of hnRNP A2/B1 was enhanced after UBC9 overexpression (Fig. 5d). To confirm whether hypoxia mediates the movement of hnRNP A2/B1 into the cytoplasm, a Paris kit (Thermo Fisher, Cat. AM1921) was used to extract cytoplasmic and nuclear protein separately. Western blot analysis showed that hypoxia upregulated the SUMOylation of hnRNP A2/B1 and that SUMOylated hnRNP A2/B1 was mainly distributed in the cytoplasm (Fig. 5e). Next, we sought to identify the SUMOylated amino acid residue of hnRNP A2/B1. GPS-SUMO, a software tool for SUMOylation site prediction, indicated that arginine 108 could be the SUMOylation site of hnRNP A2/B1, and the NCBI database showed that this residue is conserved (Fig. 5f). We constructed Flag-labeled wild-type and mutant hnRNP A2/B1 plasmids, and immunoprecipitation showed that mutation of arginine 108 eliminated the SUMOylation of hnRNP A2/B1 (Fig. 5f). Cellular immunofluorescence staining showed that the distribution of the site-mutated hnRNP A2/B1 was not influenced by hypoxia (Fig. 5g and Supplementary Fig. 5d). These results showed that hypoxia promotes the movement of hnRNP A2/B1 into the cytoplasm by upregulating its SUMOylation, which occurs at residue 108.

Fig. 3 a Prediction of miR-204-3p targets. b qRT-PCR was used to detect the relative expression of miR-204-3p target candidates in LN229 and U251 cells after transfection with miR-204-3p mimics and inhibitor. c Dual-luciferase reporter assays were used to verify the target gene of miR-204-3p. d Western blot analysis showed that miR-204-3p mimics transfection downregulated CACNA1C, p-AKT, and p-ERK but had no significant influence on either AKT or ERK. e-g CCK8, plate cloning, and Western blotting showed that overexpression of CACNA1C rescued the miR-204-3p-induced proliferation inhibition and downregulation of the CACNA1C/MAPK pathway. h Live imaging showed that overexpression of CACNA1C rescued the miR-204-3p-induced inhibition of tumor growth. i Overexpression of CACNA1C reversed the prolonged survival period caused by miR-204-3p.
Hypoxia promoted the transcription of miR-204-3p by upregulating SOX9
As described above, intracellular miR-204-3p levels increased after blocking exosome sorting with small interfering RNAs targeting hnRNP A2/B1 under hypoxic conditions. This result suggested that hypoxia may promote the synthesis of miR-204-3p while, at the same time, promoting the exosome-sorting process to keep the tumor-suppressive miR-204-3p at a relatively low intracellular level. qRT-PCR showed that expression of pri-miR-204 (the precursor of miR-204-3p) increased under hypoxic conditions (Supplementary Fig. 6a), suggesting that hypoxia upregulates the transcription of miR-204-3p. The transcription of a miRNA is related to its position on the chromosome [28,29]. miR-204 lies in the intron between exon 1 and exon 2 of its host gene, TRPM3, and, according to a previous study, genes in intronic regions can be transcribed together with the host gene [29]. Bioinformatics analysis showed a positive correlation between the host gene TRPM3 and miR-204-3p in glioma patient tumor tissue (Supplementary Fig. 6b). JASPAR predicted that the transcription factor SOX9 binds the promoter region of TRPM3 (Supplementary Fig. 6c). qRT-PCR showed that siRNA knockdown of SOX9 downregulated miR-204-3p in both glioblastoma cells and their exosomes (Fig. 6a, b); the knockdown efficiency of si-SOX9 was confirmed by Western blotting (Supplementary Fig. 6d). Dual-luciferase reporter experiments and chromatin immunoprecipitation (ChIP) verified the interaction of SOX9 with the promoter region of TRPM3 (Fig. 6c, d). We then asked whether GBM cells upregulate SOX9 to promote the transcription of miR-204-3p. TCGA database analysis showed that SOX9 is positively correlated with HIF1α, a hypoxia-related protein, in glioma (Supplementary Fig. 6e). We also found that hypoxia upregulated SOX9 expression at both the mRNA and protein levels in GBM cell lines (Fig. 6e, f), and knockdown of SOX9 reversed the upregulation of miR-204-3p transcription under hypoxia (Fig. 6g). These results showed that hypoxia promotes the transcription of miR-204-3p by upregulating SOX9 in GBM.
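The dual-luciferase readout used above reduces to a per-well firefly/Renilla ratio; a minimal sketch with hypothetical triplicate values (what Fig. 6c-style comparisons report is the fold change, not the raw signal):

```python
import numpy as np

def relative_activity(firefly, renilla):
    """Per-well firefly signal normalized to the Renilla control."""
    return np.asarray(firefly, float) / np.asarray(renilla, float)

# Hypothetical triplicates: TRPM3 promoter reporter with and without SOX9
control = relative_activity([5200, 4800, 5100], [980, 1010, 955])
sox9 = relative_activity([9100, 8800, 9300], [1000, 990, 1020])
print(sox9.mean() / control.mean())  # fold activation by SOX9
```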
The SUMOylation inhibitor TAK-981 suppressed glioblastoma growth and angiogenesis
Considering that the exosome sorting of miR-204-3p plays an important role in both glioma proliferation and angiogenesis in the tumor microenvironment, and that this sorting process is mediated by SUMOylated hnRNP A2/B1, we hypothesized that blocking the SUMOylation of hnRNP A2/B1 is the key to inhibiting exosome sorting of miR-204-3p under hypoxia. TAK-981, a SUMOylation inhibitor, has been reported to have potential antitumor effects. In vitro experiments showed that TAK-981 downregulated both SUMOylated hnRNP A2/B1 and total hnRNP A2/B1 expression in glioblastoma cell lines (Fig. 8a). As SUMOylation can counter ubiquitination, the downregulation of total hnRNP A2/B1 may depend on enhanced ubiquitination-related degradation. TAK-981 also reduced the hypoxia-mediated exosome sorting of miR-204-3p in both LN229 and U251 cells (Fig. 8b). Considering the poor blood-brain barrier permeability of TAK-981, a subcutaneous murine glioblastoma model was used to study its antitumor effect. The results showed that TAK-981 suppressed the growth of glioblastoma (Fig. 8c). Immunohistochemistry verified that TAK-981 inhibited not only proliferation but also angiogenesis in glioblastoma (Fig. 8d). These results showed that TAK-981 inhibits glioblastoma growth by inhibiting the SUMOylation of hnRNP A2/B1.

DISCUSSION

The malignant behavior of GBM is related to its specific tumor microenvironment [33]. Tumor cells and other cells in the GBM tumor microenvironment can interact through various mechanisms, and exosomes play an important role in this process [34]. Among other cargo, exosomes contain nucleic acids and proteins. In our previous studies, noncoding RNAs in exosomes were shown to play an important role in promoting the formation of the GBM inhibitory immune microenvironment and maintaining the malignant phenotype of GBM cells [9,12,32,35]. Studies have shown that the process by which noncoding RNAs enter exosomes is not completely random, and some noncoding RNAs depend on specific mechanisms to enter exosomes; noncoding RNAs with shorter sequences, such as miRNAs, are more easily sorted into exosomes [36]. To study this sorting mechanism, cerebrospinal fluid exosomes and glioma tumor tissues were collected from patients, and data analysis showed that miR-204-3p tended to be sorted into exosomes. miR-204-3p inhibits tumor proliferation through the CACNA1C/MAPK pathway. We found that hypoxia played an important role in the transfer of miR-204-3p to exosomes. In a hypoxic environment, the upregulated transcription factor SOX9 increased the synthesis of miR-204-3p, yet the total content of miR-204-3p in tumor cells decreased: hypoxia increased the SUMOylation of hnRNP A2/B1 by upregulating UBC9 expression, SUMOylation promoted the transfer of hnRNP A2/B1 to the cytoplasm, and cytoplasmic hnRNP A2/B1 promoted the transfer of miR-204-3p to exosomes, ultimately decreasing miR-204-3p levels in cells. Exosomal miR-204-3p promoted tube formation by vascular endothelial cells through the ATXN1/STAT3 pathway. The SUMOylation inhibitor TAK-981 blocked the exosome sorting of miR-204-3p under hypoxia by preventing SUMOylation of hnRNP A2/B1, thereby inhibiting tumor growth and angiogenesis. The above process is summarized in a pattern diagram (Fig. 8e). This study revealed a new mechanism of hypoxia-induced exosome sorting, which provides a new direction for the development of drugs to treat GBM.

Hypoxia can induce malignant behavior in many tumors, but extreme hypoxia often leads to tumor cell death. The upregulation of miR-204-3p synthesis induced by hypoxia may be a normal physiological process under hypoxic pressure to regulate cell growth.

Fig. 5 Hypoxia promoted the distribution of hnRNP A2/B1 into the cytoplasm through upregulation of UBC9. a Immunofluorescence showed that the proportion of hnRNP A2/B1 in the cytoplasm increased under hypoxia, and this process could be inhibited by the SUMOylation inhibitor anacardic acid (AA) at 100 μM in U251 cells. b, c qRT-PCR and Western blotting showed that the SUMO-related gene UBC9 was upregulated under hypoxia in LN229 and U251 cells. d Immunoprecipitation showed that overexpression of UBC9 increased SUMOylation of hnRNP A2/B1. e Western blot analysis showed that SUMOylated hnRNP A2/B1 mainly localized to the cytoplasm and was upregulated under hypoxia. f Immunoprecipitation showed that mutating arginine 108 removed the SUMOylation of hnRNP A2/B1. g Immunofluorescence showed that mutant hnRNP A2/B1 could not move to the cytoplasm under hypoxia in U251 cells.
Fig. 6 Hypoxia promoted the synthesis of miR-204-3p by upregulating the transcription factor SOX9. a, b qRT-PCR showed that knockdown of SOX9 with small interfering RNA downregulated miR-204-3p in LN229 and U251 cells and exosomes. c Dual-luciferase reporter assays were used to verify SOX9 binding to the promoter region of miR-204-3p. d ChIP assay showed that SOX9 binds to the miR-204-3p promoter region. e, f qRT-PCR and Western blotting showed that SOX9 was upregulated under hypoxia. g qRT-PCR showed that knockdown of SOX9 inhibited the upregulation of pri-miR-204 under hypoxia.

However, GBM cells discharged the excess miR-204-3p into exosomes, maintaining a relatively low intracellular level of the tumor-suppressive miR-204-3p. miR-204-3p in exosomes promoted angiogenesis by vascular endothelial cells in tumors, which alleviated the stress caused by hypoxia. This may be a mechanism by which tumor cells adapt to the hypoxic environment. Other cargo in hypoxia-induced tumor exosomes may function via similar mechanisms, which may become a promising focus for exosome-related studies. The SUMOylation-induced localization of hnRNP A2/B1 to the cytoplasm may be related to nuclear pore structure; it has been reported that hypoxia also affects nuclear pore opening [37]. Whether hypoxia also influences the cytoplasmic distribution of hnRNP A2/B1 by opening the nuclear pore is still not clear and warrants further study. The formation of exosomes is a complex process; how hnRNP family members participate in it, and whether the SUMOylation of hnRNP molecules affects it, are also worthy of further study.

METHODS

Ethics
Experiments with animals were conducted under the guidelines of Qilu Hospital. Human CSF and tumor tissue samples were collected from glioma patients under a protocol approved by Qilu Hospital after written informed consent was obtained. This study was approved by the Ethics Committee on Scientific Research of Shandong University Qilu Hospital (approval number: KYLL-2018-324).

Cell culture
All cells were cultured in a humidified incubator containing 5% CO2 at 37°C. The hypoxia treatment group was cultured in a humidified incubator containing 5% CO2 and 1% O2 at 37°C.

Exosome extraction and storage
Normal FBS in culture medium was replaced with exosome-depleted FBS (FBS-exo) 48 h before exosome extraction; the precipitate was removed from the FBS by ultracentrifugation (100,000 × g, 12 h). Culture medium was first centrifuged at 300 × g for 10 min, 2000 × g for 10 min, and 10,000 × g for 30 min, and the precipitates were discarded. The supernatant was then centrifuged at 100,000 × g for 70 min, and the pellet was dissolved in PBS. All exosome samples were stored at −80°C if not used immediately.

RNase treatment
LN229 and U251 cells were cultured in DMEM with 10% FBS-exo. The supernatant was incubated with 5 U/μg RNase R (Epicenter Technologies, USA), or RNase R plus 0.1% Triton X-100, for 10 min at 37°C. The stability of miR-204-3p was analyzed by qRT-PCR.

HUVEC tube formation assay and migration assay
For the tube formation assay, 96-well plates were coated with 70 μl Matrigel (BD, lot: 356234) per well and incubated for 3 h at 37°C. HUVECs (5000 per well) were plated into the 96-well plates 48 h after transfection or coculture with exosomes. A microscope (Leica, DMi8) was used to capture images, which were analyzed with ImageJ using the Angiogenesis Analyzer function.
For the migration assay, HUVECs were cultured in serum-free ECM medium in the upper chamber of a 24-well transwell insert (Corning, 3422) without Matrigel coating, and 500 μl complete ECM medium was added to the bottom chamber. After 24 h, migrated HUVECs were fixed in 4% paraformaldehyde for 15 min and stained with 0.1% crystal violet solution (Sigma-Aldrich, HT90132). Five random fields of adherent cells were photographed with a light microscope (Leica, DM2500).

Immunoblotting and qRT-PCR
Immunoblotting and qRT-PCR were performed following a published protocol [38]. The antibodies used in immunoblotting are listed in Table 1, and the primers used in this study are listed in Table 2. Full and uncropped Western blots are included in the Supplementary Material.

5-Ethynyl-2′-deoxyuridine (EdU), cell counting kit (CCK)-8 assay and colony formation
The proliferation rate was assessed using the ratio of EdU-positive cells to total cells. CCK-8 (Dojindo, Japan) was used to measure cell proliferation according to the manufacturer's protocol. A microplate reader (Bio-Rad) was used to collect absorbance data at a wavelength of 450 nm. For colony formation, cells were cultured in 6-well plates at 2,000 cells/well. The cell culture medium was changed every 72 h. After 15 days, cells were fixed with 4% paraformaldehyde for 30 min and stained with crystal violet for 20 min at room temperature. Colonies were imaged and quantified with a microscope (Leica).

Luciferase reporter assays
Firefly luciferase reporters were transfected into 293T cells using Lipofectamine™ 3000 reagent (Thermo Fisher Scientific, USA) according to the manufacturer's protocol. The reporter genes CACNA1C-wt, CACNA1C-mut, ATXN1-wt and ATXN1-mut were synthesized by GeneChem (Shanghai, China). A Dual Luciferase Reporter Assay Kit (Promega) was used to perform the luciferase assay 48 h later. Firefly luciferase reporter activity was normalized to Renilla luciferase activity.

Coimmunoprecipitation
IP buffer (Pierce, Rockford, USA) was used to lyse cells 48 h after transfection. Samples were incubated with primary antibody (10 μg antibody per 1,000 μg protein) or equivalent IgG overnight at 4°C. The immunoprecipitated complexes were then incubated with protein A/G agarose beads (Pierce, Rockford, USA) at 37°C and eluted according to the manufacturer's protocol. Western blotting was used to detect target proteins.

RNA pull-down assays
Biotinylated miR-204-3p and its mutated sequence were purchased from RiboBio (GenePharma, Shanghai, China). 293T cells were transfected with biotinylated miR-204-3p or its mutated sequence using Lipofectamine™ 2000 reagent (Thermo Fisher Scientific, USA) according to the manufacturer's protocol. The cells were then lysed, and the biotinylated probe complexes in the lysates were captured with streptavidin-conjugated agarose magnetic beads (Thermo Fisher Scientific, Waltham, MA, USA) at room temperature. Western blotting was used to detect the hnRNP A2/B1 content.

RNA immunoprecipitation (RIP)
Flag-tagged full-length and partial hnRNP A2/B1 plasmids were synthesized by RiboBio (GenePharma, Shanghai, China). The plasmids were transfected into 293T cells using Lipofectamine™ 3000 reagent (Thermo Fisher Scientific, USA) according to the manufacturer's protocol. 293T cells were lysed with RIP lysis buffer containing RNase inhibitors 48 h after transfection. Then, cell lysates were incubated with beads coated with IgG or anti-Flag antibodies (Millipore) at 4°C overnight.
An RNeasy MinElute Cleanup Kit (Qiagen, Valencia, CA, USA) was used to extract RNA, and qRT-PCR was used to detect miR-204-3p.

Immunofluorescence (IF)
LN229 and U251 cells were seeded in 8-well chamber slides and transfected with plasmids using Lipofectamine™ 3000 reagent (Thermo Fisher Scientific, USA) according to the manufacturer's protocol; the control groups were treated with Lipofectamine™ 3000 reagent only. Cells were fixed with 4% paraformaldehyde for 15 min and permeabilized with 0.1% Triton X-100 for 10 min. Then, the cells were blocked with 5% BSA for 30 min at room temperature and incubated with primary antibody at 4°C overnight. After 3 washes with PBS, the cells were incubated with a fluorescent secondary antibody (DyLight 488, Thermo Fisher) for 1 h and then stained with 1 U/mL phalloidin (CA1670, Solarbio) for 20 min. A Leica SP8 confocal microscope (Leica Microsystems, Wetzlar, Germany) was used to capture images, and ImageJ was used to analyze the data.

Animal study
Four- to six-week-old male BALB/c-Nude mice (#D00521) were purchased from GemPharmatech (Wilmington, USA). Mice were housed in the Shandong University Animal Facility with a 12 h light/12 h dark cycle, temperatures of 20-23°C and 40-60% humidity. For the subcutaneously implanted glioma model, mice were randomly assigned to NC (n = 3) and TAK-981 (n = 3) groups. After anesthesia, mice were injected subcutaneously with 2 × 10⁶ glioma cells (LN229 or U251) into the right axilla. An iPhone XR (Apple, A2108) was used to take photographs at 7, 14, and 21 days after injection. TAK-981 (MCE, HY-111789) was diluted according to the instructions of MCE, and the experimental group was injected with 25 mg/kg TAK-981 (2.5 mg/ml) every 3 days after the injection of glioma cells.

Immunohistochemistry assay (IHC)
Brains or subcutaneous tumors isolated from the experimental mice were fixed with 4% paraformaldehyde for 24 h and then dehydrated in gradient sucrose solution. After paraffin embedding, tissues were cut into 4 μm thick sections. The sections were blocked with 10% goat serum and then incubated with antibodies (anti-Ki67, Cell Signaling, 9449S; anti-CACNA1C, Santa Cruz, sc-398433; anti-CD31, Cell Signaling Technology, 3528S) at 4°C overnight. Then, standard protocols with horseradish peroxidase-conjugated secondary antibodies and 3,3′-diaminobenzidine (DAB) were used. After hematoxylin counterstaining, sections were mounted and scanned using a microscope (Leica DM2500).

Flow cytometry
For the cell cycle assay in GBM cell lines, LN229 and U251 cells were harvested by trypsinization 48 h after the corresponding treatment, pelleted by centrifugation at 400 × g for 5 min, and stained with PI/RNase staining buffer (BD Biosciences, 550825) according to the manufacturer's instructions. Flow cytometry data were collected using a C6 flow cytometer (BD Biosciences). ModFit LT was used to analyze the cell cycle.

Chromatin immunoprecipitation (ChIP)
For the ChIP assay, a ChIP Assay Kit (Beyotime, China) was used to precipitate DNA bound by SOX9 according to the manufacturer's protocol. A PCR Clean Up Kit (Beyotime, China) was used to purify the precipitated DNA. qRT-PCR was used to detect target DNA sequences. Primers were purchased from GenePharma (Shanghai, China). Primer set 1 targeted the "TATACCCCTTTGTAAGCAACT" region of the TRPM3 promoter, and primer set 2 targeted the "GTAAAGCCATTGTTTTCAGGT" region of the TRPM3 promoter. The sequences of the ChIP primer sets are listed in Table 2.
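Enrichment at these promoter regions was quantified by qRT-PCR. As an illustration of how such ChIP-qPCR readouts are commonly evaluated, the sketch below applies the percent-input method; the paper does not state which normalization was used, and the input fraction and Ct values shown are hypothetical.

```python
# Illustrative ChIP-qPCR enrichment calculation using the percent-input
# method. The input fraction (1%) and the Ct values are hypothetical and
# are shown only to make the arithmetic explicit.
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Return ChIP enrichment as percent of input chromatin.

    ct_ip          -- Ct of the immunoprecipitated sample
    ct_input       -- raw Ct of the input sample (before dilution adjustment)
    input_fraction -- fraction of chromatin saved as input (0.01 for 1%)
    """
    # Adjust the input Ct for the fraction of chromatin it represents:
    # a 1% input corresponds to log2(1/0.01) ~ 6.64 cycles.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical values for primer set 1: SOX9 pull-down vs IgG control.
print(percent_input(ct_ip=26.5, ct_input=30.0))  # SOX9 ChIP, ~11% of input
print(percent_input(ct_ip=31.8, ct_input=30.0))  # IgG background, ~0.3%
```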
Statistical analysis
GraphPad Prism 9 (GraphPad Software Inc., CA, USA) was used to perform statistical analyses. Student's t test was used to analyze comparisons between two groups. The Wilcoxon test and one-way ANOVA were used for nonparametric and parametric data, respectively. The equality of variances between groups was also assessed statistically. The significance of differences is marked in the figures: P > 0.05, not significant (ns); P < 0.05, statistically significant (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001).

DATA AVAILABILITY
The miRNA sequencing data for our samples that support the findings of this study are available from the corresponding author upon request. The processed data are available from the corresponding author upon reasonable request. All original data underlying the selected data shown in the figures and supplemental figures are available from the corresponding authors upon reasonable request. Source data are provided with this paper.
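As a minimal illustration of the significance-labelling convention described in the statistical analysis section, the sketch below maps two-sided p-values onto the star annotations used in the figures. The replicate values are hypothetical, and scipy is assumed here as the analysis backend (the authors used GraphPad).

```python
# Map p-values to the star annotation convention stated above. The data are
# hypothetical; scipy.stats.ttest_ind is used for a two-group comparison
# (f_oneway would be the analogue for a multi-group parametric comparison).
from scipy import stats

def significance_label(p: float) -> str:
    """Return the figure annotation for a given two-sided p-value."""
    if p >= 0.05:
        return "ns"
    # Check the strictest threshold first.
    for threshold, label in [(1e-4, "****"), (1e-3, "***"), (1e-2, "**"), (5e-2, "*")]:
        if p < threshold:
            return label

group_a = [0.92, 1.05, 0.98]   # hypothetical replicate measurements
group_b = [1.44, 1.58, 1.51]
t_stat, p = stats.ttest_ind(group_a, group_b)
print(p, significance_label(p))
```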
Quantifying the temporal variation of the contribution of fine sediment sources to sediment yields from Chilean forested catchments during harvesting operations

Fingerprinting techniques were incorporated into a paired catchment investigation in southern Chile to quantify the contribution of three fine sediment sources (catchment surfaces, forest roads and stream channels) to catchment suspended sediment yields during forest harvesting and replanting operations. Optimum composite fingerprints for use in sediment source discrimination and apportionment comprised 137Cs and 210Pbex for the control catchment (LUC) throughout the study and for the treatment catchment (LUT) during the pre-harvest period, and 137Cs and soil organic matter during harvest and post-harvest periods for LUT. Prior to harvesting, the dominant sediment source to the sediment load in both catchments was the stream channel and remained relatively constant throughout the study for LUC. For the entire study period the total suspended sediment yield from LUT (3,160 kg ha⁻¹) approximately doubled that from LUC (1,650 kg ha⁻¹). Most of this difference is accounted for by the increase in sediment output during the rainy months following clearcutting. The disturbance associated with forest operations in LUT caused the contributions to the load from the catchment slopes and forest roads to increase markedly (total contributions 835 and 795 kg ha⁻¹, respectively). However, the total contribution from the stream channel for LUT during the study period (1,530 kg ha⁻¹) remained similar to that from LUC. The results of the investigation demonstrated that any attempt to reduce sediment loading from forest harvesting would require adopting best management practices to reduce sediment mobilization from catchment surfaces and forest roads.

INTRODUCTION

Increased sediment mobilization and delivery to streams associated with forest logging change the physicochemical and biological properties of adjacent aquatic ecosystems (Buttle 2011). Studies from around the world aimed at establishing the magnitude of the increase in suspended sediment yield after forest harvesting have mainly been based on traditional catchment experiments (Buttle 2011). Bathurst and Iroumé (2014), using existing data sets, conclude that maximum post-logging sediment yields are up to an order of magnitude higher than those associated with pre-logging periods. However, Buttle (2011) comments that harvesting alone might not substantially contribute to increased sediment fluxes, and Sidle et al. (2004) indicate that the primary sediment sources commonly consist of logging roads, road crossings and skidder trails, rather than the catchment surface. Differences in the quantification of post-logging effects for the different studies arise because many factors influence sediment mobilization, including logging practices, the level of connectivity between the harvested area and the stream network, the quantity, type and management of forest residues, the width of protected riparian areas and several other site characteristics (Gayoso 2015). Although numerous experimental catchment studies have been undertaken to assess the effects of forest logging on sediment yields, to date, few have explicitly established the source of the sediment contributing to the increased flux.
The development of sediment source fingerprinting or tracing techniques has provided new opportunities to obtain such information, representing a direct means of establishing and apportioning the source of fine sediment transported by a stream. It involves assembling information on the physical and chemical properties of fine sediment collected at the outlet of a catchment and comparing these properties with those of potential sources (Walling 2005, Collins et al. 2017, Rachels et al. 2020). The success of this approach depends heavily on the selection of several sediment properties, which can clearly discriminate among the potential sources and thereby establish their contribution to the downstream sediment flux. A wide range of sediment properties have been used for this purpose, including major and minor elements (Walling 2005, Collins et al. 2017, Rachels et al. 2020), radionuclides (Matisoff et al. 2002, Walling 2005, Schuller et al. 2013), stable isotopes (Douglas et al. 2003) and isotopes associated with the organic fraction of the sediment (Bravo-Linares et al. 2018). Selection of the best fingerprint properties commonly involves the use of statistical techniques to assess the ability of individual properties to discriminate potential sources, and a mixing or unmixing model to estimate the relative contribution of these sources. The objective of this investigation undertaken in southern Chile is to assess the impact of forestry operations on sediment yields, through the innovative use of fallout radionuclides as sediment source fingerprints to inform on the apportionment of sediment sources. The hypothesis is that in regions of the Southern Hemisphere characterized by frequent atmospheric washout, the deposition of anthropogenic and geogenic fallout radionuclides can provide optimal concentrations for use as fingerprints to investigate the temporal variation of both the relative and the total source contributions to the total fine sediment catchment outputs. This investigation builds upon Schuller et al. (2013), extending the observation period and investigating the temporal variation of both relative and total source contributions to the total fine sediment output in a forest paired catchment study. Results provide important information to support the implementation of cost-effective control measures.

METHODS

Study catchments. A paired catchment investigation was undertaken in the coastal mountains of southern Chile (figure 1A). The two catchments, located 1 km apart, were designated Los Ulmos control (LUC) and Los Ulmos treatment (LUT). Soils were red clays originating from old volcanic ashes deposited on the coastal metamorphic complex (Iroumé et al. 2006). Further details concerning the characteristics of these catchments and their forest cover are provided by Schuller et al. (2013); beyond the plantation cover, the remaining surface corresponds to riparian native vegetation (13.4 %) and unpaved dirt roads (2.6 %). Trees were hand-felled with chainsaws and logged to landings located outside the catchment using rubber-tyred skidders. Mean and maximum yarding distances were 66 and 170 m, respectively, and skid trails were created by yarding. Clearcutting occurred between 01 and 15.04.2010 (table 1), shortly before the May-August rainy season. Over the past 50 years, this period has typically accounted for ca. 60 % of the total annual rainfall for the study area. During the harvest period, no sediment control measures were implemented.
The adoption of best management practices is not compulsory in the country, except for maintaining in place the riparian vegetation along the drainage network. LUT was replanted with E. nitens during the spring (October) of 2011 (table 1). The LUC catchment was not disturbed during the study.

Monitoring of precipitation, discharge and suspended sediment. Continuous rainfall records were generated using two Hobo tipping-bucket gauges with a resolution of 0.257 mm, located halfway between the catchments. Streamflow was measured using Thompson-type V-notch weirs equipped with data loggers to record water stage at 3 min intervals with an accuracy of ±2 mm. The suspended sediment load was documented at each flow measuring station using a flow-proportional water sampling procedure (Huber et al. 2010). Water samples, with a volume proportional to the discharge, were collected every 8 hours using an automatically operated electric pump and added to a bulk sample stored in a tank. Every 7 or 15 days during the winter (wet) and summer (dry) periods, respectively, a representative sample of the water and sediment stored in the tank was collected and filtered to obtain the discharge-weighted mean suspended sediment concentration for the sampling period. The suspended sediment load for each sampling period was calculated as the product of this mean suspended sediment concentration and the mean discharge for the period (the unit bookkeeping involved is illustrated schematically below). The tank was emptied and cleaned in readiness for the collection of the subsequent bulk sample.

Figure 1. A) Location of the study site within Chile; B) and C) the two catchments and the distribution of sampling points within the catchments.

Sampling potential suspended sediment sources and the sediment output from the catchments. The three potential suspended sediment sources (catchment surface or forested slopes, forest roads and tracks, and stream channels) and the sampling program for characterizing the fingerprint properties of the potential sources and the sediment output from the catchments were described in Schuller et al. (2013). In both catchments, multiple composite samples of source material were collected from the surface (0-1 cm) of the potential sources at areas with good connectivity to the streams, to characterize the spatial variability of the fingerprint properties associated with the individual sources. The distribution of sampling points is shown in figures 1B and C for LUC and LUT, respectively. Samples representative of the catchment surfaces were collected from areas of the forested (or harvested/replanted) slopes using a grid pattern. Samples from forest roads and stream channels were collected along these features from locations which provided evidence of active erosion and sediment mobilization. In the case of the samples collected from forest roads, these aimed at being representative of material that would be mobilized during storm events and therefore included material from road verges and adjacent cut slopes. Along stream channels, composite samples of surface material were collected from the full vertical extent of bank profiles and the stream bed. The composite source material samples, collected from each sampling location, typically comprised 1-2 kg of material, providing enough mass for subsequent laboratory analyses. The timing of the sediment source sampling campaigns is summarized in table 1.
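The following minimal sketch illustrates the unit bookkeeping behind the load calculation described above for a single sampling period. The catchment area, concentration, discharge and period length are hypothetical values chosen for illustration, not data from the study.

```python
# Illustrative calculation of the suspended sediment load for one sampling
# period: discharge-weighted mean concentration x mean discharge x period
# length, with explicit unit conversions. All numbers are hypothetical.

SECONDS_PER_DAY = 86_400
CATCHMENT_AREA_HA = 35.0          # assumed catchment area, not from the text

def period_load_kg(mean_conc_mg_per_l: float,
                   mean_discharge_l_per_s: float,
                   period_days: float) -> float:
    """Suspended sediment load (kg) for one sampling period."""
    # mg/L * L/s * s = mg; divide by 1e6 to convert mg to kg.
    mg = mean_conc_mg_per_l * mean_discharge_l_per_s * period_days * SECONDS_PER_DAY
    return mg / 1e6

load = period_load_kg(mean_conc_mg_per_l=50.0,    # e.g. a pre-harvest value
                      mean_discharge_l_per_s=4.0,
                      period_days=7.0)            # 7-day wet-season interval
print(f"load: {load:.0f} kg ({load / CATCHMENT_AREA_HA:.1f} kg ha-1)")
```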
The source material sampling was repeated in LUT after the completion of harvesting operations and prior to replanting, to characterize the catchment in its disturbed condition. Composite samples collected from potential sources were analyzed individually to ensure that the results obtained were representative of the spatial variability of the properties of the three potential sources. Two time-integrating trap samplers were installed in the upper part of each weir pool to collect bulk samples of suspended sediment to be used as target samples in the source fingerprinting study (Schuller et al. 2013). The sediment collected by the trap samplers was retrieved at approximately monthly intervals (table 2) and was supplemented by fine sediment collected from the beds of the weir ponds. A summary of the target sediment sampling program in relation to precipitation and measured sediment load for the individual sampling periods is presented in table 2.

Selection of fingerprint properties. Based on Walling (2005) and Matisoff et al. (2002), attention focused on the use of the fallout radionuclides caesium-137 (137Cs, half-life 30.2 y) and excess lead-210 (210Pbex, half-life 22.2 y) as fingerprints. The presence in surface soil of anthropogenic 137Cs in the studied area predominantly reflects global fallout from the atmospheric testing of thermonuclear weapons from 1952 to the mid-1980s. In contrast, the fallout of geogenic 210Pbex can be viewed as essentially continuous over time at a specific site (Appleby and Oldfield 1978). Fallout 137Cs and 210Pbex are strongly and rapidly adsorbed by exchange sites in the surface soil (He and Walling 1996). In undisturbed soils, such as some forest soils, the occurrence of 137Cs and 210Pbex is typically characterized by an exponential depth distribution, due to their fallout origin and limited post-fallout downward movement in soil. The maximum 137Cs concentration is commonly found slightly below the surface in undisturbed soil, because of the cessation of significant fallout in the 1980s and the very slow downward migration of the peak activity (Schuller et al. 1997, Walling 2013). In the case of 210Pbex, the peak activity is normally found at the surface, due to the ongoing fallout receipt. Caesium-137 and 210Pbex therefore provide valuable fingerprints for distinguishing surface and subsurface sediment sources. In this study, the catchment surface can be expected to be characterized by significant activities of both radionuclides, whereas forest roads and channel banks are likely to be characterized by lower or zero activities. However, surface sediment can accumulate fresh 210Pbex fallout if it subsequently remains undisturbed for a considerable period. Because of likely contrasts in soil organic matter (SOM) content among surface soils, road surfaces and channel banks (Ritchie et al. 2007), the SOM concentration associated with source material samples was included as a possible fingerprint property. The naturally occurring environmental radionuclides potassium-40 (40K, half-life 1.28 × 10⁹ y) and radium-226 (226Ra, half-life 1,622 y) were also included as possible fingerprints to investigate their potential to discriminate different source materials, because they can be determined by the analytical procedures used for 137Cs and 210Pbex.

Measurement of the fingerprint properties of source material and target samples.
Since radionuclide activities are grain size dependent (He and Walling 1996, Walling 2005), attention was directed to the <63 µm fraction when determining the fingerprint properties of potential source materials and the target samples of fine sediment. This facilitated direct comparison of target and source samples, and no additional particle size correction was applied (Schuller et al. 2013). Target samples were dewatered by vacuum filtration through MFS ADVANTEC GC 50 glass fiber filters with a 1.2 μm pore size. The dewatered fine sediment target samples collected from catchment outlets and all source material samples were air-dried, oven-dried at 40°C, disaggregated and sieved to <63 μm prior to analysis. For radionuclide analyses, an aliquot of ca. 80 g of the <63 μm fraction of each source and target sample was sealed in a Petri dish and stored for at least 3 weeks prior to radiometric assay, to ensure equilibrium between 226Ra and its short-lived, easily detected gamma-emitting daughter 214Pb. The mass activity densities (activities) of 137Cs, 210Pb, 40K and 226Ra were determined by gamma spectrometry, using an ORTEC high-resolution, extended range Ge detector with 53 % relative efficiency, coupled to a PC-based digital analyzer system employing ORTEC GammaVision software. The detector was calibrated for the same sample geometry, with standards characterized by a bulk density and grain size similar to those of the analyzed samples and prepared using certified standard solutions type QCYB400 and type QCYB410 provided by Eckert and Ziegler Nuclitec GmbH. Count times were more than 72,000 s per sample, providing results with an analytical precision of ca. ±10 % at the 95 % level of confidence. The organic carbon content (SOC) of the <63 μm fraction was measured by organic matter oxidation in a sodium dichromate (Na₂Cr₂O₇) and sulphuric acid (H₂SO₄) solution. After 24 h, the chromate reduction was calculated by measuring supernatant absorbance at a 600 nm wavelength with a spectrophotometer. SOM was estimated from the SOC content using a Sprengler coefficient of 1.724.

Table 2. The timing of fine sediment collection from catchment outlets and the precipitation and sediment load associated with each sampling period for LUC and LUT. The relative contribution of the three sources to the target sediment samples collected from both catchments and the values of relative mean error (RME) for a comparison of estimated and measured fingerprint concentrations in each target sample.

Source fingerprinting and source ascription. The relative contributions of the potential sediment sources to target samples representative of the total fine sediment load for specific sampling intervals were estimated using a standard sediment source fingerprinting approach described in detail by Schuller et al. (2013). In brief, the first step involved comparing the fingerprint properties of individual target sediment samples collected at catchment outlets with the equivalent mean values for the properties of the three potential sources, to ensure that the former fell within the range of the latter.
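A minimal sketch of this 'range test' is given below, assuming hypothetical mean property values for the three sources and one target sample; the property names follow the study, but the numbers are illustrative only.

```python
# Illustrative 'range test': a fingerprint property is retained only if the
# value measured in a target sediment sample lies within the range spanned
# by the mean values of the three potential sources. Values are hypothetical.

source_means = {                       # mean property value per source
    "137Cs":   {"surface": 12.4, "roads": 1.1,   "channel": 2.3},
    "210Pbex": {"surface": 55.0, "roads": 6.2,   "channel": 9.8},
    "40K":     {"surface": 310.0, "roads": 420.0, "channel": 395.0},
}
target = {"137Cs": 4.6, "210Pbex": 18.1, "40K": 275.0}

def passes_range_test(prop: str) -> bool:
    values = source_means[prop].values()
    return min(values) <= target[prop] <= max(values)

retained = [p for p in source_means if passes_range_test(p)]
print(retained)   # here 40K falls outside the source range and is excluded
```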
Only properties which passed this test were used in subsequent analyses. Secondly, the discriminatory power of the fingerprint properties was tested statistically using the non-parametric Kruskal-Wallis H test, to compare the property values associated with the individual samples collected from the three different potential sources. Only properties which demonstrated a significant difference between the sources were selected. A multiple discriminant function analysis was subsequently employed to select the optimum composite fingerprint set to be used in source apportionment, from those properties identified as possible fingerprints in the first stage. The relative contributions of the potential sources to a target sediment sample were estimated using a multivariate mixing model that was optimized by minimizing an objective function reflecting the difference between the observed and predicted property concentrations of the sediment (Walling and Collins 2000). The objective function, equation [1], used for the optimization was:

$$f = \sum_{i=1}^{n} \left( \frac{C_i - \sum_{s=1}^{m} P_s S_{si}}{C_i} \right)^2 \qquad [1]$$

where $C_i$ is the concentration of the fingerprint property $i$ in the time-integrated suspended sediment sample, $P_s$ the optimized relative contribution from source $s$, $S_{si}$ the mean concentration of the fingerprint property $i$ in source $s$, $n$ the number of fingerprint properties comprising the optimum composite fingerprint and $m$ the number of sediment sources. The mixing model assumes that the fingerprint properties are conservative, so that the properties of the suspended sediment directly reflect those of its sources and comprise material only from the identified sources. The result is conditioned by two requirements: the relative contribution of each source must be non-negative ($0 \le P_s \le 1$), and the contributions of all sources must sum to unity ($\sum_{s=1}^{m} P_s = 1$). The uncertainty introduced by the spatial variability of the properties of a given source, and the need to represent this concentration as a single mean value (i.e., $S_{si}$) in the mixing model, was considered by using a Monte Carlo procedure to introduce different possible values of $S_{si}$, derived using the standard error of the mean values, into the mixing model. This Monte Carlo procedure involved 50,000 iterations, and the resulting estimates of the contribution of individual sources to the target sediment sample were characterized by the mean value and its 95 % confidence limits. Mean contributions were calculated for each target sediment sample and linked to the individual sampling periods represented by the time-integrated suspended sediment samples. The goodness of fit provided by the multivariate mixing model was tested by comparing the measured fingerprint property concentrations for target sediment samples with the corresponding values predicted by the optimized mixing model. For this purpose, the relative mean error (RME) for each target sample was calculated using equation [2] and expressed as a percentage:

$$\mathrm{RME} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{C_i - C_i^{*}}{C_i} \right| \qquad [2]$$

where $C_i^{*}$ is the concentration of fingerprint property $i$ predicted by the optimized mixing model. (A schematic implementation of the optimization, the Monte Carlo procedure and the RME calculation is sketched below.) Estimates of the total mass of sediment contributed by each of the three potential sources during each observation period were obtained by coupling the relative contribution of the three potential sources with information on the total sediment load associated with each sampling period.

RESULTS

The impact of catchment disturbance on sediment yield. Rainfall and sediment loads documented for LUC and LUT during successive intervals of the overall study are summarized in table 2. Figure 2 also presents the measured rainfall, runoff and sediment load for LUC and LUT for the successive individual sampling intervals.
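The sketch below is a schematic implementation of the constrained optimization (equation [1]), the Monte Carlo treatment of source variability and the RME goodness-of-fit measure (equation [2]) described in the methods above. It is a minimal illustration: the fingerprint values, the standard errors and the 5,000 iterations used here (the study used 50,000) are hypothetical.

```python
# Schematic source-apportionment: minimize equation [1] subject to the
# non-negativity and unit-sum constraints, with Monte Carlo perturbation of
# the source means and an RME goodness-of-fit check (equation [2]).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical two-property composite fingerprint (e.g. 137Cs, 210Pbex).
C = np.array([4.6, 18.1])                 # concentrations in the target sample
S_mean = np.array([[12.4, 55.0],          # catchment surface
                   [1.1, 6.2],            # forest roads
                   [2.3, 9.8]])           # stream channel
S_se = 0.10 * S_mean                      # standard errors of the source means

def objective(P, S):
    """Equation [1]: sum of squared relative differences between the
    measured and predicted fingerprint concentrations."""
    return float(np.sum(((C - P @ S) / C) ** 2))

def solve(S):
    """Minimize equation [1] subject to 0 <= P_s <= 1 and sum(P_s) = 1."""
    m = S.shape[0]
    res = minimize(objective, x0=np.full(m, 1.0 / m), args=(S,),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda P: P.sum() - 1.0}])
    return res.x

# Monte Carlo: perturb the source means within their standard errors and
# characterize the contributions by their mean and 95 % confidence limits.
draws = np.array([solve(np.clip(rng.normal(S_mean, S_se), 0.0, None))
                  for _ in range(5000)])
P_hat = draws.mean(axis=0)
p_lo, p_hi = np.percentile(draws, [2.5, 97.5], axis=0)

# Equation [2]: relative mean error (percent) of the optimized fit.
rme = 100.0 / len(C) * np.sum(np.abs(C - P_hat @ S_mean) / C)

for name, p, lo, hi in zip(["surface", "roads", "channel"], P_hat, p_lo, p_hi):
    print(f"{name}: {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"RME: {rme:.1f} %")
```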
The runoff response of both catchments was similar during the entire study period, with no detectable runoff increases in LUT after clearcutting (figures 2C, D and 3A). However, the sediment output from LUT (figure 2F) provides clear evidence of an increase after forest harvesting, when compared to that from LUC (figure 2E). For the period 01.10.2009 to 19.03.2010, which preceded the commencement of forest harvesting in LUT, the total sediment output from LUT (263 kg ha⁻¹) was slightly less than that from LUC (358 kg ha⁻¹). The total sediment output from LUT for the 4.5-month wet period (15.04.2010 to 01.09.2010, recorded rainfall of 1,320 mm) following the forest harvesting was 1,700 kg ha⁻¹, whereas the value for the equivalent period for LUC was 538 kg ha⁻¹, indicating that during these 4.5 months following forest harvesting the fine sediment output from LUT was more than three times that of LUC. Considering essentially the same 4.5-month observation period for the subsequent year (11.04.2011 to 31.08.2011), rainfall (1,250 mm) was slightly less than that for 2010 (1,320 mm), although it was more evenly distributed throughout the rainy season (figures 2A and B). In 2011, the sediment output from LUC during this period was 316 kg ha⁻¹ and therefore, as might be expected, less than that for 2010 (538 kg ha⁻¹). This difference reflects the lack of periods with very high rainfall totals and the slightly lower rainfall total for 2011. However, the sediment output from LUT (633 kg ha⁻¹) remained higher than that from LUC, although the increase was reduced to ca. 2-fold. During the two observation periods following replanting (extending from 11.10.2011 to 12.12.2011, table 2), the total sediment yields from LUC and LUT were similar (54 and 63 kg ha⁻¹, respectively). Figure 3B presents the cumulative daily sediment loads of both catchments. It corroborates the interpretation of the behavior of the treatment and control catchments presented above. Considering the overall monitoring period, the sediment output from LUT (~3,160 kg ha⁻¹) approximately doubled that from LUC (~1,650 kg ha⁻¹), with this increase being accounted for primarily by the increased sediment yield from LUT during the 4.5 months after harvesting.

Assessment of the discriminatory power of the included fingerprint properties and selection of the optimum composite fingerprints for source apportionment. Application of the non-parametric Kruskal-Wallis test and multiple discriminant function analysis indicated that the optimum composite fingerprint sets for source discrimination and apportionment estimation comprised 137Cs and 210Pbex for LUC throughout the study period and for LUT pre-harvest. Caesium-137 and SOM provided the optimum composite fingerprint for LUT during the post-harvest period. The composite fingerprints successfully classified 83.3 % and 82.7 % of the source material samples collected for LUC and LUT pre-harvest, respectively, and 71.9 % of the samples collected from LUT post-harvest.

Figure 2. Total precipitation (A and B), runoff (C and D) and sediment load (E and F) for each successive sampling period in LUC and LUT, respectively. For catchment LUT, the left and right vertical arrows mark the dates of completion of harvesting and replanting, respectively. The black horizontal lines overlaid on the time axis show the typical rainy-season periods at the study site, based on long-term records.
Figure 3. A) Double mass curves of daily runoff from the two study catchments. B) Double mass curves of daily fine sediment loads from the two study catchments. In A) and B), the black and white dots mark the dates of completion of harvesting and replanting in LUT, respectively.

The goodness of fit between the measured fingerprint property concentrations of the target samples and the corresponding concentrations predicted by the mixing model, as indicated by the relative mean error (RME), is also listed for each target sample in table 2. With a maximum value of ~25 %, ca. 85 % of the values <15 % and ca. 66 % of the values <10 %, the RME values are considered acceptable.

Relative source contributions. Information on the temporal variation of the relative contributions of the three potential sources to the target samples collected from the two study catchments and their associated loads is presented in table 2 and in figures 4A and B. Considering the pre-logging period, both catchments are characterized by similar relative source contributions, with 70-90 % of the sediment output coming from stream channels. The catchment surface represents the next most important source, contributing ca. 15 % of the sediment output. The forest roads represent a minor, but nevertheless significant, sediment source in both catchments, contributing about 10 and 1 % of the sediment output from LUT and LUC, respectively. Relative source contributions from LUC remained essentially constant during the remainder of the study period, although roads assumed greater importance during the wet seasons of both 2010 and 2011. The contribution from roads also increased during the summer of 2010-2011. In the case of LUT, disturbance of the catchment by clear felling at the beginning of the wet season caused major changes in the relative importance of the three sediment sources. During the sampling period from 19.03 to 15.04.2010, which included the clearcutting operations, the relative contribution from the stream channel decreased markedly to ~20 %, and the relative contributions from both the catchment surface and the forest roads significantly increased, from ~15 to ~55 % and from ~10 to ~25 %, respectively.

Figure 4. Temporal variation of the relative contribution (A and B), the total magnitude of the contribution (C and D) and the cumulative sediment load (E and F) from the three potential sources to the time-integrated samples collected at the outlets of catchments LUC (left) and LUT (right) during the study period. The arrows on the left and right of B), D) and F) mark the completion of harvesting and planting operations, respectively.

During the two months following the completion of clearcutting in LUT (15.04.2010 to 09.06.2010), the importance of the catchment surface and forest roads as sediment sources declined markedly, returning to relative contributions similar to those associated with the pre-harvest period. Subsequently, during the period extending from 09.06.2010 to 31.08.2011, the contributions from both slopes and forest roads again increased and dominated the sediment output from the catchment.
The contribution from the stream channel was reduced to ~2 % during a prolonged 7-month period extending from 24.11.2010 to 20.06.2011. However, after 20.06.2011, the importance of the channel as a sediment source was progressively restored to pre-harvest conditions. There is some evidence of the impact of replanting activity during this period, with increased contributions from the catchment surface and forest roads, which are likely to have been disturbed by this activity.

The magnitude of source contributions. Figures 4C and D provide information on the temporal variation of the absolute magnitude of the sediment contributions from the three sources to the sediment output during the study period. The results for LUC (figure 4C) emphasize the importance of stream channels as the primary source of sediment exported from the catchment. This can be seen as indicative of sediment mobilization and delivery from the study area under essentially natural or undisturbed conditions. The results for LUT (figure 4D) highlight the markedly increased contributions from forest roads and the catchment surface after clearcutting, with these enhanced contributions continuing until the completion of the forest planting. In addition, the sizeable increase in the contribution of sediment from channel sources during the first, third and fourth observation periods following clearcutting (figure 4D) may reflect channel disturbance by harvesting activities and increased winter storm discharges passing down the channels. Figures 4E and F provide a useful summary of the changes in sediment output from LUT caused by the forest harvesting and replanting operations. Overall, considering the entire study period, forest operations caused the total sediment output from LUT (3,160 kg ha⁻¹) to be approximately double that from LUC (1,650 kg ha⁻¹). This increase is coupled with major changes in the amounts of sediment contributed by the different sources. Under 'natural' conditions stream channels are the major sediment source, contributing ~1,390 kg ha⁻¹ during the study period (figure 4E), which represents ca. 84 % of the total sediment export. Forest harvesting in LUT caused the contributions from catchment slopes and forest roads to increase markedly, with each of these sources contributing ca. 25 % (i.e., 835 and 795 kg ha⁻¹, respectively) of the total sediment output from this catchment (figure 4F). However, the total contribution from channel sources for LUT during the study period (1,530 kg ha⁻¹) remains similar to that from LUC (1,390 kg ha⁻¹, figures 4E and F).

DISCUSSION

Catchment disturbances and impacts on runoff and sediment yield. The calibration period, during which the two catchments were monitored jointly prior to the disturbance of LUT, was limited to six months. The short calibration period, and the fact that it did not include the season characterized by intense rainfall, must be seen as a limitation of the study. However, it did include several events with substantial rainfall (figures 2A and B). The available data confirmed the essentially similar sediment response of the two catchments during the calibration period (figures 2E, F and 3B). The extension of the monitoring to include a period when LUT could be expected to have largely recovered from the disturbance caused by forest harvesting, and when the specific sediment yields of the two catchments were again very similar, provided further confirmation of the similarity of the response of the two catchments.
There were no detectable runoff increases in LUT after clearcutting (see figure 3A). This was somewhat unexpected, since many paired catchment investigations have reported runoff increases after forest harvesting (Brown et al. 2005). However, limited changes or delayed runoff increases have also been reported by other paired catchment studies (David et al. 1994), and this behavior might confirm the suggestion of McDonnell et al. (2018) that "factors influencing the control variables on sustained annual water yield in forested headwaters are not well understood" when calling for better consideration of underground water storage. The sediment output from LUT (figure 2F) provides clear evidence of an increase after forest harvesting, when compared to that from LUC (figure 2E). Such behavior is well documented globally (Gayoso 2015). As clearcutting did not generate increases in runoff, the increase in the sediment loads from LUT was due to increases in suspended sediment concentrations, which were higher during the 4.5-month period immediately after harvesting (mean 305 mg l⁻¹, range 11-1,638 mg l⁻¹) than during the control period (mean 50 mg l⁻¹, range 14-104 mg l⁻¹). Comparing the 4.5-month observation period immediately after harvesting with a similar 4.5-month period for the subsequent year, the sediment output from LUT was three times higher than that from LUC in the first period, and the increase was reduced to ca. 2-fold in the latter period. This fact is relevant when examining the long-term effect of harvesting on sediment movement. The reduction in the magnitude of the increase in sediment output from LUT, when compared to LUC, during the second wet season (i.e., one year after harvest) reflects some degree of stabilization of the catchment sediment source areas following logging. However, it is also seen as demonstrating delayed export of sediment, possibly held in storage, during the subsequent rainy season. During the two observation periods following the completion of replanting (11.10.2011 to 12.12.2011, table 2), the total sediment yields from LUC and LUT were similar, a fact that may reflect the limited rainfall (142 mm) during these periods, although it does not exclude the possibility of a delayed increase in the sediment yield from LUT during subsequent rainy periods, as also observed after logging. The cumulative daily sediment loads of both catchments (figure 3B) emphasize the short-lived nature of the increase in sediment yield associated with the wet season following forest harvesting and the return to a similar response from both catchments by the time replanting was completed.

The discriminatory power of the fingerprint properties included in the study. The conservative nature of the individual fingerprint properties was tested using the 'range test'. Potassium-40 and 226Ra, which constitute intrinsic properties of the soil, failed this test and were therefore not included in the subsequent statistical tests for identifying the optimum composite fingerprints. The inclusion of SOM confirms the ability of organic matter content to discriminate among material from catchment surface, forest road and river channel sources, as reported by Ritchie et al. (2007).

Source contributions. Considering the pre-logging period, the forest roads represent a minor (less than 10 % of the sediment output) but nevertheless significant sediment source in both catchments.
Relative source contributions from LUC remained constant during the rest of the study period, although roads assumed higher importance during the wet seasons of both 2010 and 2011, when the road surfaces are likely to have been subject to increased surface runoff and erosion. The contribution from roads also increased during the summer of 2010-2011, suggesting that the heavy rainfall that occurred during the winter of 2010 may have increased the importance of this sediment source during the subsequent summer by, for example, increasing the instability of cut slopes and activating rills and small gullies. These effects were probably also enhanced by the increased use of forest roads in LUC during these periods in association with the harvesting activity in adjacent catchments. These changes are highly consistent with the expected impact of clearcutting in disturbing both the catchment surface and forest roads and thereby increasing their susceptibility to erosion (Gayoso 2015). However, Luce and Black (1999) emphasize the importance of roads as a primary source of the sediment yield from forested catchments, and the 25 % contribution of forest roads to the sediment yield from LUT after harvesting could be seen as low. It was considerably lower than that reported by Grace (2002), who found relative contributions from roads of ca. 90 %. Nevertheless, Rachels et al. (2020) found that the primary source of suspended sediment in both pre- and post-harvesting conditions was streambank sediment. In this study, the fingerprint properties of source material samples collected from stream channels differed from those collected from forest roads and catchment surfaces. However, Bravo-Linares et al. (2018), working in the LUC catchment, used a compound-specific stable isotope technique to discriminate sediment sources. They found that 74-98 % of the sediments in stream channels originated from unpaved roads. This might indicate that forest roads provide sediment to the stream during the entire plantation rotation, which is partially stored along the channels, thus showing a different signature from that associated with roads. From a management perspective, information on the absolute magnitude of the sediment contributions from the three sources to the sediment output is much more relevant than that on the relative contributions of the three sources, since any sediment control or management strategy must aim at reducing the amounts of sediment transported downstream and must focus on the source or sources contributing most sediment. The fact that the total contribution from channel sources for LUT during the study period remained similar to that from LUC suggests that the catchment disturbance did not substantially change the amount of sediment contributed by channels. In this context, the results from the present study indicate that catchment slopes and forest roads represent important additional sources that are activated by the disturbance associated with forest harvesting, and suggest that any management action should focus on these potential sources if the increased output of fine sediment from recently harvested areas in the study region is to be reduced. Further consideration when implementing management practices must, however, take account of the magnitude of sediment loads and source contributions from the study catchments.
Overall, the fine sediment output from LUC, which can be seen as representing 'natural' conditions (estimated to be of the order of 750 kg ha⁻¹ y⁻¹), is relatively low by world standards (Bathurst and Iroumé 2014) and doubtlessly reflects the dense vegetation cover associated with forested catchments in the study area. However, aquatic ecosystems accustomed to relatively low sediment yields can prove highly sensitive to additional inputs of fine sediment to the stream network (Nor Zaiha et al. 2015). In this context, a doubling of the fine sediment input caused by increased sediment contributions from the catchment surface and forest roads, and an even higher increase (e.g., trebling) from these sources immediately following clearcutting, could have a significant impact on aquatic habitats. The adoption of proven best management practices, which can substantially reduce the connectivity among the catchment surface, roads and the stream network, will reduce sediment mobilization and sediment concentrations (Schuller et al. 2010, Gayoso 2015, Cristan et al. 2016). Downstream transmission of increased sediment loads will clearly be influenced by the dilution of such contributions by inputs from undisturbed catchments. In these circumstances, careful planning of the timing and location of forest harvesting activity could play an important role in reducing its downstream impacts.

CONCLUSIONS

This study, undertaken within a paired catchment investigation of forest harvesting impacts in an area of plantation forestry in southern Chile, has demonstrated the potential of sediment source fingerprinting techniques to provide information on the provenance and the relative and absolute contributions of different sediment source types to the sediment output from a catchment. The information on sediment sources obtained is seen as adding a further dimension to traditional catchment experiments, one that can better inform both the understanding of the sediment dynamics of the catchments investigated and the adoption of sediment control strategies to be applied within a catchment during forestry operations. During the entire study period the total specific sediment yield (kg ha⁻¹) from the catchment disturbed by forestry operations approximately doubled that from the control catchment. Most of this difference is accounted for by the increase in sediment output that occurred during the first 4.5 rainy months after harvesting. The effects of forest harvesting in increasing sediment yield were coupled with a major shift in the importance of the three key sediment sources in the catchment. Prior to harvesting, the dominant sediment source in the two catchments was the stream channels, and source contributions from the control catchment remained relatively constant during the remainder of the study period. However, clearcutting operations in the disturbed catchment caused substantial changes in the contributions of the sediment sources. The total contribution of the stream channel showed little change, although the contributions from both the catchment surface and forest roads significantly increased. These findings emphasize that any attempt to reduce the increase in sediment yield associated with forest harvesting operations needs to target both catchment slopes and forest roads. The adoption of best management practices could reduce sediment mobilization and transfer from catchment slopes to streams.
Reduction of sediment mobilization from forest roads is likely to require improved road construction techniques and a reduction of the connectivity between road surfaces and cut slopes and stream channels. Careful attention to the timing of forest harvesting operations, so that the catchment has more time to recover prior to the wet season, clearly also offers scope for reducing sediment mobilization during the early stages of the post-harvest period.
Managing Musculoskeletal and Kidney Aging: A Call for Holistic Insights

Abstract
Aging represents a major concern, with a two-fold increase in individuals >65 years old expected by 2040. Older patients experience multiple declines in condition, with overlapping concerns. Fractures, frailty and falls remain underestimated events in routine practice. They are shared by numerous conditions and diseases, such as osteoporosis, sarcopenia and undernutrition, which mostly progress slowly and silently. In this review, we focused on musculoskeletal decline in older individuals who also have chronic kidney disease (CKD), which promotes fractures and falls. We aimed to highlight the need for a global approach to musculoskeletal and kidney aging. Although strategies limiting falls remain controversial, early diagnosis can limit these declines and allow for specific treatment of bone fragility in addition to non-pharmacological approaches. The emergence of senolytic agents offers new hope for preventing musculoskeletal disorders. This scoping review describes these overlapping silent diseases, provides evidence for their global understanding and management, and sheds light on new therapeutic directions.

Introduction
Dependence represents the threshold of tertiary prevention that we aim to avoid. Many injuries leading to dependence are affected by aging, and the effects of some could be minimized if managed early. Early prevention remains a challenge in poorly symptomatic patients with concomitant silent multiple declines. Therefore, the "F-issues" (fragility fractures, falls and frailty) should be considered red flags in an asymptomatic aging patient. Hip fracture represents a burden, with a yearly 25% mortality rate, especially in the elderly. In Europe, the number of deaths due to fracture events in 2019 was around 250,000, while the number of new fragility fractures was estimated at 4.3 million, comprising 826,708 hip fractures (19%) and 662,544 vertebral fractures (16%).1 The majority of fragility fractures after the age of 50 are related to osteoporosis. Falls are the main trigger for fracture in older individuals and a relevant marker shared with features of musculoskeletal frailty such as muscle loss and undernutrition. Although kidney function is not systematically assessed in musculoskeletal frailty, chronic kidney disease (CKD) exposes patients to an increased risk of fracture. The age-standardized incidence of hip fracture among patients with non-dialysis-requiring CKD is estimated at 1.81/1,000 persons, two times higher than in patients with normal kidney function.2 In addition, CKD also exposes patients to increased risks of falling, frailty and sarcopenia, and these risks increase with age. Here, we addressed the crosstalk between these declining functions in older people and reported the assessment tools and the combined management of musculoskeletal failure in patients with altered renal function. We aimed to highlight the existing gap in this specific field. Although this work was designed as a narrative review, our research methodology is available at the bottom of the text.

The prevalence of frailty increases steeply with age and is highest among those >85-yo. Frailty is associated with an increased risk of falls and fractures.3 The assessment of frailty includes different approaches, such as Fried's model, in which five indicators overlap with the sarcopenia assessment.4 Sarcopenia is a generalized muscle disease involving the loss of muscle mass together with impaired muscle function (strength and power) and physical performance.
The diagnosis of sarcopenia remains a challenge, given the extensive number of tools available for its definition. The expert group of the European Society for Clinical and Economic Aspects of Osteoporosis and Musculoskeletal Diseases advises the use of grip strength to measure muscle strength, and of 4-m gait speed or the Short Physical Performance Battery to measure physical performance in daily practice.5 In addition, dual energy X-ray absorptiometry (DEXA) provides the distribution of fat mass and appendicular lean mass. However, recent studies demonstrated that lean mass is a poor marker for fracture prediction after adjustment for femoral neck bone mineral density (BMD).6 Thus, in the more recent definitions of sarcopenia, lean mass has been combined with measures of physical function/performance/strength. The European Working Group on Sarcopenia in Older People (EWGSOP) definition (with gait speed, DEXA, grip strength and chair stand) appeared to be the most predictive of fracture of all the definitions, independent of BMD, falls and the Fracture Risk Assessment Tool (FRAX®), in the MrOS cohort study.7 The prevalence of sarcopenia increases with age; sarcopenia compromises health in 1% to 29% of older people living in the community (10% in acute medicine units) and in 14% to 33% of those in nursing homes.8 The decline in muscular strength occurs earlier and is more severe than the loss of muscular mass (3% per decade after age 70 vs 1-2% after 50).9 Malnutrition is also common in older people and is a major risk factor for frailty and sarcopenia. The prevalence of protein-energy malnutrition depends on many factors and differs between acute-care and community patients, and the reported figures depend on the definition and the tools used to identify malnutrition. Prevalence rates of high malnutrition risk across all countries and screening tools ranged from 8.5% in the community setting to 28.0% for people in hospitals, in a recent analysis including 22 malnutrition screening tools validated in adults aged 65 years or older.10-13 This observation highlights the need for a parallel assessment of both sarcopenia and undernutrition. Although anthropometric indices are used to assess nutritional status in older adults, they are not relevant for muscle mass assessment. Body mass index (BMI) measurement remains the simplest approach, with the 21 kg/m² threshold considered for malnutrition in older people. The extent of metabolic undernutrition is assessed with the serum albumin level, according to the latest recommendations. However, calf circumference remains an alternative for muscle mass evaluation when no other diagnostic methods are available.14 As far as the treatment of undernutrition is concerned, avoiding restrictive diets in older people is recommended first, as is prescribing an oral nutritional supplement when needed. Together, losses in muscle strength, function and performance are associated with severe and prevalent adverse health conditions in older people. Frailty, undernutrition and sarcopenia are reliable markers associated with several outcomes such as falls, fractures and death.15

Ageing Bone Issues
Most fractures occur after a fall, and the risk increases with age. Bone strength includes both bone density (BMD) and bone quality (microarchitecture and turnover). In post-menopausal women, the association between BMD and fracture risk readily allows fracture risk to be predicted. Fractures, especially hip fractures, are a public health challenge because of their cost and mortality.
Ageing Bone Issues
Most fractures occur after a fall, and the risk increases with age. Bone strength includes both bone density (BMD) and bone quality (microarchitecture and turnover). In post-menopausal women, the association between BMD and fracture risk makes fracture risk straightforward to predict. Fractures, especially hip fractures, are a public health challenge because of their cost and mortality. Most patients who experience a hip fracture are older than 80 years. 16 The impact of hip fracture on quality of life and life expectancy in older people is similar to that of other chronic diseases. 17 Advances in Fracture Liaison Services and rehabilitation therapies can improve the consequences of fracture and reduce re-fracture risk. 18 However, about 75% to 80% of hip-fracture patients, who are mainly older, never receive any treatment for their bone fragility within the year after the fracture. In the oldest old, the risk of death competes with the risk of fracture. 19 Thus, life expectancy represents a key factor for decision-making in this population, especially in patients with major disabilities. Although the mean life expectancy at birth in Europe is 82.6 years for women and 76.7 years for men, residual life expectancy at ages 85 and 95 is about 6.97 and 3.19 years, respectively, for both genders. Thus, the question of preventing osteoporotic fractures remains crucial. When investigating bone fragility in older people, the usual tools validated in younger adults do not reach the same level of validation. The major factors remain a history of fracture or a history of falls. Although the FRAX® now adjusts its result in very old patients for the competing hazard of death, it remains a 10-year prediction of fracture risk. Laboratory assays are performed to rule out malignant bone disease and metabolic bone disorders and to monitor contraindications to therapy. Bone biomarkers help to determine the level of bone remodeling and to monitor response and adherence to therapy (always after a reasonable delay following a fracture). The International Osteoporosis Foundation (IOF) recommends only serum P1NP and CTX assays, because they are modified by therapy. 20 Measurement of vitamin D is recommended because the vitamin D level must be in the normal range before initiating therapy. Supplementation also had a significant effect on femoral neck BMD, 21 with no major effect on incident fracture prevention after age 70. 22 The latter result was confirmed in a meta-analysis that analyzed vitamin D without calcium supplementation. 23 Vitamin D supplementation could have numerous positive effects, in particular on muscle and falls, for which the effect appears dose-dependent, 24 although it was not found in the same Bolland work. BMD measurement in this population also has several limitations. BMD does not accurately reflect the situation of patients with a prevalent major fracture; therefore, reference curves and diagnostic thresholds are questionable. Low reproducibility of positioning and the frequent presence of bilateral hip prostheses, lumbar spine osteoarthritis, unrecognized vertebral fracture, spine deformity and aortic calcification 25 are common issues limiting BMD interpretation and reproducibility (Figure 1). Lateral imaging of the spine reinforces the interpretation of BMD measurement, providing information on silent vertebral fracture (Figure 1A), the level of aortic vascular calcification and the presence of osteoarthritis (Figure 1B and C).
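Because the diagnostic thresholds discussed here are expressed as T-scores derived from BMD, a brief sketch of that derivation may be useful. The WHO cut-offs (T ≤ -2.5 for osteoporosis, -2.5 < T < -1.0 for osteopenia) are standard; the reference mean and SD in the example are placeholders, not real normative data, and the function names are ours.

```python
# Minimal sketch of how DEXA T- and Z-scores are derived from BMD.
# WHO definition (T <= -2.5 for osteoporosis) is standard; the reference
# mean/SD values below are placeholders, not real normative data.
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """Standard deviations from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """Standard deviations from the age-matched reference mean."""
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

t = t_score(bmd=0.62, young_adult_mean=0.85, young_adult_sd=0.10)  # placeholder values
print(round(t, 2), who_category(t))
```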
Osteoporosis drugs have proved their efficacy in patients with >1 year of life expectancy. 26 At age >70 and with vascular risk, selective estrogen receptor modulators are not a relevant option in this population. Therefore, the discussion of osteoporosis drugs in older people involves bisphosphonates, teriparatide and denosumab. Recently, the International Conference on Frailty and Sarcopenia Research Task Force developed targets for research on osteoporosis in frail older adults. 27 According to these recommendations, optimal treatment of osteoporosis in older people may require combined or sequential therapies. Romosozumab followed by denosumab reduced the risk of fracture in postmenopausal women in the recent FRAME study. This could be a helpful sequence, 28 although it is currently not approved in women above 75 years with a history of ischemic heart disease.

(Figure 1 caption: In the associated dual-energy X-ray absorptiometry scan, this fracture was not suspected, so BMD was overestimated at L1 compared with L2. Abbreviations: BMD, bone mineral density; LS, lumbar spine; VFA, vertebral fracture assessment.)

The tolerance of any osteoporosis therapy is good in older individuals, with mild and reversible adverse effects. In older patients, oral drugs should be evaluated in light of the lower bioavailability of oral treatment, slower metabolic rate, and concomitant deficiencies and treatments. The residual effect of parenteral bisphosphonates favors their use when life expectancy is reduced and the risk of poor adherence is increased. Reevaluation is relevant within 3 years if no new fracture or risk factor has occurred. Overall, given the diagnostic tools and treatments available, osteoporosis is one of the best-characterized diseases for achieving musculoskeletal prevention in older people, in whom it is prevalent.

Kidney Function and Assessment in Older Individuals
Kidney function declines with age. The loss of function is the result of kidney aging and lifelong pathologic injuries. A progressive decline in glomerular filtration rate (GFR) of about 1 mL/min/year represents normal aging of kidney function above age 40. Although the incidence of renal impairment is still increasing, 29 most adults with CKD never reach end-stage renal disease. 30,31 After the age of 80, the incidence of CKD is 10-fold that at ages 18 to 50. In addition, the longitudinal SCOPE study assessed the profile of comorbid conditions in 2252 subjects aged above 75 years across Europe. It showed that CKD was the most frequent condition and was rarely observed without any other co-occurring disease. Besides cardiovascular diseases, which were predominant, CKD was highly associated with osteoporosis and hip fracture. 32 Physical performance scores were reduced in subjects with severe CKD, suggesting multi-organ decline. Numerous frequent comorbidities, such as cognitive impairment (probably through a vascular component), are related to renal impairment in older people. 33 In a diabetes mellitus cohort, patients aged >80 years with dementia had increased mortality in case of renal impairment. 34 Nonetheless, in patients with CKD stage 3, the 10-year cumulative incidence of dialysis and transplantation (0.04) contrasts with the mortality incidence (0.51, mainly due to cardiovascular diseases). 35 This highlights that the phenotype of older CKD patients is not similar to that of younger patients, and the observation could also be related to a difference in the assessment of renal function in older versus younger people. However, older people with CKD stage 3A without proteinuria have no additional mortality risk as compared with individuals of the same age class with an estimated GFR >60 mL/min/1.73 m². 36
This underlines the probable overestimation of CKD stage 3 in older individuals, among whom few CKD stage 3A patients met the 2-parameter assessment recommended by the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines. Other limitations in the assessment of CKD in older individuals have been identified. Measurement of GFR using isotopic or iohexol clearance is optimal but is limited to dedicated physiology departments. In a routine setting, the serum creatinine level remains widely used as a surrogate marker for estimating renal function. To date, no strong evidence supports preferring other biomarkers (e.g., cystatin C) in old patients. However, in acute care, creatinine in older patients does not correctly reflect renal function. Furthermore, creatinine levels are determined by muscle production, which is a major limitation in this population. Moreover, technical conditions also limit the assessment of the 24-hr urinary creatinine level. The most widely used tools for estimating GFR remain the creatinine-based equations (the 4-variable Modification of Diet in Renal Disease [MDRD], CKD Epidemiology Collaboration [CKD-EPI] and Cockcroft-Gault [CG] equations). Patients above age 90 are not correctly represented in any of these reference populations, which raises concerns about validity. Of all methods, the MDRD equations produce the results that agree most closely with the gold standard. 37
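Since the eGFR thresholds used throughout this discussion (stage cut-offs, the 30 mL/min bisphosphonate limit below) derive from these equations, a brief sketch of two of them may help. The coefficients follow the published CKD-EPI 2009 (here without the race term) and Cockcroft-Gault formulas; the variable names and example values are ours, and this is illustrative rather than a clinical tool.

```python
# Sketch of two creatinine-based GFR estimates discussed above.
# Coefficients follow the published CKD-EPI 2009 and Cockcroft-Gault
# formulas; illustrative only, not a clinical tool.

def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool) -> float:
    """eGFR in mL/min/1.73 m^2 (CKD-EPI 2009, without the race term)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * (1.018 if female else 1.0)

def cockcroft_gault(scr_mg_dl: float, age: float, weight_kg: float, female: bool) -> float:
    """Creatinine clearance in mL/min (Cockcroft-Gault)."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * (0.85 if female else 1.0)

# An 85-year-old woman, 55 kg, creatinine 1.1 mg/dL (example values):
print(round(ckd_epi_2009(1.1, 85, True), 1))         # ~46 -> CKD stage 3
print(round(cockcroft_gault(1.1, 85, 55, True), 1))  # ~32
```

Note how the two estimates can diverge in the same patient, which is one reason the choice of equation matters in very old individuals.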
Besides kidney failure, older patients are exposed to iatrogenic issues, which we do not address in detail here. In the musculoskeletal approach, attention should be paid both to drugs with a negative impact on bone (long-term corticosteroids, heparin, vitamin K antagonists 38) and to drugs with a positive impact (bisphosphonates, teriparatide, thiazide diuretics, statins, etc.). It is also important to weigh the individual risk of drugs that predispose to falls, such as antihypertensive therapy, especially renin-angiotensin antagonists, which protect kidney function.

Need for a Novel Comprehensive Approach in Concomitant Declines of Musculoskeletal and Kidney Function
Frailty and Sarcopenia in CKD
Musculoskeletal frailty involves several interacting systems that converge toward fracture occurrence (Figure 2), including kidney failure, which increases with aging. The prevalence of frailty in CKD patients is estimated at about 14%, and frailty increases their mortality risk about 2.5-fold. Sarcopenia is most prevalent in late CKD stages, whatever the definition, 40 and is accompanied by altered handgrip, 41 as confirmed by results from the NHANES study. 42 Although sarcopenia progressively increases with renal impairment in CKD, only limited evidence links muscular function to fracture risk in CKD, 43 and numerous hypotheses, including changes in myostatin metabolism, are being studied for their potential role. One of the aims of care for sarcopenia remains limiting falls. The increased risk of falls is well documented in CKD, 44,45 in particular in end-stage renal disease 46,47 and in patients >65 years old. Above age 65, falls are more frequent and often generate complications, especially when GFR is <45 mL/min, with an incidence of falls of about 38.3 vs 21.7/1000 person-years with GFR >60. The risk of falls is also increased when GFR <60 mL/min is associated with osteoporosis, especially corticosteroid-induced osteoporosis. 48 The specific causes of falls in CKD have not been elucidated; some studies have suggested diabetes or uremia-related neuropathy as candidates. 43 In older individuals, in whom falls are multifactorial, the risk of falls is increased two-fold by polymedication, BMI <18.5 kg/m² and low GFR; dementia also increases this risk (by a factor of 1.21). 49 Specific quantitative gait abnormalities have been identified in CKD patients (slower gait speed, shorter stride length, reduced time in the swing phase and increased time in the double-support stance phase) that are associated with fall risk after adjustment for age and sex. 50 Although no specific program is available for these patients, the objective of limiting falls should focus on detecting frailty, improving modifiable factors (visual acuity, multiple medications, and home environment), and emphasizing strengthening, gait and balance in exercise programs. Nutrition management could appear a realistic way to improve sarcopenia, but no drug or specific biomarker is yet recommended. However, IGF-1, whose level is low in undernutrition, may have a positive impact on both bone and muscle, whereas serum myostatin, whose level increases along with CKD, inhibits muscle mass. 51

(Figure 2 caption: Interactions between musculoskeletal settings in older patients with chronic kidney disease. Plain frames represent diseases; the other frames are clinically assessed parameters. The figure depicts connections and interactions between phenotypes and diseases, whatever the severity of each condition, in order to emphasize that fracture is the key outcome and that falls and frailty must therefore be prevented through integrative management of all these diseases.)

In CKD patients, a high prevalence of fracture is reported, along with a high fracture-related mortality rate. 52 Osteoporosis in CKD patients is related to diabetes mellitus and hypertension, whose incidence increases with age. Indeed, a recent meta-analysis reported that about 24% of CKD cases could be related to diabetes mellitus, a condition that increases fracture risk independent of kidney function. 53 Different entities have been identified among the bone mineral and metabolism disorders related to CKD (CKD-mineral and bone disorders [CKD-MBD]) to better understand the contribution of renal impairment. The KDIGO work group defined CKD-MBD as one or a combination of three manifestations: disorders of bone and mineral metabolism, extra-skeletal calcification, and/or renal osteodystrophy, defined as altered bone morphology associated with CKD (turnover, mineralization, volume) and assessed by histology. The risk of hip fracture is increased 2- to 14-fold in CKD patients compared with the general population. This risk appears at the early stages versus non-CKD individuals and increases with CKD severity. 54 Hip fracture incidence increases in parallel with CKD progression. 55 Fracture risk is increased among older individuals with CKD, 48 which illustrates that age is a major risk factor for fracture, 17 although renal impairment and osteoporosis have independent or additive impacts on fracture risk. 56 Conversely, older people with osteoporosis frequently have age-related renal impairment. 57 However, these progressions differ by sex. With menopause, bone mass declines faster in women between ages 50 and 70 than in men, 58 whereas the decline in renal function is generally fastest in men. Nonetheless, CKD seems to remain more common among women, with an estimated prevalence of 11-13%.
Regardless, big-data analyses do not show a homogeneous male-to-female distribution of CKD across the world, varying among countries with different economic status. 59 GFR decline accelerates after age 70 in women with osteoporosis, as seen in a post-hoc analysis of the HORIZON-PFT study. 60 The overlap between CKD and osteoporosis can be illustrated with the NHANES-III cohort, in which 60% of women with osteoporosis also had CKD stage 3 and 23% stage 4. In patients with CKD stage 1-2 with osteoporosis and/or a high risk of fracture, or in those with CKD stage 3 with a PTH concentration in the normal range, the KDIGO recommends managing osteoporosis as for the general population. However, CKD does not systematically imply associated CKD-MBD, whereas 85% of women with osteoporosis show altered renal function. The FRAX® is used for assessment in CKD, especially when enhanced with BMD and trabecular bone score results. 61 Nevertheless, the FRAX® includes neither key factors for CKD-MBD or GFR evaluation nor the number of fractures and falls. 62 Low BMD correctly predicts the risk of hip fracture in older people with CKD. 63 At CKD stages 1 to 3, BMD can predict fracture risk, which confirms that people at early stages of CKD with no evidence of CKD-MBD can be considered as having osteoporosis. Conversely, CKD stage 3-4 was discovered in 84% of older women with osteoporosis. However, the value of BMD at late stages for estimating fracture risk remains controversial. Thus, the 2009 KDIGO recommendations were updated in 2017 to favor assessing BMD in patients with CKD stage 3a-5 if the result would affect treatment decisions, and also to identify patients at high risk of fracture. With longitudinal follow-up of patients with CKD stage 3-5 with measured GFR and serial BMD, slight bone loss occurred only at the radius, a cortical site. 64 However, because the rates of bone loss remain unknown, the appropriate frequency of serial BMD assessments remains to be determined. Therefore, the European Dialysis and Transplant Association and the IOF integrate the evaluation of fracture risk as a target in the management of CKD-MBD. 65 In older patients followed for 11 years, a 1-SD change at the femoral neck increased the risk of fracture two-fold in CKD compared with no CKD. In end-stage renal disease, BMD is lower, especially at cortical bone sites. 66

Investigating Bone Biochemical Parameters in CKD
The last KDIGO recommendations highlighted that other priorities compete with CKD-MBD management in people aged 75 to 78 with a 2.5-year follow-up, thus justifying the need to screen older people with a global assessment. 67 CKD patients have numerous comorbidities, and there is a trend not to test patients beyond age 85. The most frequent conditions in CKD-MBD are secondary hyperparathyroidism, hyperphosphatemia, hypocalcemia and a low calcitriol level.
- In CKD patients with hyperparathyroidism, the serum PTH level increases in parallel with CKD progression. 68 Other causes, such as vitamin D-deficiency hyperparathyroidism, should be explored to avoid wrongly concluding renal secondary hyperparathyroidism. KDIGO guidelines also recommend managing hypocalcemia, hyperphosphatemia and vitamin D deficiency in CKD stages 3 to 5 with hyperparathyroidism. 69 BMD should be interpreted together with these bone parameters: low BMD in CKD seems to predict fracture better if associated with a normal-range PTH level. 70
- Although serum phosphate levels are expected to be normal in osteoporosis, they could remain useful for assessing bone fragility in older CKD individuals. According to the MrOS and Rotterdam cohorts, increased phosphate levels could be related to fracture risk along with CKD, 71 even after adjustment for PTH and FGF-23.
- Vitamin D insufficiency must be managed, with a target 25OH-vitamin D level above 75 nmol/L whatever the aim (osteoporosis, CKD, or falls). In CKD stages 3-4, normal gait speed is associated with the highest levels of 25(OH)-vitamin D. 72 When associated with hypocalcemia, the calcitriol level can also be assessed to adapt therapy. Some evidence suggests that, by inhibiting the Wnt signaling pathway, vitamin D could limit vascular calcification, one of the components of CKD-MBD. 73 Because of its effect on muscle via the vitamin D receptor, as well as via mitochondrial or non-genomic pathways, vitamin D remains a good candidate for intervention in sarcopenia. 74 Vitamin D is also known to lower the myostatin level 75 and to increase levels of insulin-like growth factor 1, sclerostin, osteocalcin and FGF-23, and therefore muscle mass. 76
- In addition, serum FGF-23 is a hormone produced by osteocytes that stimulates renal phosphate excretion. The FGF-23 level increases from the early stages of CKD and plays a role in the decrease of the calcitriol level. FGF-23 is not associated with fracture or with lower BMD after adjustment for GFR. 77 Klotho allows contact between FGF-23 and its receptor; its level decreases along with CKD, leading to a lower number of osteoclasts with higher activity. 78
- Finally, urinary calcium levels can also be helpful, because fractures seem to occur more frequently at CKD stages 3 and 4 in patients with a history of kidney stones (5.56/100 patient-years). 79
Several other bone biomarkers are available, but their renal elimination limits their interpretation. Among bone remodeling biomarkers, only bone-specific alkaline phosphatase (bs-ALP), TRAP5b and the trimeric form of P1NP are not affected by kidney elimination. Bs-ALP is a bone formation biomarker associated with both low 80 and high levels of bone turnover. TRAP5b and P1NP are poorly reported in the aging CKD population.

Non-Pharmacological Approaches
Here too, several types of intervention for fall prevention have been developed, with poor results. 81 However, fall prevention remains a key element of care for patients at high risk of fracture. Muscle stretching and strengthening, especially of the pelvic girdle and calf (triceps surae) muscles, could be advised. Physical activities such as Tai Chi may have a positive effect on osteoporosis prevention. 82 The specific gait abnormalities of old CKD patients could be targeted with dedicated interventions, although no program is yet recommended. Other risk factors for falls must be corrected according to the evaluation; malnutrition is one of them. The recommended protein intake (1-2 g/kg/day) competes with nephrology guidelines, which state that intake should be maintained at 0.8 to 1 g/kg/day. In osteoporotic patients too, nutritional recommendations do not systematically match those for CKD-MBD, for which avoiding a high-phosphate diet is recommended. In older people, the target 25(OH)-vitamin D level is >75 nmol/L. In the DOPPS cohort of CKD patients, those taking vitamin D were healthier than those who did not. 83
Supplementation also affects other aspects (osteoporosis, sarcopenia, falls 84), especially in CKD patients, for whom supplementation could improve global mobility and muscular function. 85 In older people, improving the assessment of calcium intake is useful, although some evidence suggests an increased vascular risk in patients without any calcium insufficiency. 86

Pharmacological Approach
In older people, drug prescription is challenged by the number of treatments taken for comorbidities, by cognitive impairment limiting adherence, and by the risks of the drugs themselves. CKD stages 1-3a should be treated like osteoporosis. 87 In CKD stages 3b-5, treatment priorities must be discussed with multidisciplinary, global insight. Management of confirmed biochemical abnormalities (hyperphosphatemia, hyperparathyroidism and vitamin D deficiency) should be considered before specific fracture-prevention therapeutics. Table 1 summarizes the data on CKD and age provided by pivotal studies. Although older individuals are the target population, Table 1 also illustrates that a global approach combining renal function and geriatric musculoskeletal outcomes remains poorly reported. Here, we discuss the main limitations of these drugs in CKD patients. According to post-hoc analyses of pivotal studies, CKD-MBD can be treated like osteoporosis up to stage 3, including when biochemical parameters remain in the normal range; however, the use of bisphosphonates, as with any antiresorptive drug therapy in CKD, can paradoxically increase fracture risk by increasing failure of mineralization, though not bone volume. This limitation is related to the prevalence of adynamic bone disease (ABD) in CKD. Thus, if bone turnover biomarker levels remain low, as in ABD, the use of bisphosphonates can worsen the condition. 88 Long-term use of bisphosphonates in CKD can also lead to ABD. 89 The kidney excretes bisphosphonates within hours of ingestion by passive glomerular filtration and a proximal tubular route, with about 27% to 62% of the drug deposited in bone. A threshold of 30 mL/min is considered for drug prescription; this threshold resulted from pivotal studies and animal nephrotoxicity studies. 90
- The more the GFR decreases, the more the drug accumulates, with verified nephrotoxicity after rapid intravenous infusion of zoledronate or pamidronate, 91,92 which justifies spacing between administrations. 93 Nonetheless, oral bisphosphonates such as risedronate 5 mg remain safe for the kidney and efficient for BMD 94 and have been tested specifically in older individuals. 95 Alendronate also increases BMD even at CKD stages 3 and 4, and also increases the PTH level at 18 months. 96 Alendronate has also been tested on vascular calcifications.
- Teriparatide is a PTH analog. At present, data are lacking for CKD patients with GFR <30 mL/min other than a post-hoc analysis of the 2014 pivotal study. 97 Teriparatide could be useful in ABD or after parathyroidectomy.
- Denosumab is a monoclonal anti-RANK-ligand antibody that was specifically tested in patients with a GFR of 15 to 30 mL/min. Denosumab remains an antiresorptive therapy with the same bone complications as bisphosphonates. In addition, because of the lack of a residual effect when stopped, denosumab must be followed by another antiresorptive therapy to avoid a cascade of vertebral fractures, which limits its prescription to the same range as the other drugs.
A post-hoc analysis of the FREEDOM study in dialysis patients showed an increase in BMD at 6 months, but also in PTH level and in the prevalence of hypocalcemia. 98 For younger adults with CKD stage 4 at high risk of fracture, Hampson suggested considering denosumab or off-label prescription of a bisphosphonate. 99 Bone turnover must be explored (bone biopsy, biomarkers) to validate the absence of ABD, in which antiresorptive drugs have no effect. If this option is retained, the recommendation for patients with low BMD and reduced life expectancy is to maintain the therapy to the end, which also makes it suitable for older CKD patients.

Future Directions
Although aging is a common notion, its mechanisms remain incompletely understood. In cell biology, aging can be characterized as programmed senescence or as an accumulation of lifelong injuries. Cellular senescence is a cell fate involving extensive changes in gene expression and proliferation arrest. The pathways involved include genetic instability, telomere attrition, hormonal influences and immune-system decline. Senescence also involves cell-cycle deregulation (via p21 CDKN1A or p16 INK4A), an increase in lysosomal β-galactosidase activity, and resistance to apoptosis. Many other pathways have been described, such as oxidative stress, non-enzymatic protein glycation and the accumulation of advanced glycation end-products altering protein function. Moreover, a dynamic process is involved, whereby the senescence phenotype spreads to the surrounding tissue through the specific secretions of senescent cells: these cells secrete a range of pro-inflammatory cytokines, chemokines and proteases (interleukin 6, CXCL-12, matrix metalloproteinases, etc.), termed the senescence-associated secretory phenotype (SASP), which contributes to local and systemic dysfunction with aging. To date, three types of senotherapies have been described: senolytics, senomorphics and senosuppressors (Figure 3). They inhibit apoptosis resistance and the activation of survival pathways, and they decrease autophagy, the SASP and metabolic aberrations. The use of senotherapeutic agents prevented tissue degeneration and improved longevity in mouse models. 100 Senotherapeutic agents also provide new perspectives for musculoskeletal aging. Different approaches, transgenic or pharmacological, are efficient for targeting cellular senescence in age-related bone loss in mice. The use of Janus kinase inhibitors against the SASP also provides efficient results, with lower bone resorption and maintained or higher bone formation in trabecular and cortical bone, respectively. 101 In another SASP model of senescence, in which transplantation of senescent cells impaired functional parameters used in clinical practice (maximal walking speed, muscle strength, physical endurance, body weight), senolytics such as dasatinib + quercetin reduced the senescent-cell burden and decreased pro-inflammatory cytokine secretion, even in human adipose tissue explants. Senolytics prevent and alleviate the physical dysfunction induced by senescent-cell transplantation; clearing senescent cells alleviates physical dysfunction and increases the remaining lifespan of older mice. 102 These promising approaches could be a relevant perspective for preventing frailty and dependence in older people.

Conclusions
In the multifaceted endeavor that understanding aging represents, this review emphasizes that common clinical concerns (CKD, falls and fractures, etc.) need, more than ever, a multidisciplinary approach in order to capture the big picture.
We should keep the approach as simple as possible, and we first recommend assessing residual life expectancy as a major factor for decision-making. Osteoporosis remains a good model, with drugs that are efficient even in older individuals with CKD. It is important to keep in mind the limits of all the assays and measures that can be performed, in order to limit misinterpretation and excessive inflation of the risk of error.

Concise Methodology
For this review, we first performed an electronic search of MEDLINE (PubMed) from January 1980 to February 2020 for original works and expert reports. An iterative approach included 2 equations with the following MeSH terms: "Fracture" + "Chronic kidney disease"; "Bone" + "Elderly" + "Chronic kidney disease". The first reviewer (PEC) screened the titles and abstracts according to these keyword criteria. The selection was then transferred to the second reviewer (MCS), who refined and confirmed it. The two reviewers then performed a more careful reading of the manuscripts and selected the papers most relevant to their aim. We also added guidelines of international and national societies, as well as relevant review articles, in order to illustrate positions where scientific data are not available. A narrative synthesis for each organ failure was conducted, aiming to describe the evidence and the limitations of diagnosis for each tissue insufficiency. We then analyzed the literature in the light of concomitant diseases and identified needs for further research.
High-throughput RNAi screening identifies ID1 as a synthetic sick/lethal gene interacting with the common TP53 mutation R175H

The TP53 mutation R175H is one of the most common mutations in human cancer. Finding genes whose suppression selectively impairs R175H-expressing cancer cells is a highly attractive strategy for cancer therapy. The aim of this study was to identify synthetic sick/lethal genes interacting with R175H. Using a lentiviral bar-coded comprehensive shRNA library and tetracycline-inducible R175H expression in the SF126 human glioblastoma cell line (SF126-tet-R175H), we conducted high-throughput screening to identify candidate genes that induce synthetic sickness/lethality in R175H-expressing cells. We identified 906 candidate genes whose suppression may accelerate cell growth inhibition in the presence of R175H. Inhibitor of differentiation 1 (ID1) was one of these candidate genes, and its suppression by siRNA accelerated growth inhibition in cell lines both transiently and endogenously expressing R175H, but not in TP53-null cell lines or in lines carrying other common p53 mutants (such as R273H). Flow cytometry analysis showed that ID1 suppression resulted in G1 arrest and that the arrest was accelerated by the expression of R175H. ID1 is a synthetic sick/lethal gene that interacts with R175H and can be considered a novel molecular target for cancer therapy in R175H-expressing cells.

Introduction
Synthetic sickness/lethality interaction is a highly attractive strategy for cancer therapy (1)(2)(3)(4). For example, in cancer cells with a KRAS gene mutation, inhibition of polo-like kinase 1 (PLK1) resulted in cell death (5). Similarly, cancer cells with the KRAS mutation were sensitive to suppression of the serine/threonine kinase STK33 (6). Moreover, dysfunction of DNA double-strand break repair caused by mutations in the BRCA1 or BRCA2 gene sensitized cells to inhibition of poly-ADP ribose polymerase (PARP) enzymatic activity, resulting in chromosomal instability, cell cycle arrest, and subsequent apoptosis (7). This concept was proven in a phase II trial in which olaparib, a PARP inhibitor, provided objective antitumor activity in patients with a BRCA1 or BRCA2 mutation (8). TP53 is the most commonly mutated tumor suppressor gene across several different types of human cancer (9). TP53 encodes the 393-amino-acid p53 protein, which binds to specific DNA sequences in the regulatory regions of downstream genes (10). A variety of cellular stressors, including ultraviolet rays, ionizing radiation, chemotherapeutic drugs, and hypoxia, stabilize the p53 protein, and post-translational modifications activate it; this results in various cellular responses including cell cycle arrest, DNA repair and apoptosis (11,12). According to the TP53 mutation databases, ~75% of the mutations are missense mutations (13,14); to date, >1,200 distinct missense mutations have been reported. Among them, those at residues Arg 175 (R175), Gly 245 (G245), Arg 248 (R248), Arg 249 (R249), Arg 273 (R273) and Arg 282 (R282) have been reported most frequently (15). The most common p53 mutant proteins caused by TP53 hot-spot mutations are R175H, G245S, R248W, R248Q, R249S and R273H; these mutations cause a loss of the transactivation function toward downstream genes (16). However, some p53 mutants gain new functions that are not observed in wild-type p53 (so-called gain-of-function mutations).
For example, mice with the knock-in mutants p53 R172H and R270H, which correspond to the human p53 R175H and R273H mutations, develop a variety of novel tumors, such as lung adenocarcinoma, renal cancer, hepatocellular carcinoma, and intestinal carcinoma, that are not generally observed in TP53-null mice (17). In addition, embryonic fibroblasts derived from p53 R172H knock-in mice gained activities of cell proliferation, DNA synthesis and retroviral transformation (18). Moreover, human p53 R273H or R248W interacted with Mre11 and suppressed the binding of the Mre11-Rad50-NBS1 (MRN) complex to DNA double-strand breaks, resulting in chromosomal translocation and abrogation of the G2/M checkpoint (19). According to these results, it has been hypothesized that some p53 mutant proteins, like the activated K-ras protein, are oncogenic and contribute to carcinogenesis and cancer progression. In the present study, we conducted high-throughput RNAi screening with a lentiviral gene suppression system to identify synthetic sick/lethal genes in the presence of p53 R175H, which accounts for ~6% of the missense mutations identified in human cancer (20). As a result, we identified inhibitor of differentiation 1 (ID1) as the first gene shown to cause synthetic sickness when paired with the p53 R175H mutant protein.

RNAi screening. One million SF126-tet-R175H and SF126-tet-TON cells were seeded in 10-cm culture plates for 24 h. The medium was removed from the plates, and the Decode RNAi Viral Screening Library (Thermo Scientific Open Biosystems, Huntsville, AL, USA) was added at a multiplicity of infection (MOI) of 0.3 in serum-free medium. After 6 h, the medium was replaced with virus-free medium. After 48 h, puromycin was added at a final concentration of 2 mg/ml to select the infected cells. Finally, 7x10^6 lentivirus-infected SF126-tet-R175H and SF126-tet-TON cells were obtained. These cells theoretically contained 70,000 distinct shRNAs, and each cell should express a single shRNA product. The cells were divided into 2 groups, and each group was cultured with or without doxycycline for 10 days. Genomic DNA was extracted from each group using the Blood & Cell Culture DNA Mini kit (Qiagen), according to the manufacturer's recommendation. Barcode sequences corresponding to specific shRNAs were amplified with the following primers located outside the barcode sequence: forward, 5'-caaggggctactttaggagcaattatcttg-3' and reverse, 5'-ggttgattgttccagacgcgt-3'. Amplified PCR products were separated on a 1.5% TAE agarose gel and extracted using the Wizard SV Gel and PCR Clean-Up system (Promega Corporation, Madison, WI, USA). Each purified PCR product was labeled with Cy5 (doxycycline-on group) or Cy3 (doxycycline-off group) using Agilent's Genomic DNA Labeling kit (Agilent Technologies, Inc., Santa Clara, CA, USA) and hybridized on the barcode microarray in a hybridization oven at 65˚C for 17 h. After hybridization, the arrays were scanned with the Agilent DNA microarray scanner to quantify log2 Cy5/Cy3. The log2 Cy5/Cy3 value indicates the increase or decrease of cells in the primary screening; a negative value indicates that the count of R175H-expressing cells (labeled with Cy5) is smaller than that of cells not expressing R175H (labeled with Cy3). We conducted 2 independent experiments, and the 3 independent log2 Cy5/Cy3 values obtained were analyzed by Student's t-test.
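As an illustration of this dropout analysis, the sketch below tests per-shRNA log2 Cy5/Cy3 replicates against zero with a one-sample Student's t-test; a significantly negative mean flags a candidate synthetic sick/lethal shRNA. The toy intensities and all names are ours; the authors performed the actual analysis in GeneSpring.

```python
# Illustrative sketch of the barcode dropout analysis described above:
# log2(Cy5/Cy3) < 0 means an shRNA depleted in R175H-expressing
# (doxycycline-on, Cy5) cells. Toy data only; the study used GeneSpring
# on real microarray intensities.
import numpy as np
from scipy import stats

# rows = shRNAs, columns = 3 replicate log2(Cy5/Cy3) values (toy numbers)
log_ratios = np.array([
    [-1.2, -0.9, -1.4],   # depleted with R175H -> candidate hit
    [ 0.1, -0.2,  0.3],   # unchanged
    [ 0.8,  1.1,  0.6],   # enriched
])

for i, ratios in enumerate(log_ratios):
    t, p = stats.ttest_1samp(ratios, popmean=0.0)
    is_hit = p < 0.05 and ratios.mean() < 0
    print(f"shRNA {i}: mean log2 ratio={ratios.mean():+.2f}, p={p:.3f}, "
          f"candidate={is_hit}")
```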
Candidate genes were identified after analyzing the raw data for each shRNA using GeneSpring software (Agilent Technologies). Microarray data were deposited in GEO (accession no. 33362).

Knockdown analysis of candidate genes using siRNA. siRNAs against the 50 candidate genes identified in the primary screening were synthesized by Hokkaido System Science (Hokkaido, Japan). The sequences of the synthesized siRNAs for the candidate genes are listed in Table I. ID1-2 siRNA was synthesized as described previously (23). TP53 siRNA was purchased from Applied Biosystems (Foster City, CA, USA), and TP53-2 siRNA was purchased from Cell Signaling Technology, Inc. (Boston, MA, USA). A total of 3.5-5.0x10^3 cells/well were seeded and incubated in 96-well plates for 24 h. Each candidate siRNA and a negative control siRNA were added to the cells at a final concentration of 30 nM or 100 nM using DharmaFECT 1 (Dharmacon, Lafayette, CO, USA). Cell proliferation assays were performed using Cell Counting Kit-8 (Dojin Laboratories, Kumamoto, Japan), as previously described (21).

Cell cycle analysis by FACS. A total of 1.5x10^4 cells/plate were seeded and incubated in 6-cm culture plates for 24 h. The cells were further incubated in the presence of drugs for 48 h. These cells were collected, and FACS analysis was performed as previously described (24).

Results
Screening of synthetic lethal genes that interact with the p53 R175H mutant. A flow chart of the high-throughput screening of synthetic lethal genes interacting with p53 R175H is shown in Fig. 1. By comparative analysis, 1,362 candidate genes were identified for synthetic lethality with p53 R175H expression in the SF126-tet-R175H cell line (p<0.05 by t-test, n=3). Among these, 43 were excluded because their suppression also decreased cell numbers in SF126-tet-TON cells after doxycycline treatment (no R175H expression). Of the remaining 1,319 genes, 906 had validated gene symbols with p-values <0.05. Among these, we selected 50 genes (21 from the group with the smallest p-values, 20 from the group with the largest fold-change, and 9 reproduced by different siRNA sequences) for further validation testing (Table I).

Suppression of candidate genes by siRNA in p53 R175H-expressing cell lines and TP53-null cell lines. To investigate whether suppression of the candidate genes by siRNA resulted in p53 R175H-dependent inhibition of cell growth, candidate gene siRNAs were transfected into cell lines expressing endogenous p53 R175H (SKBr3, LS123, HCC1395, Detroit 562 and VMRC-LCD) and TP53-null cell lines (PC3, H1299, SK-N-MC and Calu-1). We obtained the ratio of cell growth inhibition of candidate siRNA-transfected cells relative to negative control siRNA-transfected cells on day 4. Among the 50 candidate genes, suppression of GYPC, NUP98, GP6, EFNA4 and ID1 by siRNA significantly decreased the number of p53 R175H-expressing cells compared with TP53-null cells (t-test) (Table II). The dependence of the cell growth inhibition resulting from suppression of the candidate genes on p53 R175H expression was then examined (Fig. 2A and B). ID1 suppression and p53 R175H overexpression did not influence the other protein expression levels (Fig. 2C). These results suggest that p53 R175H expression and ID1 suppression cooperate to cause cell growth inhibition.

Cell growth inhibition by ID1 and/or TP53 suppression in cells endogenously expressing p53 R175H, in wt p53 cells and in TP53-null cells.
To determine whether the cell growth inhibition is rescued by suppression of both the candidate genes and p53 R175H, siRNAs targeting the candidate genes and TP53 were transfected into SKBr3, a p53 R175H-expressing cell line. Downregulation of p53 R175H rescued the cell growth inhibition caused by ID1 suppression (Fig. 3A), but not that caused by GYPC, NUP98, GP6 or EFNA4 suppression (data not shown). To exclude off-target effects of the siRNAs, additional siRNAs for ID1 and TP53 targeting different sites (ID1-2 and TP53-2) were transfected into SKBr3, and the results reproduced those of the original siRNAs (Fig. 3B and C). Moreover, similar results were observed only in cell lines expressing p53 R175H (LS123, Fig. 3D, and HCC1395, Fig. 3E), but not in wt p53 (HCT116, Fig. 3F) or TP53-null (PC3, Fig. 3G) cells. The quantity of Id1 protein in SKBr3 was not altered by p53 R175H (Fig. 3H), as with transient p53 R175H expression. These results support the finding that the cell growth inhibition caused by ID1 suppression is accelerated by p53 R175H.

Suppression of ID1 in cell lines expressing another common mutant p53 (R273H). To examine whether the cell growth inhibition caused by ID1 suppression is accelerated specifically by p53 R175H expression, another common p53 mutant (R273H) was expressed in the PC3 cell line (TP53-null). Unlike p53 R175H, p53 R273H expression did not accelerate the cell growth inhibition caused by ID1 suppression (Fig. 4A). Furthermore, the cell growth inhibition caused by ID1 suppression was not restored by simultaneous suppression of TP53 in HT-29 cells expressing endogenous p53 R273H (Fig. 4B). Similar results were observed in SW480 cells expressing the endogenous p53 R273H/P309S double mutant (Fig. 4C). These results indicate that the growth inhibition induced by ID1 suppression may be accelerated by p53 R175H expression in a specific manner.

Cell cycle analysis under ID1 suppression and ID1/TP53 double suppression. To examine whether ID1 and/or TP53 suppression changes the proportions of cell cycle phases, FACS analysis was performed in SKBr3 cells. ID1 suppression did not change the sub-G1 fraction but significantly decreased the S phase fraction and increased the G1 phase fraction (Fig. 5A). ID1/TP53 double suppression significantly restored the proportions of the S phase and G1 phase fractions. These results suggest that p53 R175H potentiates the G1 arrest caused by ID1 suppression. In HCT116 (wild-type p53) and PC3 (TP53-null) cells, ID1 suppression increased the G1 phase fraction and decreased the S phase fraction. However, unlike in SKBr3 cells, ID1/TP53 double suppression did not restore the proportions of the S phase and G1 phase fractions (Fig. 5B and C). These results suggest that ID1 suppression induces G1 arrest and that the arrest is specifically accelerated by p53 R175H expression.

Discussion
We identified ID1 as a synthetic sick/lethal gene that causes cell growth inhibition in the presence of p53 R175H. Id1 is a member of the helix-loop-helix protein family, is expressed in actively proliferating cells, and regulates gene transcription by heterodimerization with basic helix-loop-helix (bHLH) transcription factors (25). Homodimers of bHLH transcription factors activate differentiation, whereas heterodimers composed of Id1 and a bHLH transcription factor have an attenuated ability to bind DNA and consequently inhibit cell differentiation (26). Supporting this finding, stable Id1 expression was found to block B cell maturation (27).
Moreover, Id1 can inhibit the differentiation of muscle and myeloid cells by associating in vivo with E2A proteins (28,29). It has also been reported that Id1 is immunohistochemically expressed in the majority of non-small cell lung cancer (NSCLC) samples (30). Furthermore, Id1 protein expression in prostate cancer cells mediated resistance to apoptosis induced by TNFα (31). These lines of evidence also indicate that ID1 may play an essential role in carcinogenesis. In the present study, we demonstrated that ID1 suppression resulted in cell growth inhibition independent of TP53 status. However, the cell growth inhibition caused by ID1 suppression was accelerated specifically by the p53 R175H mutant protein. If the accelerated cell growth inhibition were attributable only to a loss of function in p53 R175H, this phenomenon should also be observed in TP53-null cells and in cells expressing loss-of-function mutations other than p53 R175H. Some p53 mutant proteins acquire additional functions, called gain-of-function (32). For example, ectopic expression of p53 R175H resulted in the transactivation of genes that are not usually activated by wild-type p53 (33)(34)(35). On the basis of these observations, we concluded that the acceleration of cell growth inhibition was likely attributable to a gain-of-function of p53 R175H. To date, synthetic sickness/lethality has been classified into 2 types based on the initial genetic event: the first is attributable to a loss-of-function mutation in a target gene, and the second to a gain-of-function or activating mutation in a target gene. For example, the synthetic lethal interaction between loss-of-function mutations in the BRCA1 and BRCA2 genes and PARP inhibition (8) is of the former type, and that between gain-of-function mutations in the KRAS gene and STK33 inhibition (6) is of the latter type. Based on our results, it is clear that the synthetic sick/lethal interaction between p53 R175H expression and ID1 suppression is of the latter type. However, there is a clear difference between the activated KRAS-STK33 interaction and the p53 R175H-Id1 interaction. The gain-of-function of activated K-ras depends on STK33 and is therefore blocked by STK33 suppression. By contrast, the accelerated cell growth inhibition observed here cannot be explained only by blockade of the gain-of-function of p53 R175H through ID1 suppression; rather, p53 R175H is necessary for the accelerated growth inhibition caused by ID1 suppression. Taken together, these results suggest that the synthetic sickness/lethality of p53 R175H with ID1 suppression may act through a gain-of-function mechanism distinct from those previously identified. Since neither expression nor suppression of p53 R175H had an effect on the amount of Id1 protein, p53 R175H may promote synthetic sickness/lethality in cooperation with downstream factor(s) that are altered by ID1 suppression. The precise molecular mechanisms of the synthetic sickness/lethality of ID1 suppression and p53 R175H expression remain to be elucidated. In conclusion, Id1 and its associated signaling pathway constitute a molecular target in cancer cells expressing the common p53 mutant R175H.
Mechanical properties of new functional composite materials based on polymeric binders

Composite materials comprising components (binder and fillers) that provide the required technological and operational characteristics were investigated. AlN, Al(OH)3, SiO2 and CaSiO3 powders were used as fillers. The binder was dimethylsiloxane rubber SKTN A + PMS silicone oil in a 4:1 proportion. The studied composite materials are intended for creating dielectric coating materials that have high thermal conductivity and do not support combustion (through the use of fillers with flame-retardant properties). The results of experimental studies of the mechanical characteristics of composite materials based on a siloxane binder and fillers in the form of fine AlN, Al(OH)3, SiO2 and CaSiO3 powders are presented. The values of strength, coefficient of elasticity and relative elongation in tensile tests, the modulus of elasticity in compression tests, and Shore hardness were measured.

Research purpose
The purpose of this research was the experimental study of the mechanical properties of composite materials based on a siloxane binder and mineral fillers in the form of fine powders, as a function of the filler weight content in the composition. AlN, Al(OH)3, SiO2 and CaSiO3 powders were used as fillers. The selection of fillers is related to their properties, such as high volume electrical resistivity, high thermal conductivity (AlN), incombustibility (Al(OH)3), high strength (SiO2), and good adhesion, which allows the filler to be used for creating anticorrosion and sealing coatings (CaSiO3). The selection of fillers is also related to the possibility of combining two fillers, which allows the functional properties to be optimized for specific application conditions. The properties of the powder fillers have been studied in detail and described in the literature [1-5]. The properties of composite materials based on polymeric binders and powder fillers have also been studied and are summarized as new compositions appear [6-8]. The necessity of experimental study is related to the fact that in the range of filler weight content from 20% to 80%, that is, in the "phase reversal" region, a theoretical description of the properties is problematic. The main composite material properties are mechanical, thermophysical and electrical; special properties include incombustibility, chemical durability, humidity resistance, etc. The results of thermal conductivity studies of the materials considered in this article are given in our separate works [9,10].

Research methods and samples
The mechanical characteristics of the samples were investigated on a MEGEON 03000 testing machine, and hardness was measured with a portable Shore TS-C hardness tester (scales A, D). The mechanical properties studied were strength (tensile strength under tension) and deformation characteristics (Young's modulus in compression, Shore hardness, coefficient of elasticity in tension). Young's modulus E is determined from the formula

E = F·L0 / (S0·ΔL),

where F is the tensile/compression force applied to the sample, N; S0 is the cross-sectional area of the sample, m²; L0 is the initial length of the sample, m; and ΔL is the change of sample length under tension (+) or compression (−), m. The relative expanded uncertainty U0.95 of the measurement of E is determined by the accuracy of the measuring tools used for the input values and amounts to 0.6% for samples of typical geometric dimensions. Tensile strength P [MPa] is determined from the formula P = F/S0; the relative expanded uncertainty U0.95 of the measurement of P for samples of typical geometric dimensions is 0.9%. The spring rate in tension is determined from the formula k = F/ΔL; the relative expanded uncertainty U0.95 of the measurement of k is 0.6% for samples of typical geometric dimensions.
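For illustration, the three formulas above can be evaluated directly from a tensile test record, as sketched below; the function names and the example numbers (chosen to be plausible for a highly elastic composite) are ours.

```python
# Minimal sketch of the quantities defined above, computed from a
# tensile test record; variable names and example numbers are ours.
def youngs_modulus(force_n: float, area_m2: float,
                   length0_m: float, delta_l_m: float) -> float:
    """E = F*L0 / (S0*dL), in Pa."""
    return force_n * length0_m / (area_m2 * delta_l_m)

def tensile_strength(force_n: float, area_m2: float) -> float:
    """P = F/S0, in Pa."""
    return force_n / area_m2

def spring_rate(force_n: float, delta_l_m: float) -> float:
    """k = F/dL, in N/m."""
    return force_n / delta_l_m

# Flat tensile sample 25 mm x 2 mm, L0 = 40 mm (example values):
S0 = 25e-3 * 2e-3            # cross-section, m^2
F, dL = 60.0, 4e-3           # force (N) and elongation (m)
print(youngs_modulus(F, S0, 40e-3, dL) / 1e6, "MPa")  # 12.0 MPa
print(tensile_strength(F, S0) / 1e6, "MPa")           # 1.2 MPa
print(spring_rate(F, dL), "N/m")                      # 15000.0 N/m
```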
The studies were conducted in line with the requirements of the regulatory documents for mechanical tests [13-16]. Samples for the tensile test were flat plates 40 mm long, up to 25 mm wide (determined by the grips of the tension testing machine) and 1-3 mm thick. For clamping in the grips of the testing machine, thickenings (pads) were provided at the ends of the samples. Samples for the compression test were cylinders 20-30 mm in diameter (determined by the diameter of the machine's upper movable platform) and 30-50 mm long. According to recommendations, it is desirable that the length of the sample be less than three diameters; this avoids bending of the sample during compression. At the same time, in samples that are too short, friction forces at the "sample end-platform surface" contact lead to non-uniform deformation: as the sample is compressed, it becomes barrel-shaped. Lubricants such as paraffin are used to reduce this friction; in our case there was no need for this, since the polymer binder (matrix) of the test samples has a low coefficient of friction in contact with metal. The materials studied in the present research belong to the highly elastic class: compressive loads are small, and the deformations for which there are no noticeable deviations from Hooke's law are significant. This distinguishes them from hard and very hard materials. The compression diagram of such highly elastic materials differs fundamentally from the compression diagrams of solid materials (there is no creep or cold-flow region, etc.); in addition, after testing, the sample length is the same as before testing, i.e., there is no residual deformation or plastic deformation region. In the compression tests, it was taken into account that at large deformations there is a significant increase in the compressive resistance due to the increase in the cross-section of the compressed sample. At small sample heights (less than 25 mm), the formation of a "barrel" was noted: the friction force between the ends of the sample and the surface of the platform did not allow the sample to compress evenly. For estimates of the modulus of elasticity, such deformation of the sample was not allowed.

Results
The results of the measurements of the strength and elasticity characteristics of the investigated compositions are shown in Table 1. In all cases, the binder was dimethylsiloxane rubber SKTN A + PMS silicone oil in a 4:1 proportion. Additional studies conducted on several batches of samples with Al(OH)3 powder showed that changes in the mass content of the filler, accompanied by changes in the density of the composition of ±20%, lead to corresponding changes in tensile strength of ±25%. This underlines the importance of controlling the stability of the technological processes used to obtain compositions with the desired properties.

Conclusion
Materials with different fillers were studied in the "phase reversal" region (from 20 to 70 wt.%), where a theoretically accurate prediction of the characteristic values is impossible. In general, according to the obtained strength and elasticity characteristics, the materials are classified as highly elastic, with a strength level typical of similar materials. An increase in the filler content in all cases led to increases in tensile strength, Young's modulus in compression and the coefficient of elasticity in tension.
In almost all cases, the dependence of the characteristics on the filler content is not linear but rather exponential. The spread of the characteristic values, related to the inhomogeneity of the materials both within one batch of samples and across samples from several batches, substantially depends on the accuracy with which the desired composition is obtained. Therefore, to ensure the stability of the material properties, it is necessary to control the stability and reproducibility of the technological process by which the compositions are made.
Prevalence of tinnitus in workers exposed to noise and organophosphates

Summary
Introduction: Research on the workplace has emphasized the effects of noise exposure on workers' hearing but has not considered the effects of agrochemicals. Aim: To evaluate and correlate the hearing level and tinnitus of workers exposed simultaneously to noise and organophosphates in their workplace, and to measure the distress caused by tinnitus on their quality of life. Method: A retrospective clinical study. We evaluated 82 organophosphate sprinklers from the São Paulo State Regional Superintendence who were active in the fight against dengue and who were exposed to noise and organophosphates. We performed pure tone audiometry and applied the translated THI (Tinnitus Handicap Inventory) questionnaire. Results: Of the sample, 28.05% reported current tinnitus or had presented tinnitus, and the workers with tinnitus had an increased incidence of abnormal audiometry. The average hearing threshold in the 4-8 kHz frequency range of workers with current tinnitus was higher than that of the others and was most affected at the 4-kHz frequency. The THI score ranged from 0 to 84, with an average score of 13.1. Twelve (52.17%) workers had THI scores consistent with a slight handicap. Conclusion: There is an increased incidence of abnormal pure tone audiometry in workers with tinnitus, and the impact of tinnitus on the workers' quality of life was slight. The correlation between average hearing threshold and tinnitus distress was weak.

Introduction
Occupational exposure to agrochemicals has increased in recent decades, given the need to manage disease vectors that are difficult to control and that are responsible for the onset of diseases. Chemical control of such vectors is one of the methods used by government agencies to prevent the spread of epidemics such as dengue, yellow fever, Chagas disease, and leishmaniasis, among others; agrochemical sprinklers are the professionals responsible for implementing this measure (1). The configuration of the hearing loss caused by industrial chemicals such as agrochemicals can be very similar to that observed for ototoxic drugs such as aminoglycosides and cisplatin, and to that related to noise. The usual descriptors of these disorders are very similar: sensorineural hearing loss at 3-6 kHz, with lesions mainly of cochlear hair cells, bilateral, symmetrical and irreversible (2). In the literature, studies on the effects of agrochemical exposure on hearing are very rare (3). Therefore, further research is necessary to improve understanding of the combined effects of noise and chemicals on hearing. A greater understanding of the effects of combined exposure would allow the development of more effective strategies for preventing hearing loss (4). There is evidence that chronic exposure to agrochemicals induces peripheral and central auditory damage; in cases of combined exposure, noise is a factor that interacts with the agrochemicals, increasing their ototoxic effects, especially at the peripheral level (3). Combined exposure to noise and chemicals produces significantly greater hearing loss than exposure to a single agent; a synergistic effect is observed in combined exposure (5,6,7). A cross-sectional prevalence study conducted with 98 agrochemical sprinklers who worked in campaigns to prevent dengue, yellow fever, and Chagas disease aimed to estimate the incidence of hearing loss in this population.
There was a 63.8% prevalence of hearing loss in workers exposed to agrochemicals only and a 66.7% prevalence in those with simultaneous exposure to agrochemicals and noise. The level of hearing loss and the extent of the affected frequency range were greater in the combined exposure group (8). A high prevalence of auditory and vestibular complaints was observed in a study performed with 50 rural workers exposed to organophosphates, suggesting that these substances can affect both systems through their ototoxic and neurotoxic actions. The authors found that 54% of the workers presented tinnitus (9). Among the effects associated with hearing loss, tinnitus deserves emphasis: apart from causing problems in the workplace, it also has a negative impact on the quality of life of workers and those around them (10). Chronic tinnitus (called "ringing") is a very common audiological symptom (affecting 5-15% of the population) characterized by an auditory perception unrelated to any physical source (11). Tinnitus is often an early warning of excessive sound exposure and may indicate increased susceptibility to noise damage; it is therefore an important symptom in the prevention of noise-induced hearing loss and one of the main predictive factors of handicap in workers exposed to noise (12). Excessive exposure to noise is a major risk factor for hearing loss and tinnitus, followed by age and gender (13). Tinnitus (ringing or roaring in the ears) is a highly non-specific symptom affecting a considerable proportion of the adult population; it is often, but not always, associated with hearing loss of variable degree, and can frequently be considered an expression of cochlear disorder (14). As there is no objective method for detecting the presence of tinnitus, nor for determining the severity of the symptoms, the use of questionnaires to assess patients with tinnitus is essential (15). These questionnaires, which assess functional effects, are composed of several items that measure the impact of tinnitus on various aspects of daily life (16). According to some authors, their use ensures greater reliability in the evaluation of tinnitus in comparison with other methods (17). One of these questionnaires is the Tinnitus Handicap Inventory (THI) proposed by Newman et al. (13) and later translated to Portuguese (18). The THI (19) was selected because of its reliability, supported by its high internal consistency (13,20). Its application is easy, fast (approximately 5 minutes), and reproducible (copyright is not reserved) (13). This study aimed to evaluate and correlate the hearing level and tinnitus of workers exposed simultaneously to noise and organophosphates in their workplace, and to measure the impact of tinnitus distress on their quality of life.

METHOD This study was developed in the Department of Audiology of the Center of Studies of Education and Health, College of Philosophy and Sciences, UNESP, Marília, and was authorized by the institution's Ethics and Research Committee (protocol number 0179/2010). All participants signed a consent form agreeing to participate. Ten years ago, the center developed a partnership with the Superintendence of Endemic Disease Control (SUCEN), the local authority linked to the São Paulo State Health Secretariat, with the purpose of performing annual audiological evaluations of organophosphate sprinklers in this region. As their main task, these workers carry out vector control using chemical products, namely organophosphates (1).
To combat endemic disease vectors, including those of dengue and yellow fever, they use Malathion®, an organophosphate that is known to be toxic to humans and carcinogenic to animals (8). They use a backpack (costal) motor sprayer 3-4 hours a day, which emits noise at an equivalent level of 98.5 dB(A) (1). The study was conducted with 82 male workers aged between 30 and 59 years who had performed this function for periods ranging 1-24 years (mean, 15 years). The data for this study were collected from May to August 2010. The study was performed with SUCEN workers active in the fight against dengue who were exposed to noise and organophosphates. Exclusion criteria were alterations on otoscopic inspection that prevented procedures from being performed, presence of conductive or mixed hearing loss (21), type B tympanograms (23), or failure to complete any of the procedures. We then carried out a basic audiological evaluation composed of pure tone audiometry and tympanometry. The pure tone audiometry was performed in a sound-isolated booth using a GSI-61 audiometer (Grason-Stadler) with TDH-50 supra-aural earphones. The clinical pure tone threshold was tested in the frequency range of 0.25-8 kHz. When the threshold found was 25 dB or greater, we performed bone conduction testing in the 0.5-4-kHz frequency range. The examinations were performed after at least 14 hours of auditory rest. The audiograms were classified based on Ordinance 19 of the Ministry of Labor (22). Given the preventive nature of this technical standard, subjects whose audiogram revealed a hearing threshold of 25 dB HL or less at all frequencies evaluated are considered to be within normal limits. Based on the recommendation of this ordinance, average pure tone thresholds at 0.5, 1, and 2 kHz and averages at 3, 4, and 6 kHz were used for the audiogram analysis (22). We used a GSI-38 (Grason-Stadler) with a 226-Hz low-frequency probe tone to perform the tympanometry. After sealing the ear canal, we carried out the tympanometry, a dynamic measurement of acoustic impedance that verifies the mobility of the tympanic-ossicular system. The results were analyzed and classified based on Jerger (23) to apply the exclusion criteria. Statistical analysis examined the relation between two variables, the average pure tone thresholds of the right and left ears and the extent of tinnitus handicap as classified by Newman et al. (13), using Spearman's rank correlation coefficient in STATISTICA version 7.0. The level of significance was set at 0.05. Correlation is a measure of the relation among 2 or more variables. Correlation coefficients can range from -1 to +1. A value of -1 represents a perfect negative correlation and +1 represents a perfect positive correlation. The value of 0 represents no correlation.

RESULTS To analyze the results, the sample was divided into 2 groups according to the presence or absence of tinnitus. Group I was composed of 23 (28.05%) workers with a mean age of 47 years who complained of tinnitus, and group II was composed of 59 (71.95%) workers with a mean age of 45 years and without tinnitus. The relation between tinnitus and hearing loss (Table 2) showed that workers with tinnitus had a higher incidence of abnormal pure tone audiometry (60.87%). We analyzed the average thresholds of pure tone audiometry for group I (Figure 1) and observed that the thresholds in the 4-8-kHz frequency range were higher than those at the other frequencies, and were most affected at the 4-kHz frequency.
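To make the analysis described in the Method concrete, the sketch below shows how the Ordinance 19 pure tone averages, the 25 dB HL normal-limits criterion, and the Spearman correlation between average thresholds and THI scores could be computed. This is our illustration rather than the authors' actual analysis pipeline (which used STATISTICA); the data, the function names, and the use of SciPy's spearmanr are assumptions.

```python
# Illustrative sketch only: Ordinance 19 pure tone averages, the 25 dB HL
# normal-limits criterion, and Spearman's correlation with THI scores.
from scipy.stats import spearmanr

def pure_tone_averages(thresholds):
    """thresholds: dict mapping frequency (kHz) to threshold (dB HL)."""
    low = sum(thresholds[f] for f in (0.5, 1, 2)) / 3   # 0.5-, 1-, 2-kHz average
    high = sum(thresholds[f] for f in (3, 4, 6)) / 3    # 3-, 4-, 6-kHz average
    return low, high

def within_normal_limits(thresholds):
    """Normal per Ordinance 19: 25 dB HL or less at all frequencies tested."""
    return all(t <= 25 for t in thresholds.values())

# Hypothetical per-worker data: average threshold (dB HL) and THI total score.
avg_threshold = [12, 30, 45, 18, 25]
thi_score = [4, 38, 20, 16, 10]
rho, p_value = spearmanr(avg_threshold, thi_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```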
While investigating the tinnitus handicap in group I, the THI score ranged 0-84, with an average score of 13.1. Twelve (52.17%) workers had THI scores consistent with discrete handicap (Figure 2). Analysis of the relation between the variables right and left ear average threshold and tinnitus distress measured by the THI was conducted using Spearman's rank correlation coefficient. The right and left ear average thresholds and the THI score revealed a weak positive correlation, and this tendency was statistically significant (Table 3).

DISCUSSION Tinnitus is a prevalent problem that remains poorly understood by health professionals. It is a global problem that affects millions of people (24). In the United States, approximately 50 million adults have tinnitus, and the general prevalence of this symptom in the country is 25.3%. The prevalence of persistent tinnitus is higher among older adults (peak of incidence between 60 and 69 years), non-Hispanic whites, ex-smokers, and hypertensive individuals with hearing loss, exposure to loud sounds, or generalized anxiety disorder (25). This study showed that 28.05% of the workers evaluated had tinnitus. This rate is similar to that described by authors who conducted studies with the same population (6,8,25,26), but is below that described in farmers exposed to organophosphates (9,27). A study verified a significant increase in the probability of individuals with hearing loss at high frequencies (3, 4, 6, and 8 kHz) or at low-medium frequencies (0.5, 1, and 2 kHz) having persistent tinnitus when compared to individuals without hearing loss. In individuals with low-medium- or high-frequency hearing loss, noise exposure has been associated with a greater chance of developing persistent tinnitus (25). A cross-sectional prevalence study, conducted with 98 agrochemical sprinklers who worked in campaigns to prevent dengue, yellow fever, and Chagas disease, aimed to estimate the prevalence of hearing loss in this population. There was a 63.8% prevalence of hearing loss in workers exposed to agrochemicals only and a 66.7% prevalence in those exposed simultaneously to agrochemicals and noise (8). A Program of Tinnitus Evaluation and Rehabilitation of a tertiary hospital in Singapore evaluated 327 patients, verifying that the majority (82.6%) had hearing loss and that it was bilateral in 74% at frequencies between 3 and 8 kHz (24). In this study, we observed that the average hearing thresholds in workers with hearing loss at the frequencies between 4 and 8 kHz were higher than those at the other frequencies, and that the 4-kHz hearing threshold was the most severely affected. A previous study in a similar population reported the presence of hearing loss at frequencies between 3 and 8 kHz (6,26). Another study observed hearing loss in the frequency range between 2 and 8 kHz; average hearing loss values increased across the 2-6-kHz frequencies and decreased at 8 kHz when compared with 6 kHz (8). The audiological findings of hearing loss caused by occupational chemical exposure do not differ from noise-induced hearing loss in terms of audiometric configuration. Perhaps this practically identical configuration explains why this important issue has been neglected for many years. The greatest damage usually occurs at 4 kHz, and the higher and lower frequencies are affected more slowly than those within the 3-6-kHz range (4).
In a study of workers exposed to several types of agrochemicals, including organophosphates, 2 groups of 42 men were formed (a group composed of agricultural workers with at least 15 years' experience and another composed of workers without agrochemical exposure and without a history of hearing loss). The results showed that 60% of the workers exposed to agrochemicals had abnormal hearing thresholds, and that 23 of them had bilateral sensorineural hearing loss. The workers with abnormal hearing thresholds exhibited decreased hearing in the 3-6-kHz frequency range; however, alterations were also observed at frequencies of 1, 2, and 8 kHz (27). Tinnitus is a highly non-specific symptom affecting a considerable part of the adult population; it is often, but not always, associated with hearing loss of variable degree, and can frequently be considered an expression of cochlear disorder (14). Tinnitus may accompany outer hair cell damage, which can be very limited and can occur in individuals with a normal audiometric threshold (11). Several studies have demonstrated a link between hearing loss and tinnitus. However, there has been no systematic evaluation of the link between perceived tinnitus distress and underlying hearing loss. The underlying hearing loss can be a significant factor in the perception of the distress (29). Patient distress is a very subjective symptom and frequently depends on external and psychological factors beyond the negativity attributed to the tinnitus itself (28). The use of an instrument to assess the quality of life of individuals with tinnitus is crucial to choosing better treatment and monitoring. The inclusion of psychometrically robust self-report measures of perceived activity limitation/participation restriction in clinical protocols will continue to prove invaluable in audiological, otologic, and neurotologic clinical practice (30). This study investigated the tinnitus handicap; the THI score ranged 0-84, with an average score of 13.1. Twelve (52.17%) workers had THI scores consistent with discrete handicap. The low handicap found in these workers can be explained by the fact that their tinnitus was intermittent. A study with similar results reported that the average THI score was 12.3; regarding classification, tinnitus in 73.3% of the individuals was insignificant, while that in 20% was mild and in 6.7% moderate. The author attributed these findings to the fact that the data were not selected from a specialized medical service, nor had the subjects received any kind of tinnitus treatment. Thus, the author affirmed that the individuals of the study presented low distress caused by tinnitus and demonstrated their ability to change the focus of attention in their daily activities (31). In a study conducted in a Program of Tinnitus Evaluation and Rehabilitation, researchers applied the THI and found that the score in 33% of patients was compatible with no handicap, while that in 31% was mild, 18% moderate, and 19% severe (24). Another study using the THI observed that the scores for all subjects ranged 0-88 (standard deviation, 20.0), and the authors reported that both hearing loss and tinnitus were "impairments". Individuals with tinnitus may suffer several degrees of distress, and this may have a higher or lower impact on quality of life. Two important factors are related to tinnitus and should be differentiated: the intensity of the tinnitus signal and the severity of the symptom (the distress it causes to the patient's life).
The present study agreed with another study that the THI total score can serve as a robust measure of tinnitus distress (32). The correlation between the right and left ear average thresholds and tinnitus distress on quality of life, using the THI score, revealed a weak positive correlation; there was a tendency for threshold values to increase along with the THI scores, and this tendency was statistically significant. One study showed that the relationship between the THI score and the hearing threshold of the better ear was weak (32). In the literature, there is no agreement on the link between THI score and hearing threshold. Some studies have demonstrated that there is no correlation between tinnitus severity and hearing loss (24,28), but others have confirmed this relationship (33,34). The correlation between degree of hearing loss and tinnitus distress is related to the way the patient copes with his tinnitus, rather than to any physical or anatomical measure (28).

CONCLUSIONS This study concluded that there is an increased incidence of abnormal pure tone audiometry in workers with tinnitus who are exposed to noise and organophosphates; the hearing thresholds between 4 and 8 kHz were higher than those at other frequencies, and the 4-kHz frequency was the most affected. The impact of tinnitus on the workers' quality of life was discrete. The correlation between the average hearing threshold and tinnitus distress was weak, and there was a tendency for threshold values to increase along with THI scores.
2016-05-04T20:20:58.661Z
2012-07-01T00:00:00.000
{ "year": 2012, "sha1": "198829a79ceaf7ca4bc07cc83e8c2cb03e5baaec", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.7162/S1809-97772012000300005.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "259c9ed25ce0d0d2c549b62156a2db0ab1d40cb0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10077349
pes2o/s2orc
v3-fos-license
Mental Models of Online Privacy: Structural Properties with Cognitive Maps Individuals usually build small-scale representations of reality to help them navigate their environment. Although mental models have been used in HCI before, they mostly occur as analogies and metaphors within the privacy and security research space. The meaning of online privacy for users, the values associated with it, and reasoning over it have not been investigated before. In our research we explore and depict users' mental models of online privacy through their content, properties, and structure. We believe mental models provide a framework for understanding user cognitive processing and reasoning and, consequently, privacy decision-making. In this paper we present an on-going study that uses Amazon's Mechanical Turk and the cognitive mapping technique to elicit and illustrate mental models. We compare the cognitive maps generated for two different questions and analyse their structural properties. We find that while a list of concrete privacy evaluations populates the cognitive maps when participants are asked directly about privacy, such examples are generally scarce, if not absent, when they are queried about the personal importance of the online environment. We also find that the degree of vertices, complemented with the source and sink vertices, can help to identify key concepts, triggering links and clusters within the maps.

INTRODUCTION Online privacy designs vary among categories of privacy-by-policy and privacy-by-architecture or privacy-by-design approaches (Spiekermann and Cranor, 2009). Although legally and technically sound, these approaches are far from being effective, as illustrated by the dichotomy between privacy attitudes and behaviour (Spiekermann et al., 2001; Acquisti and Grossklags, 2005). We seek to explain this phenomenon by hypothesising that the design does not correspond to users' cognition in privacy decision-making. In particular, perception, interpretation and evaluation of privacy decisions may exhibit cognitive associations not taken into account in the design for privacy. By providing a window into human cognition, mental models provide us with a gateway to investigate cognitive associations. In addition, they enrich our ability to communicate with users in a manner that tunes into their mental models and activates privacy attitude associations. Furthermore, examination of the content of users' privacy mental models, of the associations between concepts, and of their properties, such as the proximity and similarity between clusters, will provide key insights that support effective and usable design interventions. This paper first provides the background research, followed by presentation of a study conducted as part of our endeavour to develop users' mental models of online privacy. The study aims to answer a research question: How do different framings of questions affect users' mental models of privacy? We present the methodology, followed by an analysis of the structure of the cognitive maps developed. We then discuss our findings and provide our future research directions.
BACKGROUND In this section we introduce the difficulties in eliciting the cognitive dimensions of privacy. A brief overview of mental models and their use in privacy research follows. The section ends with a note on the cognitive mapping technique. Prior studies have typically elicited privacy perceptions and concerns, gathered self-reports of privacy behaviour, or observed behaviour under laboratory design settings (Spiekermann et al., 2001; Acquisti and Grossklags, 2005). Elicitation of privacy perception and concern is a difficult task due to its sensitivity to priming effects. The methodology poses the risk of triggering cognitive associations, processing and activation of mental models, leading to responses that might not usually arise in everyday interactions. This might explain the dichotomy observed by studies of privacy concerns and behaviour. Indirect approaches are thus often used, such as eliciting perceived risks of online interactions (Miyazaki and Fernandez, 2001) or disclosure decisions in specific contexts (Spiekermann et al., 2001).

Mental Models Mental models are internalised, mental representations of a device or idea that facilitate reasoning (Johnson-Laird, 1983). They are simplistic and small-scale representations of reality (Craik, 1943). Mental models are valuable because they are the lenses through which individuals see and interact with the world. The lens shapes how individuals interpret the world. Thus, by conjecture, mental models would comprise our attitudes, beliefs, opinions, theories, perceptions, mental maps of how things are or should be, and frames of reference. Mental models vary with user expertise and experience. Experts' mental models are richer and more abstract than those of novices. Novices' models represent more concrete levels of knowledge and have a more naive problem representation, as they present objects in real time (Larkin, 1983). Compared to novices, experts use chunking strategies to represent problems, thus helping in problem representation (Chi et al., 1981; Chase and Simon, 1973). It is also thought that users build and use models to guide the way they learn and interact with computers. Mental models enable users to predict and explain the operation of a target system (Norman, 1983). By interacting with systems, users formulate mental models of the system that need not be technically accurate but are functional, that is, the model can be 'run' and works within a certain scenario. Since users improve their models with experience, mental models are often incomplete and partial descriptions of the operations of the system. However, the mental model uncertainty principle, that is, that mental models are not directly accessible or observable, poses the inherent problem of representing mental models (Richardson et al., 1994).
Mental Models of Privacy Mental models therefore promise a valuable framework to facilitate investigation. They do so by enabling illustration of conceptual relationships that hold semantic information, which would portray users' cognitive processing and reasoning. Mental models have been associated with privacy and security research before through analogies and metaphors. These include 'situational faces' (S. Lederer et al., 2003), 'audience-view' (Richter-Lipford et al., 2008), card-based metaphors (Wastlund et al., 2012), the physical security model (Raja et al., 2011) and modeling of security risks (Camp, 2009). These involve areas of application such as security warnings (Bravo-Lillo et al., 2011; Diesner et al., 2005) including firewalls (Raja et al., 2011), mobile security (Lin et al., 2012), end-to-end email security as a means of e-mail protection (Renaud et al., 2014) and anonymous credentials (Wastlund et al., 2012). Recently there have also been proposals to elicit user mental models of security and privacy (Volkamer and Renaud, 2013; Coopamootoo and Groß, 2014).

Cognitive Mapping Cognitive maps can be regarded as expressions of mental models, and cognitive mapping as the task of mapping a person's thinking about a problem or issue. It is a technique used to structure, analyse and make sense of accounts of problems that can be verbal or written. The cognitive map has a long history; the idea was originally coined to depict mental representations of the routes and paths of the environment used by people and rats (Tolman, 1948). However, Axelrod (1976) used it as a 'map of cognition' while Eden later used it to refer to a map 'to aid cognition' (Eden, 1992). Axelrod's map of cognition has been used in artificial intelligence (Kosko, 1986) and experimental research such as system dynamics (Doyle and Ford, 1998). In our research, we also use cognitive maps as originally referred to by Axelrod. An agreed-upon cognitive mapping methodology is not yet available across research domains (Vennix, 1990).

OUR APPROACH In this section, we present a study aimed at eliciting and developing user privacy mental models. We present our design followed by an analysis of the structural properties of the models.

Design Our main research question is: What do user mental models of online privacy consist of? Given that user reports of privacy are primed by the framing and design of studies, we postulate that the mental models elicited are likely to suffer from such biases. The research question leads to subordinate questions such as: How do different framings of questions affect users' mental models of privacy? Our on-going research includes elicitation of privacy mental models. Given that the presence of the researcher might affect the responses provided, we opted to start the research with Amazon's Mechanical Turk for its accessible subjects. We are conducting between-subject studies with questions requiring responses of 100 to 250 words. Our questions were:
• Q1 - What does privacy online mean to you?
• Q2 - What do you usually use the internet for? What is important to you when you are online?
The first two questions are chosen since they are believed to lie at opposite ends of the direct-to-indirect spectrum. As a test case for our methodology, we collected and analysed the data of five participants for each of these. We aim to extend the study with these questions framed differently, such as positive and negative framings, or with questions about sharing rather than privacy; for instance, we recently launched another question: Q3 - What does sharing online mean to you?

Elicitation Process The free-text responses collected via Mechanical Turk are first collated. We then use CMapTools (http://cmap.ihmc.us) to develop cognitive maps made up of a hierarchy of concepts connected to each other via directional links, as shown in Figure 1 and Figure 2. The elicitation process is as follows:
• each response is divided into distinct phrases of no more than 10-12 words (possibly much shorter),
• statements within each phrase, such as a subject concept A exercising an 'action' on an object concept B, are identified,
• relations are classified, such as ontology ('is a', 'includes'), constraint (restriction of the application of the concept), a cause-effect 'action' between the concepts, or negation thereof,
• relative clauses associated with a concept already consumed are translated by duplicating the concept and forming a separate phrase,
• if concept A does not exist in the concept set, a vertex labelled with concept A is created,
• if concept B does not exist in the concept set, a vertex labelled with concept B is created,
• an arrow from A to B labelled with the identified relation is created, typed with the relation type.

Definition 1 (Cognitive Map) A cognitive map is a directed, possibly cyclic, vertex-labeled and edge-typed/-labeled multi-graph. The vertices are labeled with distinct concepts. The arrows depict thought processes for a person, with links or associations from one concept, the source, to another, the sink. An arrow is derived from a one-to-one mapping of a phrase to a concept relation. The directed associations could encode cause/effect or means/ends but are not limited to these.

Structural Analysis We first look at the shape of the maps. The different questions give different structures:
• maps for Q1 have a hierarchical structure pointing towards/from the main concept 'privacy online' and often linking to three clear subordinate but important concepts: the person, personal information or data, and other people who can be authorised or not. These link to concrete examples, making a three-level graph on average, as shown by Figure 1.
• for Q2, three maps had a shallower hierarchy leading to the superordinate concept 'person' from information or types of activities, often also leading to the concept of 'friends' or social connections. Therefore the maps show the different activities for which the person uses the online environment. Each of these three maps has one to three longer links that show who the person shares specific information with and the benefits of obtaining information on the internet.
Second, we identify the sink and source vertices of each map (Tables 1 and 2). Third, we look at the degrees of vertices, which refers to the number of direct links (both input and output). Tables 3 and 4 list the concepts that received a degree of at least 3 for each participant of Q1 and Q2.
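Definition 1 and the structural measures above translate directly into a graph representation. The following minimal sketch is our own illustration rather than part of the study: the concept labels are hypothetical and the use of the networkx library is an assumption. It builds a small cognitive map as a directed, edge-labeled multigraph and derives the vertex degrees, sources and sinks used in the analysis.

```python
# Illustrative sketch: a cognitive map per Definition 1 as a directed,
# edge-labeled multigraph, plus the structural measures used above.
import networkx as nx

G = nx.MultiDiGraph()

# Hypothetical phrases from a Q1 response, each mapped to (source, relation, sink).
phrases = [
    ("privacy online", "includes", "personal information"),  # ontology relation
    ("person", "controls", "personal information"),          # cause-effect 'action'
    ("personal information", "shared with", "people"),
    ("people", "can be", "authorised"),
    ("people", "can be", "unauthorised"),
]
for source, relation, sink in phrases:
    G.add_edge(source, sink, label=relation)

# Degree of a vertex: the number of direct links, both input and output.
degrees = dict(G.degree())
key_concepts = [v for v, d in degrees.items() if d >= 3]  # as in Tables 3 and 4

# Source vertices have no incoming arrows; sink vertices have no outgoing arrows.
sources = [v for v in G if G.in_degree(v) == 0]
sinks = [v for v in G if G.out_degree(v) == 0]

print("key concepts:", key_concepts)  # ['personal information', 'people']
print("sources:", sources)            # ['privacy online', 'person']
print("sinks:", sinks)                # ['authorised', 'unauthorised']
```

On this toy map, 'personal information' and 'people' carry the highest degrees, while 'privacy online' and 'person' act as sources and the evaluative concepts act as sinks, mirroring the pattern reported for the Q1 maps.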
DISCUSSION Sink and source vertices, together with the degree of vertices, point to important concepts or clusters for participants. These might be an indication of triggers that activate more elaborate mental models of privacy or enable privacy attitude evaluation. Concepts leading to multiple sink vertices might indicate their strength. Reachability of concepts and cycles might give further indication of the users' thought processes. It would however be interesting to find out whether the part of the mental model triggered links to approve or reject decisions and behaviour. For instance, it appears from Table 1 and Table 2 that we are able to identify sink and source vertices. Table 3 shows the high importance of 'personal information' or 'data', the 'person', other 'people' and 'privacy online'; Table 4 shows the prominence of the concept 'person' and much of the social benefit of the online environment through 'friends', 'social reach' and 'shopping'. However, the high-degree concepts for participant 1 of Q2 include 'unknown' and 'known'. This corroborates Table 2, where the same participant produced sink vertices including 'favourable', 'trustworthy', 'careful' and 'bank information'. Participant 4 has fewer risk-related concepts but mentions 'sensitive information' without being prompted about privacy, and Table 2 identifies 'hackers' and 'banking sites' among the sink vertices. The shape of the graphs, together with the lengths of arguments, can be an indication of the participants' cognitive ability with respect to the question. However, the shape can be influenced by thoughts and ideas that are more salient at the time of participating in the study, or can be induced by the type of questioning. For instance, Q1 included 'What does ... mean to you?' whereas Q2 was 'What is ...'. This might contribute to Q2's generally shallow maps associated with activities. Further analysis and evidence are required to corroborate these findings across types of maps; establishing whether a particular user belongs to a segment would depend on the consistency of the maps. Also, given the 'mental model uncertainty principle' (Richardson et al., 1994), different elicitation approaches might lead to different results. In addition, given the questionable stability of mental models over time, techniques to ascertain stability akin to those in trait theory would be valuable for the research. Our cognitive mapping methodology has not been validated yet, nor have we assessed whether other methods would be more suitable and reliable. Our research agenda includes developing a rigorous, systematic and reproducible methodology and conducting the study with a larger sample. More extensive research is also needed to eliminate potential confounding explanations and investigate the multitude of framing possibilities. In addition, the quality, readability and complexity of the questions and participants' cultural backgrounds are important confounds to the mental models derived.
CONCLUSION AND FUTURE WORK In this paper we present initial results of on-going research aimed at depicting user mental models of online privacy. Our study was aimed at assessing cognitive maps produced from different framings of an elicitation question. We conclude that the way the question is designed influences the structural properties of the mental models gathered. In addition, we find that the methodology presents the potential to contribute towards identifying users' privacy inclinations and their cognitive ability with respect to privacy. For instance, we posit that the degree of vertices, complemented with the source and sink vertices, can help to identify the key concepts leading to most associations, thus potentially helping to categorise privacy concerns.

Our future work first includes expansion of our study across a larger user pool, while investigating an array of different questions that could have privacy implications and hence generate privacy-related mental models. We aim to facilitate this by developing a structured method of eliciting, analysing and mapping users' cognitive maps. We consider comparing results of cognitive mapping with other approaches such as repertory grids or semiotic analysis. Second, the investigation will benefit from other methods of collecting user data, such as face-to-face interviews, drawing of concept maps, or observations. Analysis methods such as co-occurrence matrices, cognitive distance, cluster analysis, multidimensional scaling and hierarchical cluster analysis might shed more light on the cognitive maps depicted. Third, we aim to explore a series of subordinate research questions and hypotheses such as:
• whether and how mental models support behaviour,
• how cognitive ability influences privacy mental models,
• how cognitive effort impacts the activation of models and their size, shape and complexity,
• how the type of reasoning, such as inductive or deductive methods, influences privacy models.

Table 1: Sink and source vertices for Q1 for participants 1 to 5. Table 2: Sink and source vertices for Q2 for participants 1 to 5.
2016-01-15T18:20:01.362Z
2014-09-09T00:00:00.000
{ "year": 2014, "sha1": "938441be58279e3b67af4ca86b5de6334963254b", "oa_license": "CCBY", "oa_url": "https://www.scienceopen.com/document_file/8455f9a4-32cf-4aa3-b812-3dc7929201ef/ScienceOpen/287_Coopamootoo.pdf", "oa_status": "HYBRID", "pdf_src": "Grobid", "pdf_hash": "938441be58279e3b67af4ca86b5de6334963254b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225370311
pes2o/s2orc
v3-fos-license
Theorizing Through Literature Reviews: The Miner-Prospector Continuum While literature reviews play an increasingly important role in theory development, understanding of how they contribute to the process of theorizing is lacking. This article develops the metaphor of a miner-prospector continuum, which allows review scholars to identify approaches taken in literature reviews to develop theory. We identify eight strategies located on a continuum ranging from miners, who position their contributions within a bounded and established domain of study alongside other researchers, to prospectors, who are more likely to step outside disciplinary boundaries, introduce novel perspectives, and venture beyond knowledge silos. We explore the pathways between miner and prospector in terms of strategies followed, choices made, risks borne, and benefits gained. We identify the roles to be played by different stakeholders in balancing the mix between miners and prospectors. While respecting the need for both miner and prospector approaches, we suggest that collective efforts toward encouraging prospector reviews could assist management research in tackling, through reviews, the complex challenges facing organizations and society today. Using this eight-category miner-prospector continuum, we explore the implications for theory development within review papers, adding new insights regarding how theoretical contributions may be developed and articulating the risks and benefits of each. In so doing, we support authors in engaging in what Hoon and Baluch (2019) describe as "powerful theorizing" through review, while enabling them to envision where their work "sits" on the continuum between convention and novelty. The article is structured as follows. First, the miner-prospector metaphor is explored in detail. The different paths taken by authors of reviews who adopt respectively a miner or prospector approach are outlined, with consideration given to the choices made. Such articulation facilitates authors, when theorizing through literature reviews, in deciding on the positioning of a review contribution based on the relative opportunities of each approach along the miner-prospector continuum. Having explored the continuum between miner and prospector approaches, we provide examples of review papers under each of the miner-prospector categories. We acknowledge that some papers may contain elements of more than one categorization; however, we classify the examples in relation to what we understand to be the main contribution within each paper. We then draw on these literatures to exemplify specific strategies used by either miners or prospectors when developing theory through literature reviews. In the final section of the article, we identify roles to be played by different stakeholders in nurturing and developing researchers to follow these paths. We further outline three core features of a review (transparency, inclusivity, and criticality) which apply to all approaches across the miner-prospector continuum. We suggest that journal editors might disrupt publication norms through encouraging more innovative papers, as opposed to incremental, consensus-based research (Alvesson & Sandberg, 2013; Hoon & Baluch, 2019). While explicating the benefits of both mining and prospecting reviews, and reserving a place for miners to review rich seams of knowledge, we argue that more prospector reviews are needed within organization and management studies.
Without prospector reviews, "mines" of knowledge may become depleted, with new perspectives needed to address the challenges facing organizations today (G. Wood et al., 2018).

The Miner-Prospector Metaphor The miner-prospector metaphor has been drawn upon in other research traditions to illuminate differences between approaches that are more conventional versus those that are more original. For example, Nugent (2011), quoted at the start of this article, utilizes it to describe the different trajectories pursued by history scholars as regards their research careers. Similarly, Cozzo (1999) uses the metaphor to describe the contributions of pure philosophers, who introduce new (what he describes as "pre-theoretical") ideas to a field, and of logicians, who add rigor and theory to consolidate (or critique) existing ideas. We were introduced to the metaphor through a conversation with Howard Aldrich, University of North Carolina. Similar to Nugent (2011), our definition of the term envisions miners as working to plumb an existing seam of resource as deeply as possible, while prospectors pursue new avenues, "sinking or swimming on their own hunches" (Nugent, 2011, p. 209). In using metaphors to illuminate our arguments within this article, we pursue a path well established within management studies. Metaphors act as a powerful device for sensemaking, enabling better comprehension of complex scholarly and organizational phenomena (Hekkala et al., 2018; Weick, 1989). Making accessible concepts that may be hard to imagine or define because they are abstract or covert, metaphors facilitate evaluation through creating meaning (Kendall & Kendall, 1993; Morgan, 1980; Tsoukas, 1991). Metaphors may be illuminative, offering new ways of seeing and understanding hidden practices (Cornelissen, 2004; Cornelissen & Clarke, 2010). For example, the use of the metaphor "glass ceiling" has shed light on the invisible yet impenetrable barriers that impede women's career advancement and might otherwise have been obscured from view (Jackson & O'Callaghan, 2009; see also Powell & Butterfield, 1994). We acknowledge here that the miner and prospector reviewer approaches might not be mutually exclusive. Furthermore, while the two approaches could at first glance be viewed as a dualism of opposites, we observe in practice an important interrelationship between them (Putnam et al., 2016; Seo et al., 2004). We suggest that a more productive way of viewing miner and prospector approaches would be to treat these perspectives as a mutually constitutive duality in the sense described by Putnam et al. (2016): That is, rather than being necessarily at odds with one another, miner and prospector approaches could also be defined as "interdependent" (Putnam et al., 2016, p. 74; see also Farjoun, 2011; Seo et al., 2004). Our interpretation of interdependence implies that creative and apparently pretheoretical terrains uncovered through a process of prospecting might, in practice, emerge from landscapes mapped by the hard work of mining. Drawing upon adventurous discoveries found through prospecting, mining may in turn develop theory by contributing rigor and conceptual clarification to the new excavation. The dynamic interplay between mining and prospecting thus creates a tension and energy (Putnam et al., 2016), as each category exists through and influences the other, together offering potential for theorizing through literature reviews within organization and management studies.
The Miner's Path As Morris (2008) and Gilles (2014) have observed, both in relation to the actual practice of mining (Morris) and in metaphorical terms (Gilles), the goal of the miner is to extract from an existing mine (or scholarly field) sufficient material to make a living (or write a paper). Metaphorically, "miner" reviews seek to extract a distinct contribution relative to others working within the field and to position a literature review within a domain of study. At the beginning of the miner-reviewer's journey, a potential target is identified that fits both the miner's interests and capabilities. Within this target mine, there will be many scholars competing for resources within the crowded space, from well-seasoned and battle-hardened old-timers to novices finding their way (Rollag, 2004). The former will tell hero and war stories (which fill novices with both inspiration and trepidation; Nugent, 2011) of exploiting hard-won mining leases, muscling against each other (and the novices) for prize positions (Pickering et al., 2015; Rollag, 2004). Those who have invested most in the mine are more likely to aggressively defend their patch. As Francis Crick once noted, "The dangerous man is one with only one theory, because he'll fight to the death over it" (Burkhard, 2011).

The Miner's Choices The goal of the miner is to mine an unexploited section of the mineral seam or, in research terms, to fill an identified gap within an existing knowledge domain (Pickering & Byrne, 2014). In achieving this goal, miners have a number of important choices to make over time. First, they must choose the right mine and seam (or topic) to work on. Two questions need to be addressed here. Does the mine seam look profitable enough to work? Does the miner have the resources needed to extract mineral from it (Torraco, 2005)? Regarding the former question, one must gauge the potential value of a particular approach, theory, or paradigm to deliver contributions now and into the future (Pickering et al., 2015). There are clearly unknowns here. However, the miner might examine and then extend the exposed seam and thus extrapolate projections into the future. With regard to the second question, the miner needs to consider whether they have the experience and capabilities to first gain access to the seam and then carve out valued contributions (Pickering & Byrne, 2014). Second, the miner must locate themselves (and their review) relative to the other workers within the metaphorical mine and position their work and future contributions (Webster & Watson, 2002). This first involves the miner gaining an understanding of the field and of the contributions being made by others within the same working group. They then need to become part of the group of miners working that seam. This involves learning the language of the group and positioning themselves within the hierarchy of coworkers (Torraco, 2005). Some coworkers are more senior and demand the respect of more junior colleagues (Ylijoki & Henriksson, 2017). In this manner, the miner learns to fit into the working patterns of those within the group, where each contributes to the collective effort of mining that particular seam (Nugent, 2011; Ylijoki & Henriksson, 2017). Third, the miner needs to exploit their stake and carve out their contribution. As noted above, they must work carefully at this, with the views of other coworkers in mind (Pickering & Byrne, 2014).
Too hasty an advance might encroach upon the work of another, with the threat of isolation or retaliation. If the work is too clumsy, it might undermine the foundations of neighbors, or potentially the integrity of the entire seam. Carving out the contribution involves seeing how one's contribution fits with that of other coworkers. It is thus a codependent process, as the individual miner works alongside others to carefully advance the seam together. Ultimately, the miner seeks to make incremental steps (Pickering et al., 2015), working alongside others as they collectively extract value from the seam.

The Miner's Approach to Managing Risks The miner seeks to manage risks by following the insights of others in an uncertain world. However, the process is not without risks. First, by choosing a particular mine and seam within it, miners put all their eggs in a single metaphorical basket, adopting a textbook silo approach. On the one hand, this allows miners to put boundaries on the knowledge domain within which they work. If the mine seam is extensive, then they can secure a steady stream of future earnings (or publications) by theorizing and adding rigor to such a domain (Cozzo, 1999). On the other hand, putting boundaries on knowledge constrains their ability to move outside the seam, as they invest within extant disciplines. As they become increasingly invested in working the seam, miners come to view the world in the same way, speak the same language, and use the same tools as their coworkers. Among purist miners, however, this worldview runs the risk of becoming obsolete if the seam runs dry and they must search for new sources of value in the world beyond. Moreover, if the market changes and the minerals they mine no longer have value in a changing environment, all their efforts could come to nothing in an unknown future world.

The Prospector's Path The prospector has a different calling to the miner. The prospector does not follow a predetermined path to knowledge acquisition but seeks one that is less trodden (Cozzo, 1999). As they move through the research landscape, prospectors pay attention and are open to what may unfold. This "wayfaring" scholar prepares for the journey with "a backpack of tentative interests and ideas, and a commitment to the craft or art of inquiry" (Cunliffe, 2018b). Prospectors use imagination to make less obvious connections, leading to new insights, and bringing together ideas that may at first seem unconnected (Cunliffe, 2018b). By following this path, the prospector hopes to "strike it rich" and discover the next big thing (Nugent, 2011, p. 209). In some cases, prospectors (both in scholarly terms and in practice) might discover marginal mines, with limited seams of mineral (Gilles, 2014). They live in hope that they will find a larger, productive mine, which has many future seams of mineral of varying quality extending beyond the core source (Nugent, 2011). As the source of new ideas, prospectors can lay claim to future work in their area, divvying out licenses to future miners, all of whom will pay dividends (in the form of citations) back to the finder (Cozzo, 1999). The prospector thus seeks to shape the direction of research or search for new knowledge domains. The prospector avoids established hierarchies in existing mines, and follows their sense of adventure into unknown worlds beyond (Cozzo, 1999; Nugent, 2011). As Simsek et al.
(2015) note, "While some opportunities are of the 'low-hanging fruit' variety, others call for creative and courageous efforts to explore topics of unknown variety with a substantial risk of dead ends and empty hands but with potential to rejuvenate and enlighten the entire landscape." The Prospector's Choices The prospector seeks not to work solely within an established mine but to search for "new and unexpected" mines (Anderson & Thomas, 2014). The prospector's explorations are not completely random, but calculated. Anderson and Thomas (2014, p. 10), who use the metaphor "olden-day gold prospector" as a lens for exploring new ideas regarding metacognition, report how, in relation to the practice of gold mining, "The rock 'spoke' to the prospector and could provide telling indications as to whether they were near a potentially productive vein." The first choice faced by prospectors is thus the decision regarding where to search for future mines. Prospectors might be guided in their search by looking towards other mines. The location of these mines might hint at some wider pattern of seam within an unexplored area. This approach therefore involves stepping back and examining the location of mines within the wider environment and possible connections between these mines. On the other hand, prospectors might be guided by signs within the untouched hills themselves. Learning from prior prospectors, they search for signals of untold riches in the landscape itself (see Anderson & Thomas, 2014;Cozzo, 1999). While the prospecting author of a literature review does not seek to work within an established mine, they nonetheless need to identify which existing mines and/or prior prospectors (or scholars) to study in order to search for targets, as noted above. In this respect, they still need to position themselves relative to other domains. If they prospect too close to an existing mine, they may be accused of infringing an existing claim. If they prospect in disputed areas between more than one mine, they may equally be caught up in an ongoing conflict (Pfeffer, 1995;Van Maanen, 1995). If they prospect at too far a distance from existing mines, they may lack the resources and infrastructure needed to later exploit the mine (Toulmin, 1972). Once they locate the mine, they then need to move fast to lay claim to future returns and carve out their contribution. If they delay in their actions, or lack the resources or skills to capitalize on the find, they may be overtaken by other opportunistic prospectors, or even miners whose luck has run dry in an existing claim. The Prospector's Approach to Managing Risks Ultimately the prospector's path is more high risk than that of the miner, though there are possibilities that the return on investment may be greater (Nugent, 2011; see also Anderson & Thomas, 2014). In order to locate new sources, prospectors must take a broader perspective, and not be constrained by disciplinary boundaries. On the one hand, this allows the prospector to wander between mines, following a nomadic path (Alvesson et al., 2017). Unconstrained by domainspecific boundaries, prospectors can cross established boundaries, which in turn encourages the cross fertilization of ideas, opening up new paths and insights (Cozzo, 1999). On the other hand, the higher risks faced by the prospector need to be managed. They need to have both the nerve, and the resilience, to continue on this path. 
As they strike out into the unknown, prospectors may lack the social support of both the novice miners and the more established supporters who might be reviewing their papers, especially if prospectors seek to review a field which is new relative to other areas of concern (for example, information technologies: Webster & Watson, 2002). Given the unknown nature of their journey, prospectors cannot follow the paths and approaches taken by others (such as, for example, following the guidance proffered on producing a quantitative review, as advised by Pickering et al., 2015). With low chances of striking gold, they may wish to hedge their bets and prospect in a number of places at the same time. Ultimately, prospectors may be driven more by the risks and the search process itself than by the safety of the collective; "once the gold is found and miners start working," prospectors may move on to "explore new terrain" (Cozzo, 1999, p. 56).

From Mining to Prospecting While our view of the miner-prospector relationship might appear initially to present both approaches as a dichotomy, the paths chosen by academics might be more helpfully viewed as falling within a miner-prospector continuum (see Figure 1). Drawing on prior research, we have identified eight strategies consistent with miner or prospector paths, through which scholars seek to develop theory in review papers. These strategies are classified below as spotting (though not necessarily addressing) conceptual gaps, organizing and categorizing literatures, problematizing the literature, identifying and exposing contradictions, transferring theories across domains, developing analogies and metaphors across domains, blending and merging literatures across domains, and setting out "new" narratives and conceptualizations. Figure 1 shows how these different approaches to developing theory in literature reviews might be driven by either a miner or prospector orientation. Within each category (e.g., problematizing the literature), both miner and prospector approaches are possible. However, as one moves down the continuum, the approach taken tends toward a prospector orientation, and vice versa. The approaches are mutually constitutive, that is, each has potentially an influencing effect on the other. Miners dig deeply into new or "pretheoretical" ideas proposed by prospectors. Prospector reviews must base their creative leaps forward on the groundwork of miners, and prospecting reviews benefit subsequently from "mining" reviews, which build additional rigor and add theoretical clarification.
In order to illustrate the eight strategies on the miner-prospector continuum, each author worked, at first individually, to identify reviews that would best explicate each category. Each of us subsequently proposed between two and three reviews, which we shared and discussed between us. Following these discussions, we eventually converged on and selected a sample of 24 review papers (3 papers in each category) that we felt best illustrated the central thrust of each approach (see Figure 1). We recognize that each of these papers may touch on a number of approaches across and from both sides of the continuum. However, we have categorized papers according to their key contribution, the central aim, and scope. For instance, while Felin and Foss's (2009) paper highlights a gap in the "routines" literature with regard to microfoundations, its central focus is on problematizing the collective view taken by existing scholars; it thus fits into the "Problematizing the Literature" category (see Table 1). As with the miner and prospector paths described above, each of these strategies involves different degrees of risk for authors. However, the general thrust of our argument is that while miners immerse themselves in the mine, prospectors may open up the field, having discovered new and rich seams of scholarly gold.

Table 1. Illustrative review papers for each strategy on the miner-prospector continuum.
Spotting conceptual gaps: Smallwood & Schooler (2006); Macpherson & Holt (2007); M. S. Wood & McKelvie (2015)
Organizing and categorizing literatures: Becker (2004); Parmigiani & Howard-Grenville (2011); Turner (2014)
Problematizing the literature: Srivastava (2007); Scholz & Reydon (2013); Felin & Foss (2009)
Identifying and exposing contradictions: Vaghely & Julien (2010); Maon et al. (2019); Marlow (2006)
Transferring theories across domains: Phelps et al. (2007); Dionysiou & Tsoukas (2013); Zahra & George (2002)
Developing analogies and metaphors across domains: Simonton (1999); Özbilgin et al. (2011); Marquis & Tilcsik (2013)
Blending and merging literatures across domains: Santos et al. (2018); Cornelissen (2004); Argote & Guo (2016)
Setting out "new" narratives and conceptualizations: Rhodes & Pullen (2018); Bundy et al. (2017); Aldrich & Yang (2014)

Miners

By harvesting knowledge from within a defined domain, miners seek to identify unexploited gaps within seams of the mine, reorganize approaches taken to extract valuable knowledge within that mine, or increase the effectiveness of the mining operation. The purist miner focuses on one domain, where the boundaries of that domain are defined in a systematic fashion as noted above. Given this focus, papers tend to be more comprehensive in their literature search inclusion criteria when compared to prospector reviews, reviewing different streams and subthemes within a given literature (Denyer & Tranfield, 2009; Rowe, 2014). Miner papers also tend to present an in-depth and critical analysis of these streams (Webster & Watson, 2002), as they spot conceptual gaps, organize and categorize literatures, problematize the literature, and expose contradictions. At the point of "transferring theories across domains" we see authors stepping from the miner domain into prospector territory. As noted above, these approaches are not mutually exclusive.

Spotting Conceptual Gaps

In this strategy, "miner" authors seek to make conceptual contributions by completing literature reviews in which gaps are identified, future research agendas are articulated within a well-defined literature, and some observations regarding extant literatures are made (Alvesson & Sandberg, 2013). Junior scholars are advised to adopt the gap-spotting approach as an achievable means to getting published, partly on the basis that richer reviews with a narrative or discursive approach (see Baumeister & Leary, 1997; Green et al., 2006) are more accessible to experienced scholars with established knowledge in the field (Pickering & Byrne, 2014; Pickering et al., 2015). Some gap-spotting papers might lean toward gap-filling, extending beyond mere identification. For example, M. S. Wood and McKelvie (2015) make the case for research on opportunity discovery to also consider the underresearched phenomenon of opportunity evaluation. They argue that without this extension, the literature suffers from an "incompleteness problem" (Locke & Golden-Biddle, 1997).
Their review explores evaluation under a number of themes, including mental models, integration, congruence, and action orientation, thus pointing to important directions for future conceptualization, which complements the discovery literature (M. S. Wood & McKelvie, 2015). As gap-spotting approaches generally do not question the assumptions in a given literature in a substantive manner, it can be challenging to develop "interesting" and significant conceptual contributions (Sandberg & Alvesson, 2011). Through gap spotting, authors do not attempt to challenge existing views but instead seek to build directly on previous thinking and theorizations. In this respect the gap is recognized by researchers within the domain. Smallwood and Schooler (2006), for instance, position their review within the literature on controlled processing, making the case for a better understanding of the related phenomenon of mind wandering. They put forward a definition of mind wandering as "a situation in which executive control shifts away from a primary task to the processing of personal goals" (Smallwood & Schooler, 2006). They then consider the implications of mind wandering in relation to existing literature, focusing on methodological and theoretical possibilities associated with filling this void. Authors might develop theory by presenting synthesized coherence in their reviews and by arguing that researchers working in different areas are not aware of common points of similarity and intersection, thereby identifying underdeveloped research areas (Locke & Golden-Biddle, 1997). Macpherson and Holt (2007) extend theory on small firm growth in this way, tackling the underexplored area of knowledge and learning. They emphasize the importance of a firm's access to knowledge resources as it resolves a variety of growth challenges (Macpherson & Holt, 2007). These examples illustrate the consensus-building approach taken by authors, as they extend and build upon existing literatures to fill underresearched gaps. Of course, authors need to present their case for taking such an approach (Haveman et al., 2019), and not all readers will agree that all gaps need to be filled. By joining well-established research conversations, scholars enter crowded spaces, thus constraining the scope of any potential contribution (Patriotta, 2017).

Organizing and Categorizing Literatures

Here the miner-researchers seek to develop theory within a defined domain by organizing and categorizing a reviewed body of literature according to some dimension or framework, such as antecedents-process-outcomes. Generally speaking, as with the spotting conceptual gaps approach, such an organization of the literature confirms existing interpretations of researchers within the field (Oswick et al., 2011). As a result, this strategy again exemplifies the miner's approach. The takeaway for readers is a conceptual reorganization or framework, as opposed to a new set of explanatory concepts. This framework, however, may be the starting point for the subsequent development of theory (Strauss & Corbin, 1998). In his review of the literature on organizational routines, Becker (2004) highlights a number of characteristics and effects of routines, helping to identify shared themes within the literature and areas of consensus and growth. For example, he discerns consensus within the literature around the view that routines enable coordination, provide stability, economize on cognitive resources, and bind knowledge.
By drilling down into the different conceptualizations of the routine, Becker (2004) organizes the literature around these points of consensus and difference, shaping the future direction of research in this area. In a later review of routines, Parmigiani and Howard-Grenville (2011) make a further contribution by organizing and categorizing the routines literature into two broad camps of capabilities- and practice-based approaches. While not directly challenging the different approaches taken, their review helps demarcate the two camps, and highlights important differences in terms of foundations, levels of analysis, theoretical assumptions, and also areas of common interest. They thus point to ways in which both sides can inform research within the other, and ongoing challenges for both (Parmigiani & Howard-Grenville, 2011). By focusing on the growth of knowledge and consensus of views within a domain, such review papers can develop and focus lines of inquiry (Locke & Golden-Biddle, 1997). Furthermore, by positioning their contribution within a defined field (Oswick et al., 2011), reviewers who organize the literature seek to carefully fit into an established group of authors mining that seam of knowledge. For instance, Turner (2014) builds directly from Parmigiani and Howard-Grenville's earlier review, by focusing on the temporal dimension of routines within both the capabilities and practice perspectives. Using this earlier organizational scheme, Turner further categorizes the literature into temporal antecedents, outcomes, and evolution. Within each perspective, he discusses time as a signal for action, time and outcomes, and the evolution of routines over time. Turner then feeds these findings back into the capabilities-practice dichotomy, highlighting the external and internal dynamics of the capabilities- and practice-based views, respectively. In this manner, reviews that organize and categorize the literature can contribute to the growth and development of theory within a domain, helping to shape emergent themes and research streams. However, at times, such organizational activities can fail to highlight important limitations and contradictions within the domain as a whole.

Problematizing the Literature

In this category, authors seek to stimulate theory development by problematizing the literature within a given domain (Shepherd & Suddaby, 2017). Researchers thus review the current body of literature to identify a tension or opposition, which represents the starting point for novel theorizing (Suddaby et al., 2011). In this way, authors can show the literature is incomplete, inadequate, or incommensurable (Locke & Golden-Biddle, 1997; Rowe, 2014), and challenge taken-for-granted assumptions in an established literature (Nadkarni et al., 2018). For example, Srivastava (2007) problematizes the literature on green supply chain management. This highly cited paper argues the lack of a generalist frame of reference for green supply chain management. The author suggests that regulatory bodies, seeking to enhance growth of business and economy, require such a frame of reference in order to achieve results (Srivastava, 2007). Scholz and Reydon (2013) equally problematize the explanatory power of a Generalized Darwinist approach in studies of organizations. The authors argue that the approach is insufficiently clear about evolutionary processes in the social domain, aspects of their products, and the nature of evolving populations of organizations.
They therefore caution against transferring such metaphors between domains as distal as biology and social science (Scholz & Reydon, 2013). While a problematizing strategy might be seen to fall within the miner approach, the researcher is seeking to upset the status quo within an existing mine. This process can lead to opportunities to create new knowledge seams or capture existing ones, and/or to mine these using different approaches or techniques, thus generating new ways of understanding within a given area of concern (Alvesson & Sandberg, 2020). For example, Felin and Foss (2009) problematize the literature on routines, by arguing that it has overly focused attention on the collective level, ignoring important microfoundations. As a result, they argue it is difficult for the routine concept to explore the origins of the same phenomena. This creates the opportunity for new strands of research, as Felin and Foss (2009) make the case for routines research to renew its focus on the origins of routines, intentionality, and aggregation from micro to macro levels. Problematizing literatures can therefore redefine research directions or open up new seams of knowledge within a given domain. At the same time, in many cases the researcher does not question the validity of the overall mine (i.e., the bounded domain of knowledge), and by highlighting noncoherence, authors merely present different approaches as belonging to a common research program or goal but linked by disagreement (Locke & Golden-Biddle, 1997).

Identifying and Exposing Contradictions

Finally, here authors extend the problematizing approach and develop theory by challenging the theoretical foundations or implicit assumptions within a domain of interest (Alvesson & Sandberg, 2013; Suddaby et al., 2011). They might achieve this by setting up two competing views against each other and, in so doing, identifying similarities and differences between the two. For example, Vaghely and Julien (2010) expose contradictions in the literature on opportunity identification. Having first identified two broad camps within the literature, namely opportunity discovery and opportunity enactment, they then argue that neither camp has explored the process through which entrepreneurs actually identify opportunities. Taking an information processing view, they then integrate these two apparently opposite viewpoints (Vaghely & Julien, 2010). By identifying contradictions, scholars present a much greater challenge to the way things work within the mine, and to who holds the balance of power. Maon et al. (2019) review the literatures on corporate social responsibility (CSR) and its micro-level impacts. They propose an integrative framework to track problematic outcomes of CSR activities on internal and external stakeholders. Using a paradox-based perspective, this review reveals how contextual and personal contradictions can set off undesirable relational outcomes of CSR. The paper offers a research agenda for developing a better understanding of CSR-related tensions. By exposing contradictions, authors can construct a mystery, focusing on breakdowns and discrepancies between empirical material and prevailing theories (Alvesson & Kärreman, 2007). Equally, by setting up one approach against another through contrastive explanation, authors compare the explanatory power of current key constructs with alternative explanations (Suddaby et al., 2011).
Marlow (2006) exposes a contradiction in HRM research in small firms, exploring differences in how human resources are managed between small and large organizations. This exercise raises doubt over whether the former can be analyzed within an HRM framework (Marlow, 2006). Marlow critiques the conceptual standing of HRM by examining the way employment relationships in small firms have been analyzed, and concludes that given limitations in how HRM has been conceptualized, due to its focus on large firms, its application to small firms is not productive. Authors who highlight contradictions in this way present a threat to the status quo within existing mines, potentially shaping the foundations of theory within. By exposing fundamental contradictions, authors hope to rally support for their cause, and so increase their chances of grabbing extra seam space. However, such direct attacks can provoke defensive reactions from both readers and reviewers.

Prospectors

While miners seek to explore and exploit underresearched areas within a domain of knowledge, prospectors set their sights beyond existing mines. The prospector aims to identify new lines of inquiry across and between domains and disciplines, proposing new ideas for understanding organizational phenomena, as Cozzo (1999) describes with reference to philosophers (see also Nugent, 2011, with respect to historians). In this manner, prospecting authors use literature reviews to bridge across isolated silos of knowledge (Hoon & Baluch, 2019). As one moves along the miner-prospector continuum, contributions become increasingly less bound to prior assumptions and logic within a given literature (Barney, 2018), as "institutionalized lines of reasoning" are disrupted (Sandberg & Alvesson, 2011). Given the wider range of literatures included in prospector reviews (when compared to "pure" miner reviews), literature search inclusion criteria tend to be more selective within each of the domains from which articles are drawn (Denyer & Tranfield, 2009; Rowe, 2014). Furthermore, the critique of these literatures often occurs with respect to theories and approaches drawn from other disciplines (Webster & Watson, 2002), as authors transfer theories, develop analogies and metaphors, blend and merge literatures across domains, and set out "new" narratives and conceptualizations. As observed earlier, it should be noted these approaches are not mutually exclusive, and indeed scholars may pursue more than one simultaneously. The provision here of the miner-prospector continuum offers strategies that will enable authors to take informed decisions regarding where to place their review within the framework and to manage the risks and benefits accordingly.

Transferring Theories Across Domains

In this strategy authors seek to make a conceptual contribution by transferring theories between domains, or applying a theory from one domain to another (Nadkarni et al., 2018). The transfer here occurs largely at a substantive level or area of application (Glaser & Strauss, 1967), and as a result the approach does not challenge the underlying theory that is transferred. For example, Dionysiou and Tsoukas (2013) import the concept of symbolic interactionism (Mead, 1934) to conceptualize the process of routine formation. They thus use Mead's concept of "role taking" to develop an account of routine emergence, extending the ostensive-performative conceptualization put forward by practice scholars Feldman and Pentland (2003).
The process of transferring concepts and theories is motivated by the desire to apply established theories to a new empirical setting (Suddaby et al., 2011). This strategy can be viewed as the beginnings of a prospector approach, in that the scholar is moving away from one established mine and transferring techniques to another. Phelps et al. (2007), for instance, transfer concepts from absorptive capacity (Cohen & Levinthal, 1990) to develop a framework for small firm growth. By integrating absorptive capacity into a capability model, they suggest that firms are differentially able to acquire, assimilate, transform, and apply knowledge to navigate key growth tipping points (Phelps et al., 2007). They thus seek to shift the study of small firm growth away from life cycle models by proposing an alternative conceptual framework for the growing firm. The transfer of theories can therefore result in more significant conceptual insights and innovation and, with this, potential rewards over time. Zahra and George (2002) also apply the concept of absorptive capacity, reconceptualizing it as a dynamic capability pertaining to knowledge creation and utilization that enhances a firm's ability to gain and sustain a competitive advantage. In this process, Zahra and George introduce new insights into this literature by redefining absorptive capacity as a set of organizational routines and processes by which firms acquire, assimilate, transform, and exploit knowledge to produce a dynamic organizational capability. While the transfer approach can lead to such insights through the cross-fertilization of ideas, transferors tend to stick to and build upon the theoretical foundations developed by scholars in the source mine. For instance, Dionysiou and Tsoukas (2013) transfer the concept of role taking from symbolic interactionism (Mead, 1934) to rework the practice view interpretation of routines as noted above. In addition, such boundary-spanning research can be challenged by the disciplinary thinking in both source and target domains, limiting the potential for such cross-fertilization (Nadkarni et al., 2018).

Developing Analogies and Metaphors Across Domains

This transfer between domains occurs at a higher level of abstraction, through formal theory or grand narratives (Cornelissen, 2004). For example, Simonton (1999) draws on the metaphor of evolution to conceptualize a blind-variation and selective-retention model of creativity. Focusing on the mechanism of blind variation, he draws on experimental, psychometric, and historiometric literatures across the field to support his view that ideas mostly emerge from a blind-variation process (Simonton, 1999). Using the metaphor of evolution, Simonton thus presents prior literature through a new conceptual lens. In this manner, metaphors also involve the transfer of information from a source domain to a target domain (Tsoukas, 1991). While the author seeks to reveal a deep structure that exists between the two domains (Cornelissen, 2004), the similarity between them is less clear cut. As a result, this approach challenges established views within the target domain. Özbilgin et al. (2011) use the metaphor of a "blind spot" in this way, to capture the lack of connection between positivist and critical research on work-life balance. The authors argue the case for an intersectional approach, which draws upon lenses of diversity and intersectionality to show previously hidden practices among diverse working families.
Scholars who adopt metaphors could be regarded as now firmly within the prospector path, as they set out beyond existing mines to identify opportunities and patterns of discovery in others. Marquis and Tilcsik (2013) also use a biology metaphor (imprinting) to develop a multilevel theory of change and persistence in organizations. Drawing on core concepts from biology, they begin by defining the concept of imprinting in organizational terms, and then explore processes of imprinting at the levels of the individual, organization, and industry. In so doing, Marquis and Tilcsik shed new light on a range of literatures from early career formation to new venture creation. Cornelissen (2004) assesses metaphors through their aptness or meaningfulness (or whether they offer new insights into an unfamiliar field) and the "distance" between the domains. The greater the contextual distance between the two domains (e.g., biology and management), the better the prospects of the metaphor being insightful (Cornelissen, 2004; Morgan, 1980). However, as the strategy is largely one-directional, theoretical assumptions from the source domain are again not questioned. As a result, if these source foundations become discredited or redundant, then the basis of the prospectors' strategy likewise collapses, as argued by Scholz and Reydon (2013) above.

Blending and Merging of Literatures Across Domains

This strategy extends the borrowing of theories at a higher level of abstraction, by developing theory in both the source and target domains. An example of this is provided by Santos et al. (2018), who explore how far studies carried out on women entrepreneurs over the last four decades (within and outside of management studies) have impacted on theories of entrepreneurship. They bring into the business and management literatures insights from disciplinary areas related to women's studies, such as sociology and anthropology, and in so doing they reflect on how theories relating to women entrepreneurs have impacted on theories of entrepreneurship within a broader context. Blending in this way involves the projection of mental frames from two domains into a separate "blended" mental space (Cornelissen & Durand, 2012). Blending is thus a two-way correspondence involving meaningful engagement in both domains, producing new insights in both (Oswick et al., 2011; Schoeneborn et al., 2013). For example, Argote and Guo (2016) contribute to the literatures on routines and transactive memory systems (TMS) by comparing and contrasting literatures in both. They examine the dynamics of change within each literature, and then consider the potential reciprocal relationship between the two concepts. This results in new insights in both literatures, as they propose that on the one hand, a routine can seed a TMS, and on the other, a TMS can crystalize into a routine (Argote & Guo, 2016). By bringing these two literatures together, they thus seed ideas for future research in both. These blending prospectors thus straddle multiple mines to identify opportunities to make conceptual contributions in both and beyond. Cornelissen (2004), for example, reviews literature which has adopted the "organization as theater" metaphor. He shows that the emergent meaning structure of this metaphor cannot be explained or reduced to concepts from the source or target domains (i.e., theater and organizations, respectively).
He further argues that the blended structure from both domains can be translated back to input concepts to provide new conceptual insights (Cornelissen, 2004). The combination of theories across domains is complicated by the conceptual distance between the phenomena under examination and the underlying assumptions of each theoretical lens (Okhuysen & Bonardi, 2011). An additional challenge relates to the compatibility of lenses, or the degree to which the different theories "rely on similar or dissimilar individual decisionmaking processes, organizational mechanisms, or other properties in the development of their explanations" (Okhuysen & Bonardi, 2011, p. 7). If theories are too close together in terms of sharing compatible assumptions and addressing similar phenomena, they can struggle to show sufficient novelty to warrant publication (Suddaby et al., 2011). On the other hand, the greater the distance between domains and the more incompatible the underlying assumptions appear, the more difficult papers are to craft (Okhuysen & Bonardi, 2011; Suddaby et al., 2011).

Setting Out "New" Narratives and Conceptualizations

This final prospector strategy leaves the door open, so to speak, to possible new conceptualizations, not necessarily emanating from other disciplines, and with no precedent in any other field of study: what Cozzo (1999) might describe as "pretheoretical" ideas, opening up new pathways (or seams of gold) for scholarly investigation. These new narratives can side-step building on or challenging an existing literature (Sandberg & Alvesson, 2011). Rhodes and Pullen (2018), for instance, step into uncharted waters as they draw upon insights from feminist theory and political theology, articulating corporate business ethics as a public glorification of corporate power, based on a patriarchal conception of the corporation as deeply rooted in Christian ceremonial practices. Setting out new theoretical agendas for understanding the reasons for corporate adoption of business ethics, they balance their creative exploration of theory, which may destabilize the ethical glorification of the corporation and displace corporate masculinist privilege, with the requirement to shape their arguments so that their review may be located within a management studies context. In metaphorical terms, these prospectors are not guided by the experiences of former miners, but use their intuition and creative leaps to identify sources of new mines. In their review of crises and crisis management, Bundy et al. (2017) bridge across disciplinary silos, integrating literatures from strategic management, organization theory, organizational behavior, public relations, and corporate communication. In so doing, they create a framework which incorporates two perspectives: one internally focused on technical and structural aspects of a crisis and the other externally oriented toward managing stakeholder relationships (Bundy et al., 2017). Bundy et al. thus open up the possibility for new theoretical development within and across literatures, with their framework serving as a foundation for future multilevel research on crises and crisis management. Setting out new paths (or pure prospecting) is considered high risk, with many more misses than hits over time (Nugent, 2011). Authors might set out new directions based on practical rationality, by setting current theory against the actual practices of management (Suddaby et al., 2011).
Alternatively, researchers might use complex real-world problems as the starting point for theorizing beyond established domains of theoretical disciplines. Taking the complex life journeys of entrepreneurs as a starting point, Aldrich and Yang (2014) draw on a range of literature across the domains of routines, habits, and heuristics to argue that entrepreneurs acquire the knowledge they need to organize new businesses across their lifetimes. This multidisciplinary approach reflects the complexity of such career paths, as entrepreneurs acquire knowledge from family, schools, and work careers prior to the start-up stage, in addition to learning through the start-up process. Aldrich and Yang thus introduce a holistic life course model of selection and learning in nascent entrepreneurs, which spans multiple areas of research. Prospectors such as those mentioned here seek to break free of prevailing norms by writing differently, being more imaginative, experimental, dialogic, and reflexive (Gilmore et al., 2019). After all, writing in the social sciences is not just a matter of representation but of "imagination, originality, particularity, emotionality and expressiveness" (Rhodes, 2019, p. 27). However, given the novelty of their contributions, these prospectors might face, in practice, editor/reviewer criticisms that their reviews lack legitimacy within any camp. Thus, pure prospectors might experience compromised capacity to find the resources needed to support their ventures: it is not possible for a review paper to develop theory if it is never published. This strategy of prospecting for "new" scholarly gold therefore represents a high level of risk for authors as they pursue what may become lifelong projects.

From Literature Reviews to Theory Development

Reflecting on the range of review strategies taken by authors outlined above, it is important to define, regardless of where an approach might be located on the miner-prospector continuum, at what point a literature review becomes a theory paper and vice versa. Prospectors, after all, increasingly move into the unknown and away from established domains of knowledge. In relation to literature reviews, the need for contributions to surprise is particularly challenging (M. S. Davis, 1971). On the one hand, a review, by definition, involves researching, gathering, and combing through prior works to present the field in a new light, and/or spot previously unseen trends or gaps. On the other, the review needs to develop theory, diverging from, while at the same time aligning itself to, a field of study. Reviewing reconstructs an account of the field by re-presenting the literature and intervening in the literature (Gond et al., 2020). Thus, the International Journal of Management Reviews seeks papers which "make significant conceptual contributions, offering a strategic platform for new directions in research, and making a difference to how scholars might conceptualise research in their respective fields" (Gatrell & Breslin, 2017, p. 1). At what point, then, do literature reviews become theory papers, and what differentiates the two? We propose that (regardless of whether it mines a rich vein of scholarly knowledge or prospects within new terrain) a review paper can be differentiated from a theory paper in terms of its "systematicity" (Rowe, 2014; Tranfield et al., 2002).
Such systematicity is likely to include a number of elements, and at its core it requires theory development to be situated and contextualized within the evidence base provided by previous research.

First, the review should be transparent (Denyer & Tranfield, 2009), setting out how the authors identified, analyzed, and interpreted the literature (Denyer & Tranfield, 2009; Fink, 2010; Tranfield et al., 2003). While methods may differ, transparency allows the reader to understand the boundaries of the domain reviewed, and the process that has shaped the author's thinking. By being transparent, review authors are thus clear about the background to their work, and the assumptions made in the paper. It is acknowledged here that integrative reviews by experienced authors (published, for example, in Academy of Management Annals) might adopt a more narrative approach and would not necessarily include a methods section, but would nevertheless usually be expected to identify the specific fields they are reviewing, offering a clear sense of where these are located and how they relate to one another (see, e.g., Jaskiewicz et al., 2017).

Second, the inclusivity of the review should fit the goals of the paper (Denyer & Tranfield, 2009; Rowe, 2014). Inclusion allows reviewers to avoid a myopic selection of supportive scholars and works, which can strengthen the development of the paper's contribution. In this sense, one must look back in order to look forward. Literature reviews base their theorizing on the evidence of extant knowledge (Elsbach & van Knippenberg, 2018; Hoon & Baluch, 2019), and regardless of how it is presented, the review paper must be organized around a full review of evidence within a given field, as described by Elsbach and van Knippenberg (2018, p. 1; see also Elsbach & van Knippenberg, 2020). These editors (of Annals) describe how the journal's preference is for papers that develop new theory, but caution that papers that privilege theory at the expense of the review will not be accepted. Inclusivity furthermore helps to position the paper within the existing body of research, both in terms of motivating the work and in terms of reconciling contributions back into that literature. However, the more comprehensive this inclusion criterion, the more challenging it becomes to integrate the literature into a unifying framework or model (Rowe, 2014). Furthermore, the more the paper seeks to develop theory, the more the breadth of the supporting review becomes compromised (Jones & Gatrell, 2014; Kilduff, 2006).

Third, reviews have a critical aspect, as they interpret and analyze the literature (Blumberg et al., 2005; Jones & Gatrell, 2014; Webster & Watson, 2002) in order to identify biases and gaps and set out new research directions (Rowe, 2014). For example, the International Journal of Management Reviews argues that papers published should be "analytical" rather than "descriptive" (Jones & Gatrell, 2014). Success here rests in presenting an in-depth critical understanding of the extant knowledge base (Elsbach & van Knippenberg, 2018), allowing scholars to track irregularities and anomalies (Nadkarni et al., 2018). In this manner, the review should not present an "unsurprising overview" of the literature (Rowe, 2014) but provide the foundation for advancing knowledge by facilitating theory development (Webster & Watson, 2002).
A critical assessment of prior work can motivate the contribution and, in addition, create the building blocks for the development of theory, as the authors identify gaps, connections, or insights that are molded into a new contribution. The critical review thus sets out the departure point for future theorizing. We stop short of setting out "methods" for literature reviews in prescriptive terms (see Post et al., 2020, for ideas about how to write a review), but instead recognize that these three features characterize review papers from across the miner-prospector continuum. For example, Rhodes and Pullen (2018) are transparent both with regard to their prior knowledge of the literature and in relation to their inclusion of literature, drawn from key articles the authors were already familiar with and from a focused search for papers that question ethics in business. Rhodes and Pullen (2018, p. 489) then critically assess this literature to suggest that previous research did "not go far enough in interrogating the corporate enthusiasm for ethics," and build on this to argue that businesses have a hidden gendered substructure that seeks to glorify itself through ethics. Similarly, Dionysiou and Tsoukas (2013) set out the background to their review of the routines literature (i.e., transparency), highlighting the paucity of research on microfoundations and relational aspects of routines. Their review thus focuses on including studies of the performative perspective on routines. Their critique of this literature highlights gaps in understanding the internal dynamics of routines, arguing that past research had focused largely on established, and not emergent, routines (Dionysiou & Tsoukas, 2013). They build on this critique of the practice view (Feldman & Pentland, 2003) to make the case for a new conceptualization of routine emergence. Finally, Özbilgin et al. (2011) are transparent in describing the method used for, and motivation behind, their literature review. They set out to review the work-life literature with a view to addressing a narrow focus on traditional family structures, including both positivist and critical approaches to the areas of life, diversity, and power. Critically assessing this literature, they argue that previous theorizations are incomplete. Presenting an intersectional approach, they invoke a rethink of "the treatment of life, diversity and power in order to reconceptualize the work-life interface" (Özbilgin et al., 2011, p. 186). At times, and particularly as one moves toward the prospector end of the continuum, it can be difficult to differentiate between a review and a theory paper, because the latter are frequently also developed from a review of the literature (Kilduff, 2006). However, while review papers arrive at new conceptual insights through an integration of the evidence, in theory papers the emphasis is on the insights rather than on the integration of evidence (Elsbach & van Knippenberg, 2018). Furthermore, unlike literature reviews, a transparent, inclusive, and critical presentation of the field is not a necessary goal of theory papers. Instead, systematicity (with a small s), as outlined above, shapes the main purpose and body of a review and acts as the principal foundation for any theoretical contribution that is developed. Finally, while the above review has focused on the central strategy used within a paper, it is possible that elements of both miner and prospector approaches can exist within each review paper. Thus, a "miner" paper may contain elements of a prospector approach.
For example, Parmigiani and Howard-Grenville (2011) use the metaphor of the "black box" to differentiate between capabilities and practice approaches to studying routines. While the former approach assumes routines are enacted as designed, the latter seeks to open up processes within this "black box." Similarly, Turner's (2014) review has been classified above as a miner approach (i.e., organizing and categorizing literature). However, when discussing the temporal antecedents of routines, Turner transfers the notion of clock and event time (Ancona et al., 2001) to explore the implications of these different conceptualizations on the capabilities and practice camps. Equally, prospector reviews may contain elements of a miner approach within, identifying gaps and problematizing the literature before setting out new theoretical directions. For example, Bundy et al. (2017) first spot a conceptual gap in crises and crisis management, highlighting a lack of theoretical rigor, and thereby justify the need for developing a multidisciplinary approach. These examples highlight the opportunity of a mutually constitutive relationship or duality (Putnam et al., 2016) between miner and prospector approaches in the crafting of review papers.

Implications for Organization and Management Studies Research

In this article we have developed the miner-prospector continuum to examine the choices, risks, and implications for theory development through literature reviews associated with the various approaches located along its length. In so doing we enable authors to carefully position different choices and approaches within a context. We build on the work of others who show how theory development can occur in review papers (Hoon & Baluch, 2019; Kunisch et al., 2018; Post et al., 2020). Authors can choose a miner approach, adopting the norms of the discipline and carving out their contribution, while prospectors might choose to view existing literatures as a launch pad for future endeavors, challenging, disrupting, or circumventing established disciplinary norms and assumptions. In so doing we also help editors and reviewers to identify and articulate where a paper is positioned, explicating clearly where and how a paper might need to set the boundaries between familiarity and adventure. We have noted above the need to nurture novelty within organization and management studies research, increasing support for scholars to choose not only the "safer" miner approach to reviewing (Pickering et al., 2015), but also the perhaps riskier and more challenging prospector paths (Cozzo, 1999). However, we also acknowledge that while prospector reviews might, in the abstract, offer potential for making conceptual contributions through opening up new horizons (Cunliffe, 2018b), individual scholars might take a cautious view regarding the wisdom of pursuing a revisionist (or prospector) pathway. Such caution may relate to justified fears that reviewers and editors will be conservative in their views, resisting new ideas that undermine current beliefs or taken-for-granted assumptions (Bartunek et al., 2006; Cunliffe, 2018a; Patriotta, 2017; Starbuck, 2003).
Due to concerns about achieving sufficient publications to secure career advancement (Aguinis et al., 2020; Gabriel, 2010; Knights & Clarke, 2014), plus the received wisdom that incremental "miner" reviews will get published where more adventurous prospecting reviews might fail (see Pickering et al., 2015), scholars may repress prospector approaches in favor of tried and tested miner formulae. Authors may resist the lure of innovation and heterogeneity, delivering theoretical contributions as incremental revisions to established debate, rather than proposing radical change, in order to get reviews accepted for publication (Aguinis et al., 2020; Pickering & Byrne, 2014). Given the above-noted pressures for academics to follow incremental, low-risk research paths, how can our profession produce a more balanced mix of miners and prospectors? After all, the wider research process depends on a healthy supply of both. As discussed, organization and management research requires theories that reflect the complexity and multidisciplinary nature of our field, setting out new narratives and conceptualizations. Literature reviews have a key role to play in this process. Scholars require space to develop new ideas, a pretheoretical state where new pathways may be explored (Toulmin, 1972): a "scholarship of foresight, imagination and reflexivity" (Cunliffe, 2018b, p. 1431). At the same time, institutional forces act to uphold publication norms, resulting in a proliferation of miner strategies at the expense of prospectors. Below, we explore two key issues that could help to change this trend.

Nurturing Novelty

Reviews facilitate the identification and classification of extant research, yet also offer possibilities for challenging existing paradigms through proposing new theoretical paths and developing new conceptualizations (see Cunliffe, 2018b; Nadkarni et al., 2018; Suddaby et al., 2011). Different stakeholders have a role to play in changing the institutional environment to encourage such groundbreaking research paths. First, journal editors can seek to disrupt publication norms and to encourage more imaginative and innovative papers, in addition to incremental, consensus-based research. Editors might thus alter their publication criteria and editorial boards (Corbett et al., 2014; G. F. Davis, 2010): They have the remit to reposition their journals to develop prospector reviews, providing space where "ideas from different places [can] meet" (Burrell et al., 1994). These decisions likewise involve risk for editorial teams, with journal impact factors hinging on papers being both read and cited. It may thus take a brave prospector editor to take a proactive role in the peer review process, ensuring that "the demands of a broad agreement between referees, Associate Editors and Editors in Chief does not squeeze out work which is provocative, irritating or stylistically demanding" (Parker & Thomas, 2011, p. 426). In this manner, prospector editors can shape the peer review process by calling on reviewers who are supportive of prospector goals (Gilmore et al., 2019). Universities could create a climate that encourages scholars to be more reflective, through workshops focusing on questioning assumptions, as opposed to cultivating academics solely as paper authors (Alvesson & Sandberg, 2013).
Wider academic communities, such as the Academy of Management, also have a role to play in nurturing novelty, by both "incubating" blue-skies ideas for further development and encouraging prospector mind-sets (Renwick et al., 2019). Learned societies can nurture new concepts through discussion teams, special interest groups, and specialized journal fora. Efforts may include developing prospector talent through PhD mentoring programs. By providing incubation space, new ideas may "demonstrate their merits before being swamped in the larger population" (Toulmin, 1972, p. 294). Incentive policies might also be changed to reward innovative research, regardless of the stage of its development. Equally, workloads could be managed to support such approaches. Approaches that radicalize and challenge should be promoted, increasing the chances of research being disruptive and novel (Hoon & Baluch, 2019; Suddaby et al., 2011) and triggering new paths and revolutions. Such unconventional research often requires greater investments in time and risk (Corbett et al., 2014), and setting a one-size-fits-all incentive scheme might favor miners over prospectors. Such shifts in institutional strategies involve many risks and gambles, but without these moves, the future of institutions' research pipelines, and that of the wider organization and management field, becomes increasingly constrained.

Setting the Researcher's Path

Academics themselves also have a role to play in improving the mix of prospectors and miners through the choices and priorities that they make (Alvesson & Sandberg, 2013). In this sense, it is not just the "winners" in the publication game who are reluctant to change, but the "losers" as well (Alvesson et al., 2017). The latter play the miner's game in the hope that one day they will get space within the crowded mine (Nadkarni et al., 2018). However, many risk going away empty-handed (Alvesson et al., 2017). Therefore, while making a contribution to knowledge via the miner's path is often, in relation to literature reviews, presented to early career researchers as being of lower risk (see Pickering et al., 2015), the increasing demands of journals for contributions that are both novel and "interesting" heighten the risks that such strategies will result in rejections and publication dead-ends. While some academics are pressured to play a hard miner's game in the pursuit of tenure or the next promotion (see Knights & Clarke, 2014; Pickering et al., 2015), others (perhaps especially those fortunate enough to be tenured: Ylijoki & Henriksson, 2017) still enjoy relative academic freedom to follow research paths for which they have a passion. Given the long-term nature of the research process, it is important to choose and prioritize the questions that researchers "truly care about the most" (Corbett et al., 2014). As Rynes (2007, p. 1382) notes, researchers should ideally be given the chance to "commit to . . . ideas we care about rather than focusing on what our publications will do for our image, our compensation, or our careers." When writing literature reviews, authors might step back and reflect on the assumptions and norms prevalent within their domain of interest. They could actively seek to embrace conflict and disagreement within the literature, thereby revealing limitations and anomalies, problematizing the literature and sowing the seeds of new theory (Nadkarni et al., 2018).
Review authors may immerse themselves in domains not only adjacent to their fields of interest but also distal to them (Byron & Thatcher, 2016; Nadkarni et al., 2018) and consider transferring theories or applying analogies and metaphors in new ways. Scholars might prospect into the unknown, discard disciplinary blinkers, follow their intuition, and engage with problems in the world of practice (Kilduff, 2006; see also Hambrick, 2005). As a result, researchers themselves need to take on many of the risks associated with following both a miner's and a prospector's path. While the former may be perceived as lower in risk, the trends in publication noted above make it increasingly difficult to get such incremental research into the top-tier journals. Prospectors, on the other hand, face an uphill battle as they seek to build bridges between disciplines, potentially meeting in this process competing demands from defensive miner reviewers. Given the challenges of both approaches, it is perhaps advisable for researchers to develop (and universities to facilitate) a portfolio of projects spanning the miner-prospector continuum. Nadkarni et al. (2018) suggest scholars develop a portfolio including core, adjacent, and transformational projects. In this way, the scholar keeps in play a range of projects spanning the miner-prospector continuum, with the potential for each to influence the other and to coevolve over time, in a mutually constitutive manner. While individual academics may have a tendency to lean across their portfolio of work toward a miner or a prospector approach (Nugent, 2011), there is potential for a prospector in every miner and vice versa. Ultimately, researchers need to retain a focus on the goal and direction of longer-term projects, despite the threat of potentially slower career paths. The passion associated with following whichever path they choose is what drives and ultimately fulfills the researcher's calling. Indeed, prior research has shown that the stronger the perceived competence or self-efficacy of the researcher, the more likely they are to pursue a consensus-challenging research path and to bear the associated risks (McMullen & Shepherd, 2006).

Conclusion

Going forward, research in organization and management studies needs a balance between miner and prospector approaches, and literature reviews have a key role to play in this journey. We recognize that institutional forces push the research community further down the path of the miner. As a result, making a theoretical contribution via either the miner or prospector path could now be seen as laden with risk. We argue that all stakeholders, from institutions and editors to reviewers and researchers, have a role to play in redressing the balance. Yet while we recognize the risks and benefits of both the miner and the prospector approaches, we remain concerned that, on balance, the prospector path might seem riskier, meaning that both editors and authors might eschew the rockier path of the prospector journey. Institutions have a key role to play in nurturing novelty, "incubating" blue-skies ideas for further development through incentive and performance assessment policies (see also Aguinis et al., 2020). In the absence of these collective efforts toward prospecting, we argue that organization and management studies research will continue to meander down the path of normal science and fail to tackle the complex challenges facing organizations and society today (Renwick et al., 2019; Stern, 2016).
Our miner-prospector continuum takes a step in the direction of supporting such efforts. In classifying different stages within the continuum, it facilitates a range of potential contributions within review papers, from valuable and relevant mining reviews through to more adventurous prospecting approaches. By clarifying where reviews might be positioned along the eight-category miner-prospector continuum, we help authors (and editors and reviewers) to understand the risks and benefits of each approach, enabling proactive and strategic choices between tradition and new challenge.
Confidence Intervals for the Mean of Birnbaum-Saunders Distribution with Application to Wind Speed Data

Thailand is dealing with air pollution, particularly from small particulate matter (PM), which significantly impacts public health. Wind speed is pivotal in the dispersion of these particles. Due to its unpredictability, we are interested in estimating the confidence interval (CI) for mean wind speed data using a Birnbaum-Saunders (BS) distribution. We have constructed various intervals: the bootstrap confidence interval (BCI), percentile bootstrap confidence interval (PBCI), generalized confidence interval (GCI), Bayesian credible interval (BayCI), and highest posterior density interval (HPD). Using the R statistical software, a simulation study evaluated their coverage probabilities (CP) and average lengths (AL). GCI emerged as the most effective method overall. With increased sample size and shape parameters, these intervals displayed reduced average lengths. Applying these intervals to wind speed datasets in Nong Prue subdistrict, Chonburi province, Thailand, demonstrated their effectiveness.

INTRODUCTION

Thailand faces an annual challenge with particles that have a diameter of less than 2.5 micrometers (PM2.5), most noticeable from October to April due to still winds that trap dust. These particles originate from various sources like open burning, transportation, industry, and cross-border smog. Factors such as pressure, wind speed, rainfall, and temperature compound their impact, making it harder for these tiny particles to disperse, resulting in increased dust levels during the early months of the year. In 2023, PM2.5 levels surged past safe standards, peaking at 180 micrograms per cubic meter [6]. This situation significantly impacts the health of individuals, especially those at higher risk, including children, pregnant women, the elderly, and individuals with chronic conditions like asthma and other respiratory issues. Paneangtong et al. [20] indicated that individuals living near industrial estates are at a heightened risk of health problems due to exposure to dust and smoke compared to those residing farther away. Research by Ammuaylojaroen et al. [1] revealed the impact of wind speed on reducing PM2.5 levels, showing that increased wind speed correlates with decreased PM2.5 concentrations. Due to the unpredictable nature of wind speed, estimating it using a suitable method allows us to determine whether the wind speed in the future will increase or decrease. This, in turn, enables us to predict the potential changes in the amount of PM2.5 likely to occur in the future. Additionally, Mohammadi et al. [18] investigated and assessed the application of the two-parameter Birnbaum-Saunders (BS) distribution for examining the wind speed distribution in a long-term time series of recorded wind speed data collected from ten distinct stations. Their results emphasized the successful performance of the BS distribution across all ten stations. A CI holds more utility than a point estimator, as it offers a range of anticipated values. Therefore, there is interest in using this distribution to study the CI for the mean wind speed in industrial areas. This study will use daily average wind speed data collected from August to October 2023 in the Nong Prue subdistrict of Chonburi province, Thailand. Chonburi Province was selected due to its classification as an industrial area, leading to increased levels of PM2.5 that surpass standard limits.
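Before the interval constructions are detailed in the Methods, the simulation design mentioned in the abstract can be sketched in R. This is a minimal, illustrative sketch rather than the authors' code: ci_method is a hypothetical stand-in for any of the five interval constructions, the BS variates are generated through the standard normal transformation given later in the Methods, and the replication count M and nominal level are assumed settings.

# Minimal sketch (not the paper's code): estimate coverage probability (CP) and
# average length (AL) of a generic CI method for the BS mean.
# 'ci_method' is a hypothetical function: ci_method(x, level) -> c(lower, upper).
evaluate_ci <- function(ci_method, n, alpha, beta, M = 5000, level = 0.95) {
  mu <- beta * (1 + alpha^2 / 2)  # true mean of BS(alpha, beta)
  res <- replicate(M, {
    z <- rnorm(n)
    x <- beta * (alpha * z / 2 + sqrt((alpha * z / 2)^2 + 1))^2  # BS sample
    ci <- unname(ci_method(x, level))
    c(cover = as.numeric(ci[1] <= mu && mu <= ci[2]), len = ci[2] - ci[1])
  })
  c(CP = mean(res["cover", ]), AL = mean(res["len", ]))
}

A method is then judged by whether its CP stays close to the nominal level while keeping AL short as the sample size and shape parameter vary.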
Birnbaum and Saunders [27] were interested in finding a distribution that described how long a material specimen subjected to fatigue would last before failing. As a result, they developed the fatigue-life distribution, based on a model of the total time until the combined damage caused by the formation and growth of a primary crack exceeds a specific threshold, leading to the specimen's failure [29]. The BS distribution has since been widely studied and applied in various fields, including the earth and environmental sciences [9-11, 13, 21]. Although originally formulated to address material fatigue, Leiva et al. [12] extended its use as a suitable model for environmental data through the law of proportionate effect: pollutants spread or accumulate within a volume under the influence of environmental factors that move them from their initial location while conserving their original quantity. Several researchers have contributed parameter estimation methods for the BS distribution. Birnbaum and Saunders [28] introduced the maximum likelihood estimators (MLEs) for the parameters $(\alpha, \beta)$. Ng et al. [8] devised modified moment estimators (MMEs) and a straightforward bias-correction technique to improve the MLEs and MMEs. Jantakoon and Volodin [19] presented percentile bootstrap and generalized pivotal procedures that construct CIs for the shape and scale parameters of the BS distribution. Lu and Chang [14], in a different approach, formulated a bootstrap method for forecasting intervals related to the BS distribution. Wang et al. [25] considered Bayesian inference for the parameters of the BS distribution, basing their methodology on inverse-gamma (IG) priors and computing Bayesian estimates. Lastly, Paggard et al. [22] presented CIs for the variance and the difference of variances within the BS distribution.

We focus on statistical inference for the mean, or expected value, of a random variable. This value represents the long-run average of the random variable, obtained by integrating the variable against its probability distribution. Because the mean is such a widely used measure, our attention lies in constructing CIs to estimate the population mean. Thangjai et al. [23] presented CIs for the mean and the difference between means of two normal distributions with unknown coefficients of variation. Maneerat et al. [16] introduced Bayesian techniques for constructing highest posterior density (HPD) intervals for the mean and the difference between means of two delta-lognormal distributions. However, no available literature investigates the development of CIs for the mean of BS distributions; we therefore propose them here. We provide five approaches: the bootstrap confidence interval (BCI), the parametric bootstrap confidence interval (PBCI), the generalized confidence interval (GCI), the Bayesian credible interval (BayCI), and the highest posterior density interval (HPD). To demonstrate the effectiveness of the suggested methodologies, we also applied them to wind speed data from Nong Prue subdistrict, Chonburi province, Thailand, gathered from August to October 2023.

METHODS
A random variable $T$ is said to follow the two-parameter BS distribution with shape parameter $\alpha$ and scale parameter $\beta$ ($\alpha, \beta > 0$), denoted $T \sim \mathrm{BS}(\alpha, \beta)$, if its cumulative distribution function (c.d.f.) is

$F(t) = \Phi\left(\frac{1}{\alpha}\left[\sqrt{t/\beta} - \sqrt{\beta/t}\right]\right), \quad t > 0, \qquad (1)$

where $\Phi(\cdot)$ is the c.d.f. of the standard normal distribution, and the probability density function (p.d.f.) of the BS distribution can be written as

$f(t) = \frac{(\beta/t)^{1/2} + (\beta/t)^{3/2}}{2\alpha\beta}\,\phi\left(\frac{1}{\alpha}\left[\sqrt{t/\beta} - \sqrt{\beta/t}\right]\right), \qquad (2)$

with $\phi(\cdot)$ the standard normal p.d.f. A BS random variable can be generated from a normal random variable through this correspondence. The mean (expected value) and variance of $T$ are $E(T) = \beta\left(1 + \frac{\alpha^2}{2}\right)$ and $V(T) = (\alpha\beta)^2\left(1 + \frac{5\alpha^2}{4}\right)$, respectively. Therefore, the mean, denoted $\mu$, can be defined as

$\mu = \beta\left(1 + \frac{\alpha^2}{2}\right). \qquad (3)$
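As a concrete illustration of the normal correspondence above, the following minimal R sketch draws BS(α, β) variates through the transformation T = (β/4)(αZ + √((αZ)² + 4))² with Z ~ N(0,1), which inverts Equation (1); the function name rbs and the parameter values are ours, chosen for illustration, and are not taken from the paper.

```r
# Draw n observations from BS(alpha, beta) via the standard normal transformation:
# if Z ~ N(0,1), then T = (beta/4) * (alpha*Z + sqrt((alpha*Z)^2 + 4))^2 ~ BS(alpha, beta),
# since sqrt(T/beta) - sqrt(beta/T) = alpha*Z recovers Equation (1).
rbs <- function(n, alpha, beta) {
  z <- rnorm(n)
  (beta / 4) * (alpha * z + sqrt((alpha * z)^2 + 4))^2
}

set.seed(1)
x <- rbs(10000, alpha = 0.5, beta = 1)
mean(x)              # empirical mean of the sample
1 * (1 + 0.5^2 / 2)  # theoretical mean beta*(1 + alpha^2/2) = 1.125, from Equation (3)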
Bootstrap Confidence Interval
The bootstrap technique was first introduced by Efron [3] as a resampling method based on randomly drawing new samples from the original sample. Ng et al. [8] showed by Monte Carlo simulation that the MLEs of $\alpha$ and $\beta$ are biased. The Constant-Bias-Correcting (CBC) parametric bootstrap method, introduced by MacKinnon and Smith [15], was found by Lemonte et al. [2] to be the most effective at reducing this bias. Let $x = (x_1, x_2, \ldots, x_n)$ be a random sample of size $n$ from $\mathrm{BS}(\alpha, \beta)$, with MLEs denoted $\hat{\alpha}$ and $\hat{\beta}$. Bootstrap samples of size $n$ are drawn from $\mathrm{BS}(\hat{\alpha}, \hat{\beta})$, yielding bootstrap estimates $\hat{\alpha}^*$ and $\hat{\beta}^*$. The bootstrap estimator of the mean is then $\hat{\mu}^* = \hat{\beta}^*\left(1 + \hat{\alpha}^{*2}/2\right)$, and the percentile bootstrap interval for $\mu$ is formed from the empirical quantiles of the $\hat{\mu}^*$ values.

Generalized Confidence Interval
Weerahandi [26] formulated the GCI approach using the generalized pivotal quantity (GPQ) as its foundation. Suppose $X = (X_1, X_2, \ldots, X_n)$ is a random sample from the BS distribution in Equation (1), with observed values $x = (x_1, x_2, \ldots, x_n)$. Sun [30] and Wang [24] derived GPQs for $\alpha$ and $\beta$ from the pivotal quantities $T \sim t(n-1)$, a t-distribution with $n-1$ degrees of freedom, and $V \sim \chi^2(n)$, a chi-squared distribution with $n$ degrees of freedom, together with the statistics $S_1 = \sum_{i=1}^{n} x_i$ and $S_2 = \sum_{i=1}^{n} 1/x_i$. Two candidate solutions for the GPQ of $\beta$, denoted $R_{\beta,1}$ and $R_{\beta,2}$, are obtained by solving the resulting pivotal equation, and the GPQ of $\alpha$ follows. Substituting the pivotal quantities $R_\alpha$ and $R_\beta$ for $\alpha$ and $\beta$ in Equation (3) yields the GPQ of the mean,

$R_\mu = R_\beta\left(1 + \frac{R_\alpha^2}{2}\right),$

whose quantiles give the GCI.

Bayesian Confidence Interval
Wang et al. [25] introduced appropriate priors with known hyperparameters, using inverse-gamma (IG) priors for $\beta$ and $\alpha^2$, since it is difficult to demonstrate that $\beta$ and $\alpha^2$ are independent of one another. If $W$ follows an IG distribution with parameters $a$ and $b$, denoted $W \sim \mathrm{IG}(a, b)$, its p.d.f. is

$f(w \mid a, b) = \frac{b^a}{\Gamma(a)}\, w^{-a-1} \exp(-b/w), \quad a, b > 0.$

Let $X = (X_1, X_2, \ldots, X_n)$ be a random sample from $\mathrm{BS}(\alpha, \beta)$ and $x = (x_1, x_2, \ldots, x_n)$ be observations of $X$. The joint posterior of $(\beta, \alpha^2)$ is acquired by combining the likelihood function (up to an additive constant) with the IG priors of $\beta$ and $\alpha^2$; from it follow the marginal posterior of $\beta$ and the posterior of $\alpha^2$ given $\beta$, in Equations (17) and (18). Samples from these posteriors are obtained using Markov chain Monte Carlo methods: the generalized ratio-of-uniforms method, expounded in the subsequent subsection, generates posterior samples of $\beta$, because the marginal distribution in Equation (17) is analytically intractable, while posterior samples of $\alpha^2$ can be readily obtained using a package within the R software suite.
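A minimal R sketch of a percentile parametric bootstrap interval for μ, in the spirit of the PBCI above. For simplicity it plugs in the closed-form modified moment estimators of Ng et al. [8] rather than the bias-corrected MLEs used in the paper, so it is an illustrative variant of the method, not the authors' exact procedure; rbs is the generator from the previous sketch.

```r
# Modified moment estimators (Ng et al.): s = arithmetic mean, r = harmonic mean,
# giving beta-hat = sqrt(s*r) and alpha-hat = sqrt(2*(sqrt(s/r) - 1)).
bs_mme <- function(x) {
  s <- mean(x); r <- 1 / mean(1 / x)
  c(alpha = sqrt(2 * (sqrt(s / r) - 1)), beta = sqrt(s * r))
}

# Percentile parametric bootstrap CI for mu = beta * (1 + alpha^2 / 2).
pboot_ci <- function(x, B = 500, level = 0.95) {
  est <- bs_mme(x)
  mu_star <- replicate(B, {
    xb <- rbs(length(x), est["alpha"], est["beta"])  # resample from fitted BS
    eb <- bs_mme(xb)
    eb["beta"] * (1 + eb["alpha"]^2 / 2)             # bootstrap mean estimate
  })
  quantile(mu_star, c((1 - level) / 2, (1 + level) / 2))
}

pboot_ci(rbs(30, 0.5, 1))  # example: 95% interval around the true mu = 1.125
```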
To construct the highest posterior density (HPD) interval for the mean, we employed a dedicated HPD function available within an R package, executed after obtaining the Bayesian mean estimator in step 4.

SIMULATION RESULTS
The study examined five methods (GCI, BCI, PBCI, BayCI, and HPD) for constructing CIs for the mean of BS distributions. The analysis was conducted through a Monte Carlo simulation implemented in the R statistical program. R, a programming language for statistical computing, is both free and open-source; it empowers researchers and practitioners to design and implement simulation studies, from straightforward to highly sophisticated, by combining built-in functions and numerous user-created packages [4]. The performance of the five methods was assessed by their coverage probabilities (CPs) and average lengths (ALs). Two criteria determine the preferred method: a CP equal to or near the nominal confidence level of 0.95, and the shortest AL. The simulation used 5,000 replications, with 5,000 pivotal quantities for GCI, 500 bootstrap resamples for BCI and PBCI, and 1,000 posterior draws for BayCI and HPD. Data were generated from $\mathrm{BS}(\alpha, \beta)$ with sample sizes $n = 10, 20, 30, 50,$ or $100$ and shape parameter $\alpha = 0.10, 0.25, 0.50, 0.75,$ or $1.00$; the scale parameter was fixed at $\beta = 1$ in all cases. For BayCI and HPD, we set the hyperparameters to $10^{-4}$, as recommended by Wang et al. [25].

When examining the mean of a BS distribution, the simulation results in Table 1 reveal that GCI exhibited CPs higher than or near 0.95, even with large sample sizes and high $\alpha$ values. Conversely, although PBCI produced the shortest ALs, its CPs fell below 0.95, improving only with larger $n$. Moreover, the ALs of the five methods decreased and converged as the sample size increased.
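The CP/AL evaluation just described can be prototyped along the following lines, reusing the rbs and pboot_ci helpers from the sketches above; the replication count M is kept small here for speed and does not match the paper's 5,000.

```r
# Estimate coverage probability (CP) and average length (AL) for one interval
# method -- here the parametric bootstrap interval sketched earlier.
evaluate_ci <- function(M = 200, n = 30, alpha = 0.5, beta = 1) {
  mu_true <- beta * (1 + alpha^2 / 2)
  res <- replicate(M, {
    x  <- rbs(n, alpha, beta)
    ci <- pboot_ci(x, B = 500)
    c(cover = as.numeric(ci[1] <= mu_true && mu_true <= ci[2]),
      len   = unname(ci[2] - ci[1]))
  })
  c(CP = mean(res["cover", ]), AL = mean(res["len", ]))
}

evaluate_ci(n = 30, alpha = 0.5)  # one cell of a Table-1-style grid
```

Repeating this over the grid of n and α values reproduces the structure of Table 1, with one CP/AL pair per method and cell.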
AN EMPIRICAL APPLICATION
Wind energy is an eco-friendly and sustainable power source, untainted by carbon emissions or pollution [17]. According to Mohammadi et al. [18], the BS distribution is well suited to estimating wind speed distributions. We employed datasets containing daily wind speed records from August to October 2023 in Nong Prue subdistrict, Chonburi province, Thailand [7], to demonstrate the efficiency of the CIs for the mean of BS distributions obtained through GCI, BCI, PBCI, BayCI, and HPD.

Since the data contain only positive values, they can be fitted to BS, uniform, Cauchy, exponential (Exp), Weibull, or normal distributions. We therefore tested the distributions of the positive wind speed datasets using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The results in Table 2 show that the wind speed datasets from Chonburi province fit a BS distribution, which had the smallest AIC and BIC values. The essential statistics computed for the daily wind speed data are displayed in Table 3; the mean for Chonburi province was 1.8141. The 95% CIs for this mean, using GCI, BCI, PBCI, BayCI, and HPD, are given in Table 4. Consistent with the simulation outcomes, PBCI exhibited the shortest ALs, followed by BCI; however, the CP of PBCI was below 0.95, while that of GCI hovered around or above 0.95. In summary, GCI emerges as the most suitable method for constructing a CI for the mean of a BS distribution.

CONCLUSION
We constructed CIs for the mean of BS distributions using GCI, BCI, PBCI, BayCI, and HPD, and evaluated their performance in terms of CPs and ALs. The simulation results show that the CPs of GCI were greater than or close to the nominal level of 0.95. Although BCI and PBCI had shorter ALs than GCI, their CPs were the lowest and fell below 0.95, so they cannot be recommended. The GCI method is therefore the most effective for constructing CIs for the mean of the BS distribution. Furthermore, when the proposed methods were applied to the wind speed datasets from Chonburi province, Thailand, the GCI method again provided the best results in the empirical scenario. This method is particularly useful for estimating the mean wind speed from August to October, providing essential information for predicting fluctuations in PM2.5 levels, whether they increase or decrease. Such predictions are crucial for preparing for and effectively addressing the PM2.5 dust problem.

Table 1: The CPs and ALs of 95% two-sided CIs for the mean of the BS distribution (β = 1). Note: bold represents values that satisfy the criteria and the best-performing method.
Table 2: AIC and BIC values for fitting six asymmetric distributions.
Table 3: Descriptive statistics for the wind speed data from the Chonburi dataset.
Table 4: Confidence intervals for the mean of wind speed for the Chonburi datasets.
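A sketch of how the BS entry in the Table 2 comparison could be reproduced: compute the maximized log-likelihood via the density in Equation (2) and form AIC = 2k − 2 log L. The optimizer settings and starting values below are illustrative assumptions, not the authors' code.

```r
# Log-density of BS(alpha, beta) via Equation (2).
dbs <- function(t, alpha, beta, log = FALSE) {
  a  <- (sqrt(t / beta) - sqrt(beta / t)) / alpha               # standardizing transform
  A  <- ((beta / t)^0.5 + (beta / t)^1.5) / (2 * alpha * beta)  # Jacobian term
  ll <- dnorm(a, log = TRUE) + log(A)
  if (log) ll else exp(ll)
}

# AIC = 2k - 2 log L with k = 2 parameters; optimizing on the log scale
# keeps alpha and beta positive. Starting values are illustrative.
aic_bs <- function(x) {
  nll <- function(p) -sum(dbs(x, exp(p[1]), exp(p[2]), log = TRUE))
  fit <- optim(c(0, 0), nll)
  2 * 2 + 2 * fit$value
}
# BIC would replace 2 * 2 with 2 * log(length(x)).
# aic_bs(wind_speed)  # compare against the analogous AIC of each rival distribution
```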
Floral Homeostasis Breakdown in the Endangered Plant Valeriana jatamansi Jones (Valerianaceae) in the North Eastern Himalayan Region

Abstract
Valeriana jatamansi Jones (Valerianaceae), an inhabitant of the north western Himalayan region, is a gynodioecious plant with many medicinal properties and is listed as endangered worldwide. For conservation purposes, it was introduced into the research station of North Bengal Agricultural University (27.06°N 88.47°E), in the north eastern Himalayan region of Darjeeling district, West Bengal, from Sangtok (27°25'N 88°31'E) in north Sikkim, India. Having noticed irregular development of floral organs, we investigated pertinent questions regarding the ecological aberrations found in the plants. We found that plants introduced into the north eastern Himalayan region departed from the reported homology of petal number and stamen position seen naturally in the north western Himalayan region. Did a genetic or extreme environmental stress condition cause this sudden change in floral structure, a phenomenon generally known to be rare and infrequently observed? What were the correlations among the different floral parts, and what was the fitness of the population in the different morphotypes? In addition to these investigations, we predicted possible seed-setting outcomes in this particular environment using univariate regression models. We propose three models of heterozygosity to explain the unstable departure from the generally known floral form, in which silent mutations help the plants survive adverse conditions despite deformed or variably formed floral morphology.

Introduction
Valeriana jatamansi Jones is a known endangered plant worldwide [1]-[5] and is gynodioecious in nature [4]-[6]. The plant belongs to the family Valerianaceae, which comprises almost 350 species distributed throughout the world except Australia and New Zealand [4]. It is mostly found at higher altitudes, with many species in higher alpine zones [4]. Female plants bear pistillate flowers without an androecium and with four petals, white (large) or tinged with pink (small) [4] [5] [7]-[9]. In the north western Himalayan region, hermaphrodite plants were reported to have five petals and three epipetalous stamens opposite the corolla lobes, whereas female plants had four petals, not five as in hermaphrodites (Table 1) [4]-[6] [8]-[10]. However, we found four and five petals appearing abruptly and at random among the flowers of an inflorescence, in hermaphrodite as well as female plants (Figure 1 and Figure 2); they were not seen in every flower of the inflorescence. Stamens were also located in the spaces between the petals, contrary to earlier reports [4] [5] [9] [10]. Hermaphrodite flowers were reported to be white, long, and prominently larger than pistillate flowers (Table 1) [4] [5]. Anthesis was observed 3-4 days earlier in female flowers than in hermaphrodite flowers [6] [11]. Across elevations, anthesis of female flowers was reported earlier than that of hermaphrodite flowers at KUBG (Kashmir University Botanical Garden, 1490 m asl) and Ferozpura (2150 m asl), and at the same time at Gulbarg (2650 m asl), Kashmir [6] [10].
Origin of the plant material: The plant was introduced into the research station as seedlings collected from Sangtok (27°25'N 88°31'E), North Sikkim, under a conservation programme planned by the Directorate of Medicinal and Aromatic Plants Research (DMAPR), a body developed by the ICAR (Indian Council of Agricultural Research) with the sole purpose of collecting and conserving medicinal and aromatic plants, including rare and endangered medicinal species, in their natural habitats by constructing field gene banks and other necessary facilities to protect them from natural damage or extinction in the country and the world [12] [13].

The overall objective of our study is to find the reasons behind the deviation of the natural process of floral structure formation in Valeriana jatamansi Jones in this new region from what is generally reported [4] [5] [9] [10], because floral development genes are considered somewhat obstinate and are not frequently reported to change across environments [14]-[21].

We divided the investigation into the following headings:
1) The random relative change in floral morphology in some of the flowers of both sexes of the plant (Figure 1 and Figure 2).

Figure 1. Hermaphrodite flowers bearing both four and five petals, with stamens located in the spaces between the petals; this was interpreted as nature's disruption of stabilizing selection, in which deformed and random development of the reproductive organs arose from extreme environmental stress in the new environment.

Figure 2. Female plant bearing flowers with four and five petals on the same plant, resembling the pattern of evolution found in the hermaphrodite plant (Figure 1); the evolutionary process appears to be ongoing in this region.

Table 1. Structural differences of flowers found in the ex-situ conservation at the Regional Research Station, Kalimpong (3000 ft asl), comparing hermaphrodite and female flowers as commonly found in sporadic fragmented populations in the Kashmir hills [9] [10] with those found in the Regional Research Station field gene bank [11] (Figure 1 and Figure 2):
- Size: hermaphrodite flowers are larger than female flowers in both regions.
- Calyx: persistent, pappus-like and leafy, reduced to small tooth-like structures; the same calyx morphology was observed in both the hermaphrodite and female populations at Kalimpong.
- Petals: actinomorphic and epipetalous; five in hermaphrodite flowers and four in female flowers in the Kashmir populations, but five and four within the same plant at Kalimpong (Figure 1 and Figure 2).
- Flower size: hermaphrodite flowers 7-9 mm long and female flowers 3-4 mm long, within the same ranges reported for Kalimpong.
- Heterostyly: the flowers were found to be heterostylous, with the style and the filaments differing in length; the filaments bearing anthers were longer than the style, and the same structural features were reported. In female flowers, the styles are longer than the petals. Filament length was recorded as 4-5 mm and style length as 3-4 mm, within the same ranges reported for Kalimpong.

2) Relationship between floral traits and seed setting in a new environment (Tables 2-4, Table A1 and Figures 3(a)-(e)).
i) In the first case, we discuss the reasons behind the change (Model 1, Model 2 and Model 3), having found altered floral biology in both sexes of the plant. The natural history or population biology of the species, or its developmental mechanisms, should be understood before attributing the changes to developmental instability (DI), as reported earlier [4] [5] [9] [10]. Studies [15]-[21] found phenodeviants in floral morphology in their experimental organisms at different sites, where deformation of reproductive organs was attributed to environmental insults, as in our case. The photoperiod lengths and maximum and minimum temperatures responsible for the environmental insults affecting the plant's adaptation in this region are given in Table 4.
ii) In the second case, the relationship between different floral traits and seed setting was investigated (Table 2, Table 3, Figures 3(a)-(e) and Table A1). Effects of floral organs on population fitness have been reported from different regions of the world [17] [22]-[25].

Geographical Position of the Study Site at the Research Station
The hill zone of West Bengal, India, one of the six geographical zones of the state, comprises three hilly subdivisions in its northern part: Darjeeling (27°3'N 88°16'E), Kalimpong (27.06°N 88.47°E) and Kurseong (26°52'40"N 88°16'38"E). The geographical area of this zone is about 3115 sq km, 3.5% of the total state area. The elevation of the site is about 3150 m above sea level. The highest average rainfall varies from 2500-3000 mm, of which 80% is received from June to September. Snowfall during the first half of January is also common in areas above 2200 m above sea level. The average maximum and minimum temperatures recorded round the year are 34°C and 7°C, respectively (Table 4).

Measurement of Flower Morphotype
Whether a plant will survive in a new environment can be interpreted only through its fitness, that is, its ability to form seeds [17] [23]. Flower morphology was recorded by sampling [23] 30 plants per morphotype (populations with entire, sinuate, and wavy leaf margins), with five flowers recorded from each plant. The five flowers were averaged within each plant, and these plant means were then averaged across the 30 plants of each morphotype population. The three types of morphs were considered in order to detect any relevant differences in the results. The floral parameters are given in Table 2.
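A brief R sketch of the averaging scheme just described (five flowers per plant, 30 plants per morphotype); the data frame, plant numbering, and measurement values below are hypothetical placeholders, not the recorded data.

```r
set.seed(42)
# Hypothetical long-format data: one row per measured flower,
# 5 flowers per plant and 30 plants per leaf-margin morphotype.
flowers <- data.frame(
  morph     = rep(c("entire", "sinuate", "wavy"), each = 150),
  plant     = rep(1:90, each = 5),
  petal_len = rnorm(450, mean = 4, sd = 0.5)  # placeholder measurements (mm)
)

# Average the five flowers within each plant, then average plants within each morph.
plant_means <- aggregate(petal_len ~ morph + plant, data = flowers, FUN = mean)
morph_means <- aggregate(petal_len ~ morph, data = plant_means, FUN = mean)
morph_means
```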
References [16] and [26], investigating floral parameters in a zoophilous plant and in Penstemon centranthifolius (Scrophulariaceae), respectively, suggested that floral parameters contribute to the fitness of the population and indicate the type of selection exercised by nature. The flower parts taken into account in hermaphrodite and female flowers were: i) length, breadth and number of sepals; ii) length, breadth and number of petals; iii) length and number of stamens; iv) length, breadth and number of anthers; v) length and number of styles; vi) length, breadth and number of stigmas; vii) total length of the flower; and viii) total number of flowers on a single plant (Table 2) [16] [17] [22] [23] [26].

Flower parts of both hermaphrodite and female flowers were measured with a vernier calliper (Table 2). Numbers of flowers were also counted on both hermaphrodite and female plants, and seed weight was correlated with each parameter to determine whether any floral trait showed a relationship in this region.

Correlation Analysis
A correlation matrix of the floral traits and their pairwise relationships was computed in SPSS (Table 3). We also used correlation analysis to quantify the influence of flower parameters on the fitness of a single plant. The relationships among floral parameters were examined because they signal the evolutionary process of natural selection acting on a plant during domestication [22] [23].

Univariate Regression Model
The validity of regression models for predicting successful pollination and seed set, as a way of assessing the role of floral traits in pollination by animals or insects, has been established in previous investigations [17] [22]-[25]; we examined this in addition to investigating the breakdown of floral homeostasis in this plant. Seed yield, or seed setting, is the most important signal of a plant's domestication in a new region [22]. Reference [27] used a linear regression model while investigating different floral traits and fruit size among various species of the genus Phacelia. Regression analysis was performed for each parameter to assess how well flower parameters predict the seed yield of the plant. Scatter diagrams were inspected to infer the relationship between each parameter and the seed weight of the plant (Figures 3(a)-(e)). Non-linear (polynomial) relationships were found to fit some floral parameters better, judging by the r² (coefficient of determination) values.
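A small R sketch of the correlation and univariate polynomial regression analyses described above; the ten observations are hypothetical placeholders, and the original analyses were run in SPSS, so this is only an illustrative analogue.

```r
# Hypothetical per-plant data: seed weight and one floral trait (e.g. flower length, mm).
d <- data.frame(
  trait   = c(6.1, 6.4, 6.8, 7.0, 7.3, 7.7, 8.0, 8.4, 8.8, 9.1),
  seed_wt = c(18, 19, 21, 22, 24, 25, 25, 26, 26, 27)
)

# Pairwise Pearson correlations (an analogue of the Table 3 matrix).
cor(d)

# Second-order polynomial regression of seed weight on the trait,
# the functional form used in Figures 3(a)-(e).
fit <- lm(seed_wt ~ poly(trait, 2, raw = TRUE), data = d)
summary(fit)$r.squared                            # coefficient of determination r^2
predict(fit, newdata = data.frame(trait = 7.5))   # predicted seed weight
```

A third-degree polynomial, as used for petal breadth, would simply raise the degree in poly(trait, 3, raw = TRUE).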
The Change in Floral Morphology
Variable expression of petal number was reported earlier in Linanthus spp. [15] and in Eichhornia paniculata [19], and variable expression of floral organ size was found in Mimulus guttatus [20]; such changes are not observed or reported by ecologists or scientists very frequently.

From the ecological point of view, the ability of organisms to withstand different environmental insults or stresses and still produce a predictable phenotype is termed developmental homeostasis [28] [29]. The overall process can be generalised as a schematic flow chart. Developmental instability is known to be an estimator of genetic stress [30], which results in disruptive selection and the development of deformed floral organs, although quantifying developmental homeostasis due to genetic effects is very complex and difficult to explain clearly. Reference [30] proposed quantification by measuring the genetic or phenotypic variance (Vg or Vp) in a population, but exact quantification remains a very complex process, and various other factors affect genetic instability. Disruption of stabilizing selection is sometimes observed suddenly in a new region where a population has been introduced by man or nature, depending on the type of stress imposed [16]. Under the extreme environmental stress of a new region, or a sudden change in environmental parameters, developmental instability (DI) and fluctuating asymmetry (FA) are observed, whereby a plant population loses the original homology of floral or vegetative structure it had in its parent habitat (Figure 1 and Figure 2) [29].

Canalization and Developmental Instability
According to [28], canalization is the ability of an organism to produce the same phenotype across different environments, whereas developmental stability is the ability of a plant or any organism to produce the same repeated phenotype within the same environment. The phenotype may be any vegetative or floral organ of a plant; together they constitute the overall robustness of the organism. Developmental homeostasis (DH) is the result of a continuous process of canalizing selection, which helps to reduce accidental phenotypic variation in any region [14] [21] [31]. Developmental stability (DS) reflects the processes in a plant that reduce phenotypic variation arising from environmental accidents [32]. Breakdown of developmental stability or homeostasis leads to disruptive selection of the plants and, consequently, to changes in vegetative or reproductive organs [14] [15] [19] [20]. The structural differences between individual female and hermaphrodite flowers found in this region and those normally found (Table 1) confirm developmental instability in the region.

Unique Flower Morphology and Lerner's Hypothesis
According to Lerner's genetic homeostasis model, seven important hypotheses are to be mentioned [33]:
i) Sexually reproducing organisms have certain genotypes that develop self-regulating developmental patterns and phenotypes.
ii) Genetic composition (variation) that enhances natural fitness in the environment at hand is retained through self-equilibrating properties.
iii) Developmental and genetic homeostasis is maintained because of the superiority of heterozygotes in the population.
iv) The evolution of auto-regulation requires the evolution of certain levels of obligate heterozygosity in Mendelian populations.
v) The appearance of occasional morphological deviants (phenodeviants), which, as demonstrated by Waddington, can arise in response to a change in environment, is linked to heterozygosity, since "a certain percentage of individuals of every generation falls below the threshold of the obligate proportion of loci needed in a heterozygous state to ensure normal development".
vi) Metric traits result from the inheritance of additively acting polygenic systems.
vii) Natural selection, essentially the sum of past evolutionary history, counteracts attempts to shift populations too rapidly and/or too far from adaptive means, so as to maintain a phenotypic balance between fitness-determining characters.

Models and Explanation of the Breakdown of Floral Homeostasis
Of the seven hypotheses, the fifth comes closest to describing the cause of DI in floral form in our investigation: a certain percentage of individuals falls below the threshold of the obligate proportion of loci needed in a heterozygous state. We present a model, as a contribution to these hypotheses, based on defensible assumptions about the plant. A plant population with a minimal amount of heterozygosity undergoes mutation in one or more genes of its co-adaptive gene complexes; the mutation is silent and does not inhibit production of the proteins or enzymes responsible for floral organogenesis. In particular, [34] propounded the buffering action of genes, whereby greater heterozygosity means greater buffering against adverse environments, and developed the balance theory [34].

Basic Configuration of Genes
The genotype of a plant is AB/CD, responsible for floral homeostasis (FH), where genes A, B, C and D form co-adaptive gene complexes responsible for flower initiation, development of floral organs, formation of male and female gametes, and subsequent maturation of gametes according to the type of sexuality. We consider two genes on non-homologous chromosomes as the minimum required for heterozygosity, because one gene on one chromosome cannot create heterozygosity; at least two genes are required for any heterozygous condition in the formation of a co-adaptive gene complex.
i) We assume a first condition in which more than 50% mutation causes lethality of the plant or organism: it would be difficult to produce flowers, or, even if flowers were produced, they would have abortive stamens or non-functional floral genes that fail to produce the organs of the flower, and the plant or organism would die. Here mutation is extreme, non-functional proteins are produced through nonsense mutation, and the fitness of the population is obstructed.
ii) We also assume conditions in which temperature and photoperiod cause particular changes in the co-adaptive gene complexes, and DI/BFH/FA is observed owing to shortening of the photoperiod and temperatures above the average range of a particular niche.

Under the second condition, mutation, specifically silent mutation, takes place while still producing the desired product of the floral-organ genotype; no frameshift or nonsense mutation occurs despite the extreme stress of photoperiod and temperature. The end product, a protein or enzyme, is still produced, even in low amounts, sufficient to yield a consistent desired phenotype and effective gametic cells with consistent meiotic divisions, even when environmental insults such as deviant photoperiod length, heat, or other abiotic stresses act on the plant.
A particular photoperiod length and temperature range initiate the flowering stage of a plant, and the shift from vegetative to reproductive stage begins at a critical photoperiod length and temperature range (first condition). Beyond the critical range of temperature or photoperiod, development of the floral organs is altered and DI/BFH/FA is found.

Correlation Analysis
The total flower length of female flowers was always greater than that of hermaphrodite flowers in all three morphs. Sepal breadth and sepal length were moderately correlated (0.472) (Table 3), although sepal breadth was negatively correlated with seed weight per plant (-0.300), as was sepal length (-0.051) (Table 3). Petal (corolla) length was weakly positively correlated with sepal length (0.272) and moderately positively correlated with sepal breadth (0.511) (Table 3). However, petal length was strongly negatively correlated with seed weight per plant (-0.845), although it was very strongly correlated with petal breadth (0.956), stamen length (0.981), anther sac length (0.739), anther sac breadth (0.939) and flower length (0.959) (Table 3). Corolla breadth was strongly negatively correlated with seed weight per plant (-0.889), although strongly positively correlated with stamen length (0.950), anther sac breadth (0.809), flower length (0.916) and number of flowers per plant (0.719). Style length was negatively correlated with all flower parameters except stigma length (0.295) and stigma breadth (0.267), and, most importantly, was moderately strongly correlated with seed weight per plant (0.561) (Table 3).

Regression Model
From Table A1 (Supplementary Data) and Figures 3(a)-(e), the following interpretations support the predictions.
a) Number of flowers per plant: seed weight could be predicted by adopting the fitted second-order polynomial regression of seed weight on the number of flowers per plant.

Difference in Flowering Time of the Plant and Photoperiodism
During December, the lowest temperature in this region was 6.4°C and the highest 15.1°C (Table 4). In February, the highest temperature gradually rises to 20.1°C and the lowest to 10.6°C. In March, the highest temperature was around 24.1°C and the lowest 12.1°C; in April, the highest was 27.3°C and the lowest 15.2°C (Table 4). No initiation of flowering was recorded in the overall population in the following months of the year. The appropriate location was found at elevations of 1200-1500 m asl, according to previous investigations [10] and to the present investigation, in which maximum flowering and positive fecundity were observed in the north eastern Himalayan region from December to April. The differential timing of flower initiation can be explained, and is supported, by studies of phenological variation between populations of one species along latitudinal or elevational gradients [18] or between ecologically distinct habitats [35], and by the low temperature regime and short range of photoperiod.
One of the important reasons for changes in morphology is environmental stress, under which flower structure changes in a new habitat [14]-[16] [19] (Model 2 and Model 3). The changes in floral morphology in our investigation may be attributed to environmental stress and genetic stress, which is very significant from the evolutionary point of view, and we show this in Model 3. The random development of petal number (Figure 1 and Figure 2) reflects a breakdown of floral homeostasis (Model 2) due to silent mutation. Environmental stress [15] [19], which causes genetic manipulation and silent mutation in the population, ultimately results in a breakdown of developmental homeostasis. In this region, a great scarcity of rainfall together with low temperature, low humidity and short day length (Table 4) was found to unbalance the homeostasis of the floral genes (Model 3) and the co-adaptive gene complexes of the population (Model 3).

We observed three prongs on the upper portion of the stigma (Figure 2), which would attract pollinators better than the earlier arrangement, in which the three prongs sat on the middle part of the stigma (Table 1) [9]; this would very likely enhance the chances of pollination and is also significant from the evolutionary point of view.

Number of Flowers per Plant
As per earlier reports, the number of flowers has a direct influence on the seed setting of a plant [22] [36]. Seed setting per plant could be predicted from the fitted second-order polynomial regression equation of seed weight on number of flowers per plant. The plant is known to survive well in temperate regions, as it did here when newly introduced, so this regression model could be applicable anywhere in the world where the plant survives well. Although the coefficient of determination was not strong (0.395) (Figure 3(a)), the data fitted the regression line fairly well and the F value was significant. The number of flowers per plant was recorded in both hermaphrodite and female plants, so seed setting per plant can be predicted in a new habitat as well as in the plant's natural habitat [22] [36].

Length of Flower
Flower length was found to affect the seed setting of a plant, and seed weight could be predicted from the fitted regression equation once the flower length was known. The coefficient of determination in the scatter diagram (Figure 3(b)) was 0.751, which is quite strong and fits the regression line well. The F value and the other coefficients of the independent variables in the t table were also significant. The equation can be considered generally applicable: by inserting a known flower length, seed set per plant can be predicted [27].

Length of Petal
Corolla (petal) length was found to influence seed setting. Reference [27] investigated the relationship between corolla length and fruit parameters and found a linear relationship. The coefficient of determination here was 0.295 (Figure 3(d)), which is not strong, but the F value was significant, as were the intercept and the coefficients of the linear and squared terms in the t table. The regression equation was found significant both in the plant's natural habitat and in its introduced region, so seed set can be predicted by adopting the regression model.
Breadth of Petal
Petal breadth was found to influence the seed weight of the plant; here a third-degree polynomial regression equation was adopted because the regressors fitted the regression line better. The coefficient of determination in the scatter diagram (Figure 3(e)) was 0.716, which is quite strong: more than 70% of the variation in the regression line is explained by the independent variable. Seed weight per plant can be predicted by adopting this regression model.

Length of Style
Style length was found to affect seed setting, and its length facilitates pollination by pollinators. The trifid stigma was found in female plants, where the chances of pollination would be greater. The polynomial regression equation was Y = 408.387 - 212.17X + 29.154X². The coefficient of determination in the scatter diagram (Figure 3(c)) was not strong, although the F-test confirmed significance at the 5% level. Seed set can thus also be predicted from style length, although the other factors important for prediction should be carefully considered.

Genetic Factors Responsible for the Gateway to New Flower Morphology
Concerning the genes governing floral morphology, it was earlier an accepted idea that flowers are relatively constant because of the genetic homogeneity of the genes regulating flower morphology [37]-[39]; it was postulated that "a genic balance or equilibrium for unknown reasons is adaptive and is maintained by natural selection", a view close to one of Lerner's seven hypotheses. Reference [40] investigated the constancy of the pentamerous corolla in a Linanthus species generation after generation and found a great degree of variability in floral organs and petal number when the plants were exposed to simulated herbivory. Reference [26] found heritability of floral traits in Penstemon centranthifolius across generations. Reference [41] observed developmental instability and fluctuating asymmetry in floral forms in Brassica campestris, indicating changes in floral structure, or breakdown of floral homeostasis, and reported it as a genetic phenomenon under extreme selection pressure carried through generation after generation. However, detailed study of the genetic effect on developmental instability has still not been done extensively; in some quantification studies only the level of asymmetry has been calculated for a particular area of research [42].

Conclusions
1) The female and hermaphrodite populations of all three morphs were found to have a unique flower morphology, with four and five petals on the same plant, in this north eastern Himalayan region. This random change in floral morphology exemplifies disruptive selection by nature under extreme environmental stress in December in this part of the world. The breakdown of canalisation initiates extreme changes in the gene complexes of the reproductive organs in this new region (Model 3).
2) Petal length, petal breadth, style length, flower length and number of flowers were found to contribute to the seed setting of a plant, with the dimensions of each having an effect on seed setting in the new region. Some floral traits had a direct, significant relationship with, and influence on, the cumulative seed setting of an individual plant.
3) From the evolutionary point of view, the plant survives well in this region, but the changes in flower morphology in both sexes indicate a breakdown of genetic homeostasis under certain environmental conditions here. This investigation offers further scope if lines of plants bearing both types of flowers are selected and the pattern of heredity of these characters is then observed in future generations.

Length of Style
For length of style, from Figure 3(c), the polynomial regression model was reported as significant from the F table (Significance F 0.38 > 0.05) (Table A1(c)). The intercept, length of style and length of style squared were all also significant in the t table at the 95% level of significance, from which the regression equation in Table A1(c) follows.

Breadth of the Petal
For this parameter too, breadth of petal was reported as significant (Significance F 0.269 > 0.05) (Table A1(e), Supplementary Data). The intercept, breadth of petal, breadth of petal squared and breadth of petal cubed were all significant in the t table at the 95% level of significance. The predicted value is also close to the mean seed weight, within the range of the standard deviation (±3.1), of hermaphrodite plants with entire leaf margins.

Figure 2. The female plant bearing flowers with four and five petals on the same plant, resembling the pattern of evolution found in the hermaphrodite plant (Figure 1); the evolutionary process appears to be ongoing in this region.

Model 1, Model 2, Model 3: schematic models of heterozygosity and breakdown of floral homeostasis (see text).

Figure 3. (a) Second-order polynomial function of number of flowers per plant. Flowers from all three morphs showed a negative trend in both hermaphrodite and female plants, with some outliers, possibly flower numbers in female plants, that cannot be predicted; coefficient of determination R² = 0.395, significant at the 95% level. (b) Second-order polynomial function of flower length; R² = 0.751, significant at the 95% level. Flowers from all three morphs of both sexes show a positive trend line up to a certain value, after which a negative trend follows; the higher coefficient of determination explains 75.1% of the observed values for predicting seed weight. (c) Second-order polynomial function of length of style; a positive trend line was found between style length and seed weight per plant; R² = 0.344, significant at the 95% confidence level. (d) Second-order polynomial function of length of petal; flowers of female and hermaphrodite plants from all three morphs show more of a negative trend, with outliers suggesting the trend may not hold in female plants; R² = 0.295, significant at the 95% confidence level, though the coefficient of determination is low. (e) Third-order polynomial function of breadth of corolla; a negative trend line was found for predicting seed weight per plant; significant at the 95% confidence level, with R² = 0.716, so 71.6% of the result can be explained for prediction of seed weight per plant, which is quite strong.
(Table 1, continued) Stamens: epipetalous, three in number; in the Kashmir populations they are present opposite the petals, whereas in the Kalimpong population they lie in the gaps between the petals inside the flower, not exactly opposite them (Figure 1). Hermaphrodite petals are larger than those of the female flower.

Table 2. Quantitative parameters of flower and seed weight of Valeriana jatamansi Jones: length and breadth of sepal, length and breadth of corolla, length of style, length and breadth of stigma, length of stamen, length and breadth of the anther sac, flower length, number of flowers per plant, and seed weight per plant.

Table 3 (excerpt). Correlations of seed weight per plant with: length of sepal (-0.051), breadth of sepal (-0.300), length of corolla (-0.845), breadth of corolla (-0.889), length of style (0.582), length of stigma (0.514), breadth of stigma (0.079), length of stamen (-0.848), length of anther sac (-0.378), breadth of anther sac (-0.646), flower length (-0.744) and number of flowers per plant (-0.528).

Table 4. Average weather data and mean day length, April 2012-March 2014. Data are from the Automated Meteorological Laboratory, Regional Research Station, Uttar Banga Krishi Viswavidyalaya, Kalimpong, Darjeeling, West Bengal, India.

Table A1. Regression tables and ANOVA for (a) number of flowers per plant and seed yield, (b) flower length and seed yield, (c) length of style and seed yield, (d) length of corolla and seed yield, (e) breadth of corolla and seed yield.

Length of Petal
The predicted value is close to the mean seed weight of 24.3 and within the range of the standard deviation (±3.1) of hermaphrodite plants with entire leaf margins. The regression statistics were reported as significant after observing the significance F value (0.086 > 0.05) (Table A1(d)), and the intercept, length of petal and length of petal squared were all significant in the t table at the 95% level of significance.

Breadth of Petal
Putting in the value of petal breadth for a hermaphrodite plant with entire leaf margin likewise gives a prediction close to the mean seed weight, within the range of the standard deviation (±3.1).
Pediatric Chronic Critical Illness: Protocol for a Scoping Review

Background: Improvements in the delivery of intensive care have increased survival among even the most critically ill children, thereby leading to a growing number of children with chronic complex medical conditions in the pediatric intensive care unit (PICU). Some of these children are at significant risk of recurrent and prolonged critical illness, with higher morbidity and mortality, making them a unique population described as having chronic critical illness (CCI). To date, pediatric CCI has been understudied and lacks an accepted consensus case definition.

Background
Over the past two decades, the increased survival of even the most critically ill children has been attributed largely to improvements in the delivery of intensive care [1]. An unintended consequence of this success has been a shift in the population of patients admitted to the pediatric intensive care unit (PICU), with an increasing number of children with chronic or complex medical conditions and significant long-term morbidities following critical illness [1-4]. There is growing recognition that a subset of pediatric critical illness survivors experience persistent multiorgan system dysfunction and functional morbidities following critical illness that subsequently leave them with either a prolonged need for critical care support as inpatients or dependence on medical technology as outpatients [5-8]. These children are increasingly recognized as a uniquely high-risk PICU population, also referred to as children with chronic critical illness (CCI) [4,6].

Despite being a uniquely high-risk population in the PICU, research on pediatric CCI remains limited. This patient population has been understudied, largely because of the lack of an accepted consensus case definition. The limited research to date, using variable definitions, suggests that the prevalence of children with CCI is increasing [1,2] and that these children have relatively higher morbidity and mortality rates after critical illness [6,7,9]. These convergent and complex issues exert significant strain on the health care system, health care providers, and caregivers [10-12]. To position the field of pediatric CCI research for systematic evaluation of this important patient population, a consistent approach is needed with respect to the population being described and studied. Only then will it be possible to determine modifiable risk factors for poor patient outcomes and to develop and evaluate interventions to improve the care and survivorship of this important PICU patient population.

Objectives
Given that we expect a heterogeneous and complex body of work, we have used a scoping review methodology to explore and describe the nature of pediatric CCI research [13,14]. Our primary aim is to evaluate how pediatric CCI is defined in the literature, including concepts such as prolonged or long-stay PICU admission, as prolonged PICU admissions have been proposed as important qualifiers for pediatric CCI [4,6]. The secondary aims of this scoping review are to describe the methodologies used to develop and validate any existing definitions of pediatric CCI. We will also seek to describe the prevalence of CCI in the PICU based on existing definitions and to describe the key demographic and clinical characteristics of the patient populations studied. Finally, we will describe the nature of the reported outcomes in children with CCI.
Protocol
This is an original scoping review following the standard methodology proposed by Arksey and O'Malley [15] and elaborated upon by others [13,16]. This protocol is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews [17]. We uploaded the protocol as a preprint to the Open Science Framework on February 1, 2021 [18], and we plan to document protocol amendments in the Open Science Framework with the date, description, and rationale. Patients and the public were not involved in the design, conduct, reporting, or dissemination plans of this research.

Types of Participants or Population
We will include studies that evaluated critically ill children (ie, <18 years old) admitted to any PICU and explicitly identified with CCI. We will also include studies that evaluated prolonged, protracted, chronic, or long-stay PICU admission, as this concept has been identified as an important qualifier for pediatric CCI. However, we will exclude records if they (1) evaluated adult or neonatal intensive care unit populations only, or included children among these populations but did not report separate data for children; (2) evaluated pediatric patients in intermediate care, step-down, high-dependency, or chronic ventilator or respiratory units; or (3) did not include or reference a definition of pediatric CCI or prolonged PICU admission, as applicable to the study (eg, as a case definition in a prevalence study).

Types of Interventions, Comparators, and Outcomes
We will not apply any restrictions regarding interventions, comparators, or outcomes.

Types of Publications
We will include observational and experimental studies, qualitative studies, and protocols that provide a working definition of pediatric CCI or prolonged PICU admission. We will exclude literature reviews, unpublished literature, editorials, commentaries and opinion pieces, conference proceedings, abstracts, and books. Given the emerging nature and recognition of CCI in children, we will exclude records published before 1990, as well as studies not published in English or French.

Search Strategy
We developed a preliminary search strategy in two electronic databases (MEDLINE and CINAHL) and piloted it in consultation with a health research librarian (RC). We developed the final search strategy in MEDLINE, had it peer-reviewed by 2 additional health research librarians not involved in the study, and then translated it into the other databases as appropriate (Textbox 1). We will search four databases that index citation titles or abstracts, using English Medical Subject Headings terms and keywords, from their dates of inception to March 2021: Ovid MEDLINE, Embase, CINAHL, and Web of Science. We will review the reference lists of all included studies to identify any studies that may have been missed by the final database search.

• or/10-12
• 5 and 9 and 13
• ((chronic* or persist* or long term or longterm or long-stay or prolong* or protract* or extend* or extensive or lengthy or difficult*) adj5 (acute* or critical* or intens* or ill or illness* or sick or sickness* or care)).mp.
[mp=title, abstract, original title, name of substance word, subject heading word, floating sub-heading word, keyword heading word, organism supplementary concept word, protocol supplementary concept word, rare disease supplementary concept word, unique identifier, synonyms]
• 5 and 15
• 14 or 16
• ((p?ediatric* or child or children*) adj5 (chronic* or persist* or long term or longterm or prolong* or protract* or extend* or extensive or lengthy or difficult* or ((long or duration) adj3 stay)) adj5 (acute* or critical* or intens* or ill or illness* or sick or sickness* or care)).mp.

Search Strategy and Study Selection Criteria Piloting
The team used an iterative approach to evaluate and refine the preliminary search strategy and study selection criteria. Using the results of the preliminary search strategy, 4 members of the core study team independently reviewed an initial set of 100 randomly selected citations against the initial study selection criteria, with each record reviewed in triplicate. We screened the 100 citations in two steps (title and abstract, then full text), discussed discrepancies, and refined the eligibility criteria. The lead investigator (DZ) reviewed the reference lists of studies meeting all inclusion criteria, identified any relevant studies, and, together with the health sciences librarian, refined the search strategy if any of these relevant studies had been missed by the database search. Following this initial round, we reevaluated the revised study selection criteria on a second set of 100 random citations, again assessed independently and in triplicate. The conflict rates in full-text assessment during the two iterative piloting rounds were 45.5% (5/11 full texts) and 7.7% (1/13 full texts), respectively. Following these two rounds, the team established consensus on the study selection criteria. A total of 8 eligible studies were identified during piloting.

Crowdsourcing
Given the large number of citations identified by the final search strategy, we will use a hybrid approach comprising crowdsourcing and a machine learning (ML) algorithm to expedite the screening of records. Crowdsourcing methodology for systematic reviews has been previously validated [19,20] and used in a variety of health research reviews to accelerate citation screening and provide more timely research output while still allowing rigorous review conduct [21-23]. We will recruit a curated crowd of approximately 30 English- and French-speaking reviewers with content and methodological expertise from international PICU networks (eg, Canadian Critical Care Trials Group; Pediatric Acute Lung Injury and Sepsis Investigators group), email, social media (using the hashtags #PedsICU, #PICSp, and #CCI), and a dedicated study crowdsourcing event page on insightScope [24]. Authorship incentives will be offered to crowd reviewers who achieve specific screening milestones (ie, group authorship for screening ≥500 abstracts and ≥50 full texts; named authorship for screening ≥1000 abstracts and ≥100 full texts and participating in data abstraction). Before formal screening, prospective reviewers will be provided with a copy of the protocol and the selection criteria. Prospective reviewers will first screen a test set designed using the piloted study selection criteria [25]. The test set will contain 100 citations from the pilot phase, including 10 eligible (true positive) citations.
Prospective reviewers must achieve a sensitivity of ≥80% before they are given access to the full set of study records. Reviewers who do not achieve ≥80% sensitivity will be provided with additional training before being given access to the full set of study records. We will use a dedicated channel on Slack (Slack Technologies), a cloud-based team communication platform, to streamline study progress updates and reviewer communication [26,27].

ML Algorithm

ML algorithms are being increasingly used to assist in citation screening for systematic reviews, particularly in large reviews [28-31]. We will develop an ML algorithm to semiautomate citation screening for this scoping review at the title and abstract stage only, which is consistent with previously described approaches (Figure 1) [31]. The independent and duplicate screening of at least 4000 citations through to the full text by crowd members will constitute a training set that we will use to evaluate five ML algorithms (bag of words, term frequency-inverse document frequency, word to vector, document to vector, and fast text). These algorithms assess the citation title and abstract (where available) and rank each citation by relevance based on the text captured in the study selection criteria and project goal, with the highest ranking citations being retained based on a threshold set by the investigator (eg, a threshold of 70% would retain the 30% highest ranking citations). The titles and abstracts of citations from the four electronic databases were downloaded in English; therefore, no language adaptations were required to apply the ML algorithms to non-English-language studies. We will select the two highest performing algorithms from the training set and evaluate their sensitivity and specificity at a variety of thresholds, when used alone and in combination with a single human reviewer. We will also separately evaluate the performance of the two highest performing ML algorithms for citations without an abstract (ie, title only) to evaluate whether a unique threshold would be required. For both ML algorithms, we will determine the threshold at which the sensitivity is >95% when used in combination with a single human reviewer. This approach is consistent with the individual sensitivity of expert reviewers, as described in previous studies [20,23,32,33]. Once developed, we will evaluate the performance of the two candidate ML algorithms on an additional validation set constituting at least 2000 citations screened independently and in duplicate by crowd members. Our a priori methodology will be to proceed with duplicate independent human assessment of citations above the selected threshold score, and machine plus one independent human assessment for citations below the threshold score. We also plan to apply an additional lower threshold score if the sensitivity data for the candidate ML algorithms consistently exceed our sensitivity goal (ie, 95%). This lower threshold will serve to exclude the most irrelevant citations through assessment by the ML algorithm alone.

Integration of Hybrid Crowdsourcing and ML Algorithm Citation Screening

The integration of crowdsourcing and ML algorithm methods into citation screening in this scoping review is outlined in Figure 1. We will download records from the electronic search into EndNote for duplicate removal and export the citation list for screening to insightScope [34], a platform for executing large reviews through crowdsourcing.
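To make the rank-and-threshold step of the ML Algorithm subsection concrete, the following is a minimal sketch, not the study's implementation; it assumes scikit-learn is available, and the seed text, citations, and 70% threshold are illustrative placeholders:

```python
# Illustrative sketch only: ranking citations by TF-IDF similarity to a
# seed text derived from the selection criteria, then applying a percentile
# threshold to decide the screening tier. Assumes scikit-learn and numpy.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citations = [
    "Chronic critical illness in the pediatric intensive care unit",
    "Adult ICU outcomes after prolonged mechanical ventilation",
    "Prolonged PICU admission and long-stay children: a cohort study",
    "Knee arthroplasty rehabilitation protocols",
]
# Seed text standing in for the study selection criteria and project goal
seed = "pediatric chronic critical illness prolonged PICU admission children"

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(citations + [seed])
scores = cosine_similarity(X[:-1], X[-1]).ravel()   # relevance of each citation

threshold = np.percentile(scores, 70)   # a 70% threshold retains the top 30%
for text, s in zip(citations, scores):
    tier = "duplicate human review" if s >= threshold else "machine + 1 human"
    print(f"{s:.2f}  {tier}:  {text}")
```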
We will upload citation abstracts and full-text articles with inclusion and exclusion criteria to insightScope. Screening will be performed in two steps (title and abstract, then full text) against the inclusion criteria by 2 independent reviewers. We will record the reasons for exclusion of citations at the full-text screening stage. As previously described, no language adaptations to the screening process for non-English studies will be required at the title and abstract stage, as citations retrieved from electronic databases are in English. However, full texts in French will be reviewed independently and in duplicate by French-speaking crowd reviewers. All screening conflicts (either between 2 humans or between a machine and 1 human) will be resolved by third-party adjudication by members of the core study team, as required.

Data Charting

We will perform data abstraction using piloted electronic data abstraction forms created in insightScope. The data abstraction forms were created by one investigator (DZ) and piloted by members of the core investigative team (JDM, BR, NP, KO, and KC) against a total of 8 eligible studies. We have described the data items in Textbox 2. Before formal data abstraction, we will provide all data abstractors with training (ie, a data abstraction manual and training video). Data will be abstracted independently and in duplicate by 2 reviewers from the crowd. We will abstract data from the full-text publication and any related publications, referenced published protocols, or supplementary materials. Where necessary, one reviewer will extract graphical data using SourceForge Plot Digitizer, which will be checked by the second reviewer for accuracy. Moreover, where necessary, data will be abstracted from publications in French by French-speaking crowd reviewers, independently and in duplicate. The study lead (DZ) will resolve conflicts in data abstraction, as required. In the event of missing or unclear data related to our outcomes of interest, we will make a maximum of three attempts to contact the study authors for clarification.
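As a sketch of how duplicate abstractions might be compared to flag fields needing third-party adjudication; the record structure and field names below are invented, not the insightScope form:

```python
# Hypothetical sketch: comparing two independent data abstractions of the
# same study and listing the fields that need adjudication. Field names
# are invented placeholders, not the actual abstraction form.
from dataclasses import dataclass, asdict

@dataclass
class AbstractionRecord:
    author: str
    year: int
    study_design: str
    cci_definition: str

def conflicting_fields(a: AbstractionRecord, b: AbstractionRecord) -> list:
    """Return the names of fields where the two abstractors disagree."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if da[k] != db[k]]

r1 = AbstractionRecord("Smith", 2015, "retrospective cohort", "PICU stay >14 d")
r2 = AbstractionRecord("Smith", 2015, "retrospective cohort", "PICU stay >28 d")
print(conflicting_fields(r1, r2))   # ['cci_definition'] -> adjudicate
```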
Textbox 2. Data items.

Study characteristics
• Author name and contact information
• Functional status characteristics (using validated tools, as categorized by the article)
• Severity of illness characteristics (using validated tools, as categorized by the article)
• Comorbidity and medical complexity status, including if and how patient medical complexity and comorbidity were described in the study
• Prevalence and types of organ support technologies in study participants (eg, mechanical ventilation, feeding support, circulatory support [vasoactive drugs, extracorporeal membrane oxygenation, ventricular assist device], and extrarenal filtration)
• Types of study participants (eg, children with chronic critical illness or prolonged pediatric intensive care unit admission, families, siblings, and health care providers)

Outcomes evaluated
• Stated primary outcome, including how it was measured and the result
• Patient outcomes, including mortality (pediatric intensive care unit, hospital, and overall), discharge disposition (eg, high-dependency unit, ward, rehabilitation facility, and home), and health-related quality of life
• Family and sibling outcomes (any, as categorized by the article)
• Health care provider outcomes (any, as categorized by the article)
• Health care system outcomes, including length of stay (pediatric intensive care unit and hospital), pediatric intensive care unit bed-day use or consumption, pediatric intensive care unit readmission rate or occurrence, and pediatric intensive care unit cost analyses

Results Synthesis

We will report data related to study characteristics descriptively using counts with percentages, or measures of central tendency and variance (eg, means with SDs or medians with IQRs), as appropriate. We will use tables to narratively summarize data related to study population definitions, including the prevalence of the population studied (if applicable) and contextual variables related to study type, setting, and the evaluated patient population. We will describe the important elements of the methodology used to derive the case definition of CCI and prolonged PICU admission, including but not limited to the study size, study design, setting(s), and whether criteria for agreement or convergence were established a priori. We will group included studies into one of the two definition domains based on their explicitly identified study population of interest (ie, CCI or prolonged PICU admission) and summarize data for each separately. We plan to categorize patient- and family-based outcomes evaluated in the included studies according to the domains of the PICU Core Outcome Set [35] (ie, overall health, cognitive function, physical function, and emotional function), as applicable, to help formulate a priority agenda for future research. Statistical analyses will be performed using SPSS Statistics, version 26 (IBM), as necessary. We will not perform any meta-analyses of epidemiological or outcome data collected from primary publications, in keeping with the descriptive nature of this scoping review. In keeping with scoping review methodology, we will not complete a risk of bias assessment for included studies or undertake a certainty of evidence assessment for this scoping review [13,14]. However, the limitations of the nature and extent of populations and outcomes evaluated in current pediatric CCI research will be addressed in the Discussion section of the paper.
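For illustration, the per-domain descriptive summary planned above might look like the following; the table contents and column names are invented, and the protocol itself specifies SPSS Statistics rather than Python:

```python
# Hypothetical sketch of the planned descriptive synthesis: counts and
# median/IQR of a study characteristic per definition domain. Values and
# column names are invented placeholders; pandas is assumed.
import pandas as pd

studies = pd.DataFrame({
    "domain": ["CCI", "CCI", "CCI",
               "prolonged PICU admission", "prolonged PICU admission"],
    "sample_size": [120, 5400, 310, 90, 2100],
})

summary = studies.groupby("domain")["sample_size"].describe(
    percentiles=[0.25, 0.5, 0.75]
)
print(summary[["count", "25%", "50%", "75%"]])   # n and median with IQR bounds
```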
Results

The database search, citation screening, and data abstraction phases of this scoping review started on March 3, 2021, and were completed on April 16, 2021. Data verification is ongoing, with data analysis to follow; the analysis of the review, with results, is anticipated to be completed by fall 2021.

Crowdsourcing and ML Algorithm Methods

A total of 32 crowdsourced reviewers completed the test set of 100 citations, achieving a mean sensitivity of 91.6% (SD 0.09). Two reviewers with exactly 70% sensitivity on the test set were provided additional training on the study protocol and study selection criteria before citation screening. Of these 32 reviewers, 28, with a mean test set sensitivity of 92.1% (SD 0.09), participated in the citation screening. Reviewers originated from 11 countries and 5 continents. As a prerequisite to incorporating an ML algorithm into citation screening, we determined the optimal algorithm and sensitivity threshold for operationalization. The sensitivities of the five evaluated ML algorithms when used alone or in combination with a single human reviewer to assess citations from the training set are presented in Figures 2 and 3, respectively. The 4110-citation training set included 28 citations meeting the inclusion criteria following assessment by 2 reviewers after full-text review (ie, true positives). The two highest performing ML algorithms were bag of words and term frequency-inverse document frequency, demonstrating 93% and 100% sensitivity, respectively, at a threshold of 80% when citation assessments were performed by the ML algorithm alone. The sensitivities of both these ML algorithms were 100% at a threshold of 80% when citation assessments were performed by the ML algorithm in combination with a single human reviewer. Additional sensitivity analyses were performed using the bag of words and term frequency-inverse document frequency algorithms with a separate threshold for citations without an abstract (ie, title only) to evaluate whether these citations perform differently. For this analysis, the threshold for citations with an abstract was fixed at 70%, and the threshold for citations without an abstract varied among 30%, 50%, and 70%. The bag of words and term frequency-inverse document frequency algorithms demonstrated sensitivities of 100% for all dual threshold combinations (ie, 70/30, 70/50, and 70/70), both when citations were assessed by the ML algorithm alone and in combination with a single human reviewer. We subsequently evaluated the bag of words and term frequency-inverse document frequency ML algorithms on a validation set of 2174 additional citations. Again, these citations were screened independently and in duplicate by crowd reviewers. The validation set included nine unique citations that met the inclusion criteria. On the basis of the sensitivity results from the training set, we chose to apply the following conservative thresholds to evaluate performance on the validation set: 70% for citations with an abstract and 50% for citations with a title only. Both the bag of words and term frequency-inverse document frequency algorithms demonstrated a sensitivity of 92% when citations were assessed using the ML algorithm alone, and a sensitivity of 100% when used in combination with a single human reviewer. In addition to sensitivity, we evaluated the specificity of the ML algorithms.
Both the term frequency-inverse document frequency and bag of words algorithms demonstrated a similar specificity at the 70% threshold (ie, 0.68), but the term frequency-inverse document frequency algorithm retained three fewer false positive citations. Given this marginally better performance, term frequency-inverse document frequency was selected as the final ML algorithm. Considering that ML algorithms are relatively novel in the conduct of large scoping reviews, we adopted a conservative approach to integrating the algorithm into citation screening for the remaining citations in the review. For citations with an abstract, the following three assessment tiers were selected:
1. Citations with a score at or above the 70% threshold were assessed by duplicate independent human assessment.
2. Citations with a score between the 30% and 70% thresholds were assessed by machine plus one independent human assessment.
3. Citations with a score at or below the 30% threshold were assessed by machine-only assessment.
For citations without an abstract (ie, title only), we adopted a conservative approach by selecting a 50% threshold and no option for machine-only citation assessment. Therefore, citations with a score at or above the 50% threshold were assessed by duplicate independent human assessment, and citations with a score below the 50% threshold were assessed by machine plus one independent human assessment.

Strengths and Limitations

This scoping review is the first phase of a larger research program to systematically evaluate children with CCI. To our knowledge, this scoping review is the first evidence synthesis to provide a systematic overview of the definitions used in the literature for identifying children with CCI and prolonged PICU admission. As such, the results of this review will be used to inform the development of a consensus case definition for pediatric CCI and to set a priority agenda for future research. Defining pediatric CCI is an essential first step in understanding the epidemiology of this high-risk PICU population, and a prerequisite for conducting future interventional and outcomes research. As the aims of this scoping review are descriptive and exploratory in nature, this preliminary study will identify the potential need to conduct a systematic review to address targeted and explanatory epidemiologic questions. This scoping review will also demonstrate the feasibility and validity of two innovative evidence synthesis methods, crowdsourcing and an ML algorithm, for executing a large scoping review. This review has several important limitations. As the goal of this scoping review was to describe the definitions of pediatric CCI and prolonged PICU admission, it is limited to studies that explicitly identified and defined these concepts. This review will potentially miss records that did not use this specific language to define their population, and it will exclude studies that did not provide or reference a definition of pediatric CCI or prolonged PICU admission. Similarly, the study selection criteria in this review will exclude studies that focused only on the concept of prolonged technology use (eg, prolonged mechanical ventilation, prolonged extracorporeal membrane oxygenation). We seek to broadly understand pediatric CCI, and as part of this objective, we will describe how the concept of organ support technology is applied in the published definitions of pediatric CCI.
Conclusions

This scoping review is the first, to the best of our knowledge, to (1) provide a systematic overview of the definitions used in the literature for identifying children with CCI and prolonged PICU admission and (2) describe the demographic and clinical characteristics of the populations historically defined in the pediatric CCI literature. This comprehensive literature review will evaluate existing or suggested definitions of pediatric CCI. In the absence of established definitions, the review results will be used in future research to identify the key terms and constructs to inform the development of a working definition of pediatric CCI. Defining pediatric CCI is an essential first step in understanding the epidemiology of this high-risk PICU population and a prerequisite for conducting future interventional and outcomes research.
SARS-CoV-2 causes DNA damage, cellular senescence and inflammation

We discovered that SARS-CoV-2 infection causes DNA damage both in cultured cells and in vivo. Mechanistically, SARS-CoV-2 degrades the enzyme CHK1, which leads to a reduction in dNTPs and impaired DNA replication. Moreover, inhibition of the formation of binding protein 53BP1 foci by the SARS-CoV-2 nucleocapsid protein hinders the repair of damaged DNA. The ensuing accumulation of DNA damage causes cellular senescence and inflammation.

The discovery

To establish whether SARS-CoV-2 infection results in activation of the DNA damage response (DDR), we infected human cell lines with SARS-CoV-2 and performed immunoblot analysis of DDR markers. We showed that SARS-CoV-2 infection activated the DDR, and we confirmed the presence of DNA fragmentation through the use of comet assays. These events were accompanied by pro-inflammatory signaling and the establishment of cellular senescence (a form of cellular aging). We next probed the molecular mechanisms that caused the DNA damage. We discovered that SARS-CoV-2 expresses proteins that, by distinct mechanisms, hijack cellular nucleotide metabolism. Specifically, the viral factors ORF6 and NSP13 promoted the degradation of checkpoint kinase 1 (CHK1), an enzyme involved in coordinating the DDR. A reduction in CHK1 levels is thought to result in the accumulation of rNTPs, which we propose is needed to fuel viral replication (SARS-CoV-2 being an RNA virus). However, the accumulation of rNTPs seemed to occur at the expense of dNTPs, which we detected at lower levels after SARS-CoV-2 infection and which resulted in impaired DNA replication and DNA damage. In addition, we found evidence that DNA breaks accumulated because they were not efficiently repaired. Indeed, we discovered that the SARS-CoV-2 nucleocapsid protein impaired focal recruitment of the binding protein 53BP1 and decreased DNA repair by competing with 53BP1 for association with damage-induced long non-coding RNAs. Overall, these findings suggest that SARS-CoV-2 both induces DNA damage and impairs its repair, ultimately causing cells to age and spread inflammation (Fig. 1). Finally, we demonstrated that these events happened in vivo in SARS-CoV-2-infected mice and in patients with COVID-19.

The implications

Our findings reveal the profound impact that SARS-CoV-2 infection has on cellular biology, threatening the most important cellular constituent: nuclear DNA. The accumulation of DNA damage is known to be associated with cancer and aging [1]. Although the long-term consequences of severe COVID-19 on lung cancer incidence are unknown at present, accelerated aging phenotypes have been reported [2,3]. Our results may provide a mechanistic explanation for post-COVID-19 syndromes with hastened aging features, to which the establishment of cellular senescence and the triggering of inflammatory processes might be a crucial contributing factor. Indeed, chronic inflammation is thought to be the underlying cause of lung fibrosis [4], brain degeneration [5] and overall frailty. Thus, local events initially restricted to the respiratory system may have systemic consequences. Our study does not exclude the possibility that additional viral gene products also threaten genome stability by hitherto unknown mechanisms. Moreover, whether the mechanisms described here are altered in the various SARS-CoV-2 variants remains unknown.
In the future, it will be interesting to explore the possibility of exploiting the altered nucleotide metabolism of SARS-CoV-2-infected cells to develop anti-viral strategies or interventions aimed at taming the cellular consequences of COVID-19.

Fig. 1 | Impact of SARS-CoV-2 infection on genome integrity and cellular senescence. Schematic of the events that follow SARS-CoV-2 infection and lead to reduced genomic integrity, with subsequent inflammation and cellular senescence. Two mechanisms are presented. In one (left), depletion of CHK1 leads to loss of the ribonucleoside-diphosphate reductase subunit RRM2, which results in a reduction in cellular levels of dNTPs and DNA replication stress. In the other (right), the SARS-CoV-2 nucleocapsid (N) protein binds to damage-induced long non-coding RNAs (dilncRNAs), which results in inactivation of 53BP1 and defects in DNA repair. © 2023, Gioia, U. et al., CC BY 4.0.

Expert opinion

"I believe that this manuscript presents important information to broadly understand host-viral pathogen interactions, as they relate specifically to the induction of a DDR. Most viruses will need to develop mechanisms to modulate the DDR, as exemplified here." An anonymous reviewer.
Interactions between egg parasitoids and predatory ants for the biocontrol of the invasive brown marmorated stink bug Halyomorpha halys

The brown marmorated stink bug Halyomorpha halys is an Asian species that has become a major agricultural pest in North America and Europe. Ants from the genus Crematogaster are predators of H. halys nymphs in Asia, as well as in the Mediterranean, where known native predators are still few. At the same time, ants usually do not harm H. halys eggs, which are the target of the main biological control agents, the scelionid parasitoids of the genus Trissolcus. However, ants, as generalist predators and territorial organisms, may kill or displace a variety of other insects, potentially interfering with parasitoids and biological control programmes. We conducted laboratory experiments to investigate the interactions between the Mediterranean ant Crematogaster scutellaris and the parasitoids T. japonicus and T. mitsukurii, evaluating the possibility that the ants could damage the parasitized eggs, attack the parasitoids during emergence or interfere with the egg-laying behaviour of female parasitoids. Our results demonstrate that C. scutellaris is not able to damage parasitized eggs and is not aggressive towards adult parasitoids at any stage. The presence of ants can slow down the parasitization rate of T. mitsukurii females in the smallest laboratory setups; however, this was not observed in a more natural setting. We suggest that ants may play a complementary role together with egg parasitoids in the control of H. halys without interfering with each other.

In Italy, where H. halys was first officially detected in 2012 (Maistrello et al., 2016), it quickly became a key pest of fruit orchards (Maistrello et al., 2017); in 2019, the estimated damage to fruit production in northern Italy was €588 million, with yield losses of up to 80%-100% in orchards (CSO Italy, 2020). To counter this invasive pest, the use of broad-spectrum insecticides has increased dramatically, resulting in a major disruption to previous integrated pest management (IPM) programmes, with negative consequences for the environment (Maistrello et al., 2017). Long-term and more sustainable management strategies include conservation and classical biological control.

In its native Asia, H. halys egg masses are attacked by different species of egg parasitoids, among which the Scelionidae Trissolcus japonicus (Ashmead) and T. mitsukurii (Ashmead) have the highest specificity and parasitization efficiency, ranging between 50% and 90% (Qiu, 2010; Yang et al., 2009; Zhang et al., 2017). In northern Italy, adventive populations of T. mitsukurii and T. japonicus were first detected in 2016 (Scaccini et al., 2020) and 2018 (Sabbatini Peverieri et al., 2018), respectively. A large-scale survey conducted throughout northern Italy and Switzerland in 2019 showed that both species had rapidly spread into all types of habitats where H. halys is present, with a wide distribution, continuous expansion and high levels of parasitism (Zapponi et al., 2021).
Furthermore, in 2020, T. japonicus was selected by the Italian Ministry of Environment and the Protection of the Land and Sea as a candidate for classical biocontrol of the invasive pest (MATTM, 2020), and thousands of these parasitoids were released in the northern Italian regions over 3 years, leading to one of the largest biocontrol projects ever attempted in Italy and Europe. Meanwhile, laboratory studies conducted to verify the potential of generalist antagonists showed that ants are among the most efficient predators of H. halys (Bulgarini, Badra, et al., 2021; Bulgarini, Castracani, et al., 2021; Castracani et al., 2017). Specifically, experiments with the two European ants most frequently encountered in agroecosystems, Crematogaster scutellaris (Olivier) and Lasius niger (Linnaeus), demonstrated their ability to kill H. halys nymphs without damaging eggs or adult stink bugs (Bulgarini, Castracani, et al., 2021; Castracani et al., 2017). Further studies conducted with the Japanese ants Crematogaster matsumurai Forel, 1901 and C. osakensis Forel, 1900, as well as the cosmopolitan invasive Argentine ant Linepithema humile (Mayr, 1868), had a similar outcome (Kamiyama et al., 2021).

This study aims to investigate the interactions between the native European ant C. scutellaris and the exotic egg parasitoids T. japonicus and T. mitsukurii, with respect to their consequences for the efficiency of biological control of H. halys. We hypothesized that ants may attack H. halys egg parasitoids as they do H. halys nymphs (Bulgarini, Castracani, et al., 2021; Castracani et al., 2017). In particular, we investigated two moments in the life of adult parasitoids in which they could be particularly vulnerable: the moment in which the female parasitoid lays her eggs, which requires her to stand still on the stink bug egg mass for an extended time, and the moment of emergence of the newly metamorphosed individuals, as they need time to break an opening in the stink bug egg to free themselves. We also tested whether parasitized eggs might be more susceptible to ant attack than non-parasitized eggs, which are usually not attacked, and whether their attractiveness to ants could vary over time.

Insect rearing and equipment

Adults of Halyomorpha halys were collected during the spring and summer of 2020-2021 from urban parks in the Modena and Reggio Emilia provinces (Emilia-Romagna, Italy) using the tree-beating technique. Stink bugs were kept in BugDorm cages (17.5 × 17.5 × 17.5 cm) placed in climatic chambers at 26°C and L16:D8. Each cage contained up to 50 adults with a sex ratio of 50:50. The stink bugs were fed twice a week with fresh organic fruits and peanuts. Sheets of filter paper were placed in the cages as egg-laying substrates. Freshly laid egg masses of H. halys (<24 h old) with 27-28 eggs were used for the experiments. Rarer egg masses with different numbers of eggs were excluded.

Trissolcus japonicus and T. mitsukurii adults were obtained from field-collected H. halys egg masses and were reared in BugDorm cages (12 × 12 × 12 cm) in climatic chambers at 23°C and L16:D8, and fed with drops of a honey-water solution (70% organic honey solution).
Every 3 days, freshly laid egg masses of H. halys (<24 h old) were offered to the parasitoids. The parasitized egg masses were individually transferred to empty vials and stored at 26°C and L16:D8 pending the emergence of the parasitoids. The newly emerged parasitoids of each species were mated (one female and one male) for 1 week in vials (Falcon 50 mL, the lid of which was replaced by a piece of pantyhose fixed with an elastic band) and supplied with drops of the honey-water solution. After the mating period, the females of each species were used in the experiments.

Crematogaster scutellaris (Olivier, 1792) ants were collected in the wild from Parma (Italy) and reared in plastic cages under the following conditions: 25 ± 1°C, RH 55 ± 10%, L16:D8. They were fed with the same honey-water solution used for the parasitoids and with Tenebrio molitor Linnaeus larvae. Ants endured a 48 h starvation period prior to the experiments. All tests were conducted in a climatic chamber at 26°C and L16:D8 in the Laboratory of Applied Entomology of the University of Modena and Reggio Emilia.

All video recordings were performed using an HC-V380 Panasonic camera. A binocular microscope (Zeiss Stemi 508) was used to verify whether ants and/or parasitoids were alive after the experiments.

Experimental procedure

We carried out three experiments. Experiment I aimed at verifying whether parasitized eggs and emerging parasitoids can be attacked and damaged by ants. Experiments II and III aimed at evaluating whether ants and adult parasitoids behave aggressively towards each other in a simplified context (one-to-one interactions in a Petri dish) and in a more complex system (a parasitoid couple, a larger number of ants, and a plant), respectively. In the simplified context of Petri dishes, single workers of C. scutellaris retain their basic foraging behaviours, killing and carrying away prey insects (e.g. Giannetti et al., 2022; Schifani, Giannetti, & Grasso, 2023; Schifani, Peri, Giannetti, Alınç, et al., 2023; Schifani, Peri, Giannetti, Colazza, & Grasso, 2023). In all experiments, we counted the number of stink bugs and parasitoids that emerged from the eggs, and the number of surviving parasitoids.

Experiment I: Interactions between ants and parasitized eggs or emerging parasitoids

To verify whether parasitized eggs attracted the interest of ants, we prepared egg masses in which parasitization of all eggs by T. japonicus or T. mitsukurii was established during preliminary observations. Specifically, after introducing a parasitoid female to each egg mass, its activities were video-recorded and the number of markings was checked. The following behaviours were observed: probing the host, inserting the ovipositor and performing head-pumping movements and body vibrations associated with egg release, partially extracting the ovipositor and sweeping it over the surface of the host egg with 'figure 8'-shaped movements, as described by Field (1998).

Each egg mass was transferred to the centre of a Petri dish (⌀ = 9 cm), followed by the introduction of a single ant worker. The Petri dish was then filmed for 40 min to collect behavioural data, after which the ant was removed. Egg masses were exposed to ants either 0, 2, 4, 6 or 9 days after parasitization, to test the behaviour of ants towards parasitized eggs at different developmental stages, or during parasitoid emergence, to test ant behaviour towards emerging adults. Six replicates were performed for each developmental stage of each parasitoid species, both for the treatment (presence of the ant) and for the control (no ant).
Experiment II: 1 versus 1 interactions in Petri dishes (40 min)

Tests were conducted by placing a non-parasitized egg mass in the centre of a Petri dish (⌀ = 9 cm) and introducing a single female parasitoid. As soon as the parasitoid made its first contact with the egg mass, we introduced an ant worker. Once the ant was introduced, we filmed the Petri dish for 40 min to collect behavioural data. No ants were introduced into the control replicates, and filming started as soon as the parasitoid made its first contact with the egg mass. At the end of each test, we checked under the microscope whether the ant and the parasitoid were still alive and whether either of them had suffered visible injuries. We conducted 10 treatment replicates with ants and 10 control replicates (no ants) for each of the two parasitoid species.

Experiment III: Interactions in insect cages (24 h)

Tests were conducted using a 30 × 30 × 30 cm insect cage. At the centre of each cage, we placed the following items: (i) a Capsicum annuum L. plant (approximately 15 cm tall); (ii) a Falcon vial containing a female and a male parasitoid of either T. japonicus or T. mitsukurii; (iii) a plastic jar (⌀ = 4 cm, height = 7 cm) containing a group of 50 ant workers, partially filled with small wood pieces, and with the inner upper edge covered with an ant-repellent substance (50% glycerine oil, 50% petroleum jelly) to prevent their escape. To start the experiments, we performed the following steps: (i) on an apical leaf of each plant we clipped a 1 × 3 cm piece of filter paper with a single egg mass previously attached with a glue stick; (ii) we placed a 12 cm wooden stick connecting the plant on one side and the wood pieces in the plastic jar on the other, allowing the ants to get out of the jar and visit the plant; (iii) we opened the lid of the vial, allowing the two parasitoids to move freely inside the cage. Each experimental test lasted 24 h, after which we removed the egg masses and the parasitoids and checked whether the latter were alive or dead. The egg masses were incubated until they hatched or parasitoids emerged. We conducted 24 replicates per parasitoid species (T. japonicus or T. mitsukurii), equally divided between replicates with ants and control replicates without ants.

Behavioural data

The behaviour of ants and parasitoids was analysed by video-recording the experiments and analysing the resulting videos with the software Solomon Coder (https://solomon.andraspeter.com/). Concerning ants, we recorded the time between their entry into the experimental arena and their first contact with the eggs or parasitoids (contact latency), and the number of times the following behaviours, directed towards the eggs or the parasitoids as targets, were observed: (i) antennation (making contact with the antennae); (ii) biting with the mandibles; (iii) licking; (iv) walking over the female parasitoid; (v) threatening with open mandibles (assuming a motionless posture with open mandibles); (vi) threatening with the stinger by directing it towards the target at close range, as typical of the spatulate stinger of Crematogaster ants; (vii) gaster rising, an alarm posture typical of Crematogaster ants in which the gaster is raised in a position perpendicular to the body plane.

Concerning parasitoids, we recorded the number of times the following four behaviours were observed: (i) oviposition (including marking), which consists of probing the host, inserting the ovipositor, and making head-pumping movements and body vibrations associated with egg release, partially extracting the ovipositor and sweeping it across the surface of the host egg in ∞-shaped movements, as described by Field (1998); (ii) chase-off, consisting of running directly towards the ant, sometimes lunging with raised wings, making contact and biting it, as described by Field (1998); (iii) escape, i.e. moving away from the egg mass; (iv) resting, i.e. stopping the oviposition and standing immobile.
Statistical analyses

Statistical analyses were conducted using the software R 4.1.2 (R Core Team, 2020). We used Wilcoxon rank-sum tests to analyse differences between two groups, and Kruskal-Wallis tests followed by Dunn's post hoc tests with Benjamini-Hochberg p-value adjustment to analyse differences between multiple groups. Statistical tests were not run for behaviours occurring in less than 15% of the trials.

Results

Data collected in all experiments are provided in Table S1.

Experiment I: Interactions between ants and parasitized eggs or emerging parasitoids

Regardless of the Trissolcus species tested in the trials, ants never caused any noticeable harm to parasitized eggs, and never attacked emerging parasitoids. The number of emerged parasitoids did not differ among treatments (distinguishing between T. japonicus or T. mitsukurii; eggs exposed to ants 0, 2, 4, 6 or 9 days after parasitization; and eggs never exposed to ants before emergence) (0.09 < p < 1.000, Dunn's test). In trials with ants, antennation was always observed, and no significant differences were detected between treatments with the two parasitoid species (0.09 < p < 1.000, Dunn's test). The escape behaviour was recorded in 58% of the trials with emerging parasitoids and was not significantly different between the two parasitoid species (p = 0.23, Wilcoxon rank-sum test). The following behaviours were extremely rare across the 72 trials that were run (<15%): biting (10 trials), licking (1 trial), threatening with open mandibles (6 trials), gaster rising (0 trials) and threatening with the stinger (2 trials).

Experiment II: 1 versus 1 interactions in Petri dishes (40 min)

Ants were never observed to attack and harm either of the two parasitoid species. Approaching ants often caused T. mitsukurii females to temporarily leave the egg masses, slowing their overall parasitization rate. On the contrary, T. japonicus females remained on the egg masses even when ants touched them, and their parasitization rate was not affected by the ants' presence. The escape behaviour of parasitoids differed significantly between treatments (p < 0.001; Kruskal-Wallis test): it was higher for T. mitsukurii in the presence of ants compared to the other three treatments (0.001 < p < 0.035; Dunn's post hoc test), and more frequently observed for T. japonicus with ants compared to T. mitsukurii without ants (p = 0.038; Dunn's post hoc test), while no significant differences were detected in the remaining comparisons (Figure 1). Treatment affected the number of eggs that were parasitized (p = 0.033; Kruskal-Wallis test): in the presence of ants, T. mitsukurii parasitized a significantly lower number of H. halys eggs compared to T. japonicus trials with no ants (p = 0.023; Dunn's test), while no significant differences were detected in the remaining comparisons. The parasitoids managed to parasitize 11 eggs on average (39% of all eggs in the egg masses). Contact latency of ants approaching parasitized eggs was not significantly different between replicates with T. japonicus and with T. mitsukurii (p = 0.123; Wilcoxon rank-sum test). Walking over was observed in half of the T. japonicus trials, and threatening with the gaster in one T. japonicus trial, while neither behaviour was observed in trials with T. mitsukurii. Resting behaviour was only observed once per parasitoid species. Gaster rising and chase-off behaviours were never observed.
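As an illustration of the test sequence reported above (a Kruskal-Wallis test followed by Dunn's post hoc tests with Benjamini-Hochberg adjustment), a minimal sketch is given below. The original analyses were run in R; here we use Python, the escape counts are invented placeholders, and the scikit-posthocs package is an assumed dependency:

```python
# Hypothetical re-creation of the escape-behaviour comparison: four
# treatments compared with a Kruskal-Wallis test, then pairwise Dunn's
# post hoc tests with Benjamini-Hochberg adjusted p-values. All counts
# are invented; the study itself used R 4.1.2.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp   # assumed dependency providing Dunn's test

data = pd.DataFrame({
    "treatment": ["Tm+ants"] * 5 + ["Tm"] * 5 + ["Tj+ants"] * 5 + ["Tj"] * 5,
    "escapes":   [6, 4, 7, 5, 8,  1, 0, 2, 1, 0,  3, 2, 4, 2, 3,  1, 0, 1, 2, 0],
})

groups = [g["escapes"].values for _, g in data.groupby("treatment")]
H, p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

if p < 0.05:
    # pairwise Dunn's tests with Benjamini-Hochberg ("fdr_bh") adjustment
    print(sp.posthoc_dunn(data, val_col="escapes",
                          group_col="treatment", p_adjust="fdr_bh"))
```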
Experiment III: Interactions in insect cages (24 h)

Ants had no impact on the parasitization activity or mortality of either T. mitsukurii or T. japonicus. There were no statistically significant differences in the number of parasitoids hatched from the eggs in relation to the parasitoid species or the presence of ants (p = 0.821; Kruskal-Wallis test), nor any significant difference in the number of parasitoids found alive after the experiments (p = 0.424; Kruskal-Wallis test) (Figure 2).

Discussion

In our experiments, ants never directly attacked Trissolcus parasitoids, including in the potentially vulnerable moment of their emergence. Furthermore, parasitization did not make H. halys eggs any more vulnerable to ant attack, suggesting that parasitized and non-parasitized eggs are equally unlikely to suffer any damage by this ant. Stink bug eggs are rarely successfully attacked by ants, and the few known examples refer to cases of relatively large ants capable of considerable biting force, while chemical cues (or their absence) may also contribute to avoiding ant attacks (Castracani et al., 2017; Schifani, Giannetti, & Grasso, 2023).

Crematogaster scutellaris still affected the behaviour of T. mitsukurii in the confined space of Petri dishes, even if it did not perform any direct attack against the parasitoid. Notably, in the presence of an ant worker, female T. mitsukurii significantly more often stopped the egg-laying process and moved away, which diminished the number of stink bug eggs parasitized during the observation time, albeit not significantly. The same did not occur with T. japonicus, since the latter mostly ignored the approach of an ant, and even upon contact it normally avoided abandoning the eggs. However, these interesting behavioural differences did not appear to play a role when ant-parasitoid interactions were observed in the more complex and larger cage environment, where T. japonicus and T. mitsukurii had similar parasitization success regardless of the presence of C. scutellaris workers. Since C. scutellaris ants never harmed the parasitoids in direct encounters and had only a slight disturbance effect when artificially enclosed with T. mitsukurii in a very confined space, it is highly unlikely that interactions between C. scutellaris and Trissolcus parasitoids play a significant role under field conditions. Crematogaster scutellaris and the two non-native egg parasitoids T. japonicus and T. mitsukurii are currently co-occurring and rapidly spreading across the northern Italian regions invaded by H. halys (Zapponi et al., 2021). Multiparasitism laboratory experiments with T. japonicus and T. mitsukurii indicate that the order of arrival on the host's eggs is crucial to ensure the most successful parasitization, and that competition between the two species did not result in reduced H. halys egg mortality (Costi et al., 2022).
As biological control agents, ants are appreciated for their polyphagy, territorial aggressiveness, resistance to starvation, and the possibility of manipulating their behaviour (Choate & Drummond, 2011; Offenberg, 2015). Negative effects are mainly observed when ants have a mutualistic relationship with pest insects, usually honeydew-producing hemipterans, which they can defend against the predatory insects and parasitoids used to control them (e.g. Jiggins et al., 1993; Mgocheki & Addison, 2009). The relationship of ants with parasitoids of ant-mutualistic hemipterans is generally antagonistic but not always relevant to biocontrol (Schifani, Peri, Giannetti, Colazza, & Grasso, 2023), and there are a few exceptions of myrmecophilous parasitoids adapted to exploit the ants' presence (Pierce & Mead, 1981; Völkl, 1992). However, as generalist predators, ants may also attack parasitoids that do not interact with their mutualistic networks (Appiah et al., 2014). We have observed substantial neutrality between ants and parasitoids in our experiments. Crematogaster scutellaris is an ant that may play a useful role in pest management thanks to its common presence in agroecosystems and its predatory abilities against other pests such as the codling moth Cydia pomonella (L.), the ambrosia beetle Xylosandrus compactus (Eichhoff, 1876) or the stink bug Nezara viridula (Giannetti et al., 2022; Schifani, Giannetti, & Grasso, 2023; Schifani, Peri, Giannetti, Alınç, et al., 2023). Notably, both stink bugs and parasitoids are attracted by sugary nectars, whose provision may serve the purpose of manipulating their behaviour or enhancing their efficacy as biocontrol agents (Colazza et al., 2022; McIntosh et al., 2020; Schifani et al., 2020).

By revealing that ants neither interfere with egg parasitization nor attack egg parasitoids, our study encourages the possibility that ants and parasitoids may be integrated in the control of H. halys, with a combined effect on both eggs and nymphs that needs to be evaluated in field assessments (Bulgarini et al., 2022; Campolo et al., 2015; Castracani et al., 2017; Offenberg, 2015; Wright & Diez, 2011).
Figure 1. Most significant results of experiment II, in which the effects of Crematogaster scutellaris ants on the behaviour of Trissolcus japonicus and T. mitsukurii females parasitizing Halyomorpha halys eggs were observed in the restricted setting of a Petri dish for 40 min. (a) The number of times female parasitoids moved away from the egg masses (escape behaviour); (b) the number of stink bug eggs the parasitoids were able to parasitize. In pairwise comparisons, significantly different treatments are connected by black horizontal lines, and asterisks indicate significance levels (*p < 0.05; **p < 0.01; ***p < 0.001).

Figure 2. Most significant results of experiment III, in which the effects of Crematogaster scutellaris ants on the behaviour of Trissolcus japonicus and T. mitsukurii females parasitizing Halyomorpha halys eggs were studied in the more natural setting of an insect cage with a plant, in which the insects were released for 24 h. (a) The number of stink bug eggs the parasitoids were able to parasitize; (b) the number of parasitoids found alive at the end of the experiment. In both cases, no statistically significant differences between treatments were detected.
Performance of HPGe Detectors in High Magnetic Fields

A new generation of high-resolution hypernuclear γ-spectroscopy experiments with high-purity germanium (HPGe) detectors is presently being designed at the FINUDA spectrometer at DAΦNE, the Frascati φ-factory, and at PANDA, the antiproton-proton hadron spectrometer at the future FAIR facility. Both the FINUDA and PANDA spectrometers are built around the target region, covering a large solid angle. To maximise the detection efficiency, the HPGe detectors have to be located near the target, and therefore they have to be operated in strong magnetic fields, B ~ 1 T. The performance of HPGe detectors in such an environment has not been well investigated so far. In the present work, VEGA and EUROBALL Cluster HPGe detectors were tested in the field provided by the ALADiN magnet at GSI. No significant degradation of the energy resolution was found, but a change in the rise time distribution of the pulses from the preamplifiers was observed. A correlation between rise time and pulse height was observed and is used to correct the measured energy, recovering the energy resolution almost completely. Moreover, no problems in the electronics due to the magnetic field were observed.

Introduction

High resolution γ-ray spectroscopy based on high-purity germanium (HPGe) detectors represents one of the most powerful experimental tools in nuclear physics. The introduction of this technique led to significant progress in the knowledge of nuclear structure. It has recently been proven that strangeness nuclear physics can also benefit from the same advantages: the energy resolution of hypernuclear levels has been drastically improved from 1-2 MeV to a few keV FWHM [1; 2]. The success of this technique has encouraged other groups working on FINUDA at DAΦNE [3] and PANDA at FAIR [4; 5] to investigate whether it could be extended and incorporated in their set-ups. The FINUDA and PANDA magnetic spectrometers have a cylindrical geometry and are built around the target region, covering a large solid angle (Ω ≈ 4π sr). To maximise the detection efficiency, HPGe arrays are to be mounted near the target region, implying the operation of these detectors in a strong magnetic field (up to B ≈ 1 T). At DAΦNE, collisions between electrons and positrons of 510 MeV lead to an abundant production of Φ(1020) mesons, decaying predominantly into low energy (∼16 MeV) K+K− pairs. The FINUDA spectrometer is centred around a set of eight thin (0.2-0.3 g/cm2) nuclear targets surrounding the interaction point. Since the year 2003, Λ-hypernuclei produced by K− mesons stopped in these targets have been studied [6]. At PANDA, relatively low momentum Ξ− can be produced in p̄p → Ξ−Ξ̄+ or p̄n → Ξ−Ξ̄0 reactions. A rather high luminosity is anticipated because the antiprotons are retained in a storage ring, which allows the use of thin targets. The associated Ξ̄ will undergo scattering or (in most cases) annihilation inside the residual nucleus. Strangeness is conserved in the strong interaction, and the annihilation products contain at least two anti-kaons that can be used as a tag for the reaction. In combination with an active secondary target, high resolution γ-ray spectroscopy of double hypernuclei and Ω atoms will become feasible for the first time [5; 7]. The aim of the work described in this paper is to study the feasibility of using HPGe detectors in high magnetic fields and to study the associated effects on their energy resolution.
The pulse shape distortion is also investigated.

Germanium Detectors in Magnetic Fields

Germanium detectors, which are typically used in low energy nuclear spectroscopy (see e.g. [8; 9]), are seldom operated in magnetic fields [10], and their behaviour under such conditions is not well known. Generally, the deflection of the charge carriers in the magnetic field and the Penning effect in the vacuum surrounding the semiconductor can play a substantial role for the operation of large volume semiconductor devices in high magnetic fields. For extended detector volumes, and hence long drift paths, the deflection of the charge carriers in the magnetic field may result in a larger rise time of the signal, due either to longer drift paths or to enhanced trapping and detrapping. With standard shaping amplifiers, the output signal then reflects the interplay between the charge collection process and the transfer function of the amplifier. In practice, an enhanced charge collection time will cause a reduction of the output signal even though the complete charge is eventually collected. In case of trapping, the timescales involved may be significantly larger than the typical time constants of the electronic network, and this reduction is referred to as a ballistic deficit [12]. Sometimes this term is also used in a more general sense [13; 14] for any decrease of the output signal due to an enhanced signal rise time, irrespective of its origin. In any case, the associated larger fluctuations of the signal rise time will deteriorate the energy resolution. Trapping of charge carriers and losses due to recombination depend on the type of the germanium. In the case of the n-type germanium detectors studied in the present work, trapping is expected to be less significant than in p-type germanium detectors. Nonetheless, a reduction of the signal may be particularly important in the case of major radiation damage of the crystal lattice (see e.g. [11]). The bending force of the magnetic field causes charged particles produced within the volume between the Ge crystal and the capsule to spiral around the field lines. The longer travel times of the rest gas ions may result in an enhanced Penning effect. The interaction of electrons with the residual gas within the capsule may cause secondary ionisation, leading eventually to discharges. The generation of an electric field perpendicular to the magnetic field lines and the direction of the current (Hall effect) may affect small electronic components carrying large currents. While the widespread use of silicon detectors and their associated readout electronics in tracking systems demonstrates the feasibility of operating highly integrated electronic devices in high magnetic fields (see e.g. [15; 16; 17; 18]), the effect on the electronics of high resolution devices like HPGe detectors has not been studied yet.

Experimental Details

To verify that HPGe detectors can be safely and efficiently operated in a high magnetic field, two different kinds of detectors have been tested: the EUROBALL Cluster detector [19] and the VEGA detector [20].

The EUROBALL Cluster Detector

The EUROBALL Cluster detector consists of seven large hexagonal, n-type, closely packed, tapered Ge crystals housed in a common cryostat [19]. The crystals have a length of 78 mm and a diameter of 70 mm at the cylindrical back end. To protect the sensitive intrinsic surface of the detectors and to improve reliability, each crystal is encapsulated. The capsules are made of aluminium with a thickness of 0.7 mm.
The distance of the Ge surface to the inner wall of the capsule is only 0.7 mm, which gives a distance of 3.0-3.5 mm between the edges of two neighbouring detectors in a cluster. Each capsule is hermetically sealed by electron beam welding of the capsule lid. The vacuum is maintained by a getter material which is active up to a temperature of 150 °C. The cold part of the preamplifier is mounted on the capsule lid. The detector has a typical energy resolution of 2.1 keV (FWHM) at 1.332 MeV for a 60Co source using an AC coupled preamplifier. The AC coupling was chosen in order to operate the detector capsule on ground potential, which facilitates the close packing and the cooling of several detectors in a common cryostat. Since the seven crystals in the cluster are identical, only three crystals were taken as representative for the measurements.

The Segmented Clover Detector, VEGA

The super-segmented clover detector VEGA [20] consists of four large coaxial n-type Ge crystals which are four-fold electrically segmented. The crystals have a length of 140 mm and a diameter of 70 mm. They are arranged in the configuration of a "four-leaf" clover and housed in a common cryostat. The core contact is AC coupled and the segments are DC coupled. The preamplifiers used in the VEGA detector are similar to those operated with the Cluster detector. Three crystals (B, C and D), and all four segments of one of the three (crystal B), were used in the present work. Prior to the studies in a magnetic field, the energy resolution of the detectors was measured in the laboratory with a 60Co source to be about 2.2 keV (FWHM) at 1.332 MeV.

Experimental Set-up

Two series of measurements were performed using the ALADiN dipole magnet [21]. For both series, the HPGe detectors and a 60Co γ-ray source with an activity of 370 kBq were positioned inside the magnet, with the source placed in front of each detector. The ALADiN magnet aperture of 1.5 × 0.5 m2 restricts the placement to a detector with its geometrical axis in the horizontal plane of the magnet. The direction of the magnetic field lines was perpendicular to the geometrical detector axis, as shown in Fig. 1 (right). The magnetic field is maximal in the centre of the magnet and decreases along the z-direction, as illustrated in Fig. 1 (left). The detector end-caps were placed as close as possible to the centre of the magnet in order to expose the germanium crystals to the highest magnetic field (about 7 cm from the centre, as shown in Fig. 1). The 60Co source was placed at a distance of 27 cm and 20 cm away from the end-cap of the EUROBALL Cluster and VEGA detector, respectively. Fig. 1 (right panel) shows the scheme of the crystals inside the Cluster detector. For the measurements with the EUROBALL Cluster, three (C, D, F) of the seven Ge crystals were used. Channels C and D were biased at 4000 V and channel F at 3500 V. For the measurements with the VEGA detector, data from three (B, C, D) of the four Ge crystals were analysed. The geometry of the crystals of the VEGA detector inside the magnet is shown in Fig. 2. They were biased at 4000 V. The measurements can be divided into two groups: measurements done without magnetic field, and those in which the magnetic field was tuned to 0.3 T, 0.6 T, 0.9 T, and 1.4 T for the EUROBALL Cluster detector, and to 0.6 T, 1.1 T, 1.4 T, and 1.6 T for the VEGA detector.
For each EUROBALL Cluster channel (C, D, and F), one of the preamplifier outputs was split into two signals. One signal was fed to a spectroscopy amplifier (Ortec 572) with 3 µs shaping time, whose output was digitised by an Analog-to-Digital Converter (ADC Silena 4418/V, 8 channels, 12 bit resolution), and the second signal was fed to a VME 100 MHz Flash-ADC (FADC SIS3300, 8 channels, 12 bit resolution). The other output from the preamplifier was sent to a Timing-Filter-Amplifier (TFA Ortec 474), whose output was discriminated by a Constant-Fraction-Discriminator (Ortec CF 8000) to be used as a timing signal. The trigger was formed by a logic OR of the outputs from the three CFD channels. The ADC was read out via the CAMAC bus. Since the VEGA crystals are electrically segmented, the readout electronics differs from that of the EUROBALL Cluster in the trigger signal. One output of the preamplifier (core signal) for each channel was split into two branches, as was done for the EUROBALL Cluster set-up. The preamplifier outputs corresponding to the four segments of channel B were fed to the FADC. The trigger was formed from a coincidence of the logic OR of the CFD outputs of the four segments and an external trigger derived from the central core signal of channel B.

Data Analysis and Results

The analysis presented here is focused on the determination of the energy resolution from the pulse height spectra of both detectors using conventional analogue electronics, and on the study of the dependence of the pulse shape sampled by an FADC on the magnetic field. A detailed study of HPGe detectors operating at high rates in magnetic fields, based on the observation of pulse shapes, will be published in a forthcoming paper. The energy resolution was extracted from the pulse height spectra by parameterising the 1.332 MeV γ-ray full-absorption peak from a 60Co source. From the parameterised line shape, the Full Width at Half Maximum (FWHM) was extracted. Two different methods were used to extract the energy resolution of the detectors, depending on whether the spectra were measured in a magnetic field or not. Since in the absence of a magnetic field the line shape of all detectors is very close to a single Gaussian distribution, those pulse height spectra were parametrised by a Gaussian function. For non-zero magnetic fields, the convolution of a Gaussian distribution and an exponential decay function was chosen in order to also describe the tail on the low-energy side of the peak. The observed line shape was fitted by a full-absorption peak superimposed on a quadratic background. Pulse height spectra taken with a shaping amplifier are presented in Fig. 3 for the 1.332 MeV 60Co γ-ray line. Obviously, the line shape at B = 1.6 T (dotted histogram) is significantly different from the line shape measured without field (solid histogram). The influence of the magnetic field causes a tail on the low-energy side. The dependence of the energy resolution (FWHM) on the strength of the magnetic field is shown for the EUROBALL Cluster as well as for the VEGA detector in Fig. 4. The energy resolution of one of the EUROBALL Cluster crystals is worse than the resolution of the two other crystals of the same detector because of pick-up noise in its electronic read-out. In addition, the peak maximum of the γ-ray line shifts towards smaller energies with increasing field strength, as shown in Fig. 5.
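For illustration only, the in-field line-shape parameterisation described above might be fitted along the following lines. This is not the analysis code used in this work: scipy's exponentially modified Gaussian (exponnorm), mirrored in energy, is used here as a stand-in for the Gaussian-exponential convolution, a flat background replaces the quadratic one, and all numerical values are synthetic:

```python
# Illustrative sketch only: fitting the 1.332 MeV full-absorption peak with
# a Gaussian convolved with an exponential low-energy tail plus a flat
# background. The mirrored exponnorm argument puts the tail at low energies.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def tailed_peak(e, area, mu, sigma, tau, bkg):
    """Gaussian of width sigma convolved with an exponential of constant tau."""
    k = tau / sigma
    return area * exponnorm.pdf(-(e - mu), k, loc=0.0, scale=sigma) + bkg

rng = np.random.default_rng(0)
e = np.linspace(1320.0, 1340.0, 400)                  # energy axis (keV)
truth = tailed_peak(e, 4000.0, 1332.0, 1.0, 1.5, 5.0)
counts = rng.poisson(truth).astype(float)             # fake pulse height spectrum

popt, _ = curve_fit(tailed_peak, e, counts, p0=(3000.0, 1332.0, 1.2, 1.0, 4.0))
area, mu, sigma, tau, bkg = popt
print(f"mu = {mu:.2f} keV, Gaussian-component FWHM = {2.355 * sigma:.2f} keV")
```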
In order to clarify whether the shifts of the energy peak are due to a loss of charge carriers, or whether this shift is caused by the interplay between an increased charge collection time in the Ge detector and the transfer function, we have studied the FADC signals. Fig. 6 shows the averaged pulse shape signals for 1.332 MeV γ-rays measured at zero magnetic field (solid line) and at B = 1.6 T. As can be seen already in Fig. 6, the pulse shape is modified by the magnetic field. Fig. 7 shows the distribution of the rise time (defined as the time it takes for the pulse to rise from 10% to 90% of its full amplitude) for different values of the magnetic field for the VEGA detector. A significant change of its mean value by approximately 200 ns and a broadening of the distribution can be observed. The ratio R between the mean rise time and the root-mean-square value decreases from values above 4 at low magnetic fields to 3.0 at B = 1.6 T. This behaviour is opposite to what is expected for a purely diffusive motion of the charge carriers; in the latter case, an increase of R proportional to the square root of the rise time would have been expected. The simultaneous large shift and broadening of the rise-time distributions on the one hand, and the rather similar asymptotic values of the pulses seen in Fig. 6 on the other hand, suggest that incomplete signal integration by the main amplifier is the main origin of the observed energy shift. To verify this conjecture, the dependence of the amplifier transfer function on the variation of the rise time was investigated with a pulse generator. This study confirmed that the rise-time variations are the main source of the energy shift seen in Fig. 5. In order to explore the possibility of correcting the shift of the pulse height by measuring the rise time event-by-event, we show in Fig. 8 the correlation of the deduced γ-ray energy and the rise time for different magnetic fields. These distributions have been obtained event-by-event for the 1.332 MeV γ-line of one of the segments of VEGA channel B. For all magnetic fields, the low-energy tail of the γ-ray peak (see Fig. 3) is associated with an increased rise time. The strong correlations observed in Fig. 8 provide a characterisation of the amplifier at different values of the rise time. The fit of a parabolic function to the correlation (see dark lines in Fig. 8) enables the correction of the energy spectra event-by-event for different magnetic fields. The correction functions are similar for all measurements at non-zero magnetic field, but they are shifted relative to that of the measurement at B = 0 T. Presently, the origin of this strong shift is not understood. Fig. 9 shows the corrected γ-ray energy spectra (dashed line) for VEGA channel B at 1.6 T in comparison to the one without correction (hashed). A significant improvement in the peak shape has been obtained. After applying this correction to the pulse height spectra, an improvement in the energy resolution for VEGA channel B as well as for EUROBALL Cluster channel C has been achieved, as shown in Fig. 4. The open triangles in Fig. 4 represent the energy resolution for both detectors after this correction.
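A minimal sketch of the two analysis steps just described — extracting the 10%–90% rise time from a digitised preamplifier trace, and applying the parabolic rise-time correction event-by-event — is given below. The 10 ns sampling period corresponds to the 100 MHz FADC mentioned above; the function names and the reference rise time t_ref are hypothetical, not taken from the paper.

```python
import numpy as np

def rise_time_10_90(trace, dt_ns=10.0, n_baseline=100):
    """10%-90% rise time (ns) of a pulse sampled by a 100 MHz FADC."""
    pulse = trace - trace[:n_baseline].mean()   # subtract the pre-pulse baseline
    amplitude = pulse.max()
    i10 = np.argmax(pulse >= 0.10 * amplitude)  # first sample above 10% of amplitude
    i90 = np.argmax(pulse >= 0.90 * amplitude)  # first sample above 90% of amplitude
    return (i90 - i10) * dt_ns

def rise_time_correction(energies, rise_times, t_ref):
    """Fit a parabola to the energy-vs-rise-time correlation and shift
    every event back to the energy expected at the reference rise time."""
    coeffs = np.polyfit(rise_times, energies, deg=2)
    return energies - np.polyval(coeffs, rise_times) + np.polyval(coeffs, t_ref)
```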
All crystals of both detectors show a similar behaviour in the magnetic field, with an energy resolution degradation of about 1-2 keV. However, the resolution is still sufficient to perform γ-ray spectroscopy on hypernuclei. The asymmetry of the line shape appears as a low-energy tail in the pulse height spectra in Fig. 3. Moreover, the mean value of the energy spectrum exhibits a shift to low energies, as shown in Fig. 5. The measurements were performed over a period of two days without observing any problems in the electronics or sparking effects. After the measurements, the original energy resolution was recovered. A significant shift and broadening of the rise-time distribution has been observed in the presence of a high magnetic field. This observation reflects the effect of the magnetic field on the charge collection process itself. The strong correlation between the pulse height measured with analogue electronics and the rise time measured by an FADC, observed for various magnetic field strengths, reveals the change in rise time as the major contribution to the degradation of the energy resolution. Employing this correlation for a rise-time correction of the energy allows the original energy resolution to be almost recovered. The remaining degradation at fields larger than 1 T amounts to approximately 0.5 keV. One could expect a dependence on the orientation of the detector with respect to the magnetic field. In our case, a complete test of the orientation of the detector could not be carried out because of technical limitations [22]: the aperture of the ALADiN magnet does not allow the axis of the detector to be rotated freely. Nonetheless, the geometry of the set-up used in the present study reflects the operating conditions of HPGe detectors in the future FINUDA experiment, since the detectors will be set up almost perpendicular to the direction of the magnetic field. On the other hand, the set-up of the HPGe detectors in the future PANDA experiment requires a further test with the magnetic field orientation almost parallel to the detector axis. The HPGe detectors were found to operate well in magnetic field conditions similar to those expected in future hypernuclear experiments at FINUDA and at PANDA. However, the detectors used in the present work are coupled to large dewars for their cooling with liquid nitrogen. These dewars have been the main obstacle for the study of the dependence of the energy resolution on the orientation of the detector in the magnetic field of ALADiN. Furthermore, they affect the detector integration in both the FINUDA and PANDA magnetic spectrometers. In order to circumvent these problems, an electromechanical cooling system coupled to a few encapsulated HPGe crystals is currently under development. Fig. 9. Two γ-ray energy spectra for 60Co measured with VEGA channel B at the maximum value of the magnetic field. The hashed spectrum presents the pulse height spectrum for the measurement without correction, and the dashed line the corrected pulse height spectrum at a field of 1.6 T. Both spectra have been individually calibrated.
2014-10-01T00:00:00.000Z
2006-06-16T00:00:00.000
{ "year": 2006, "sha1": "1f7a35a69ed845c1e53e55b86f094947d2f523e1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nucl-ex/0606022", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "93c1ad7282f7e7b7059a256aae3547726e4c6c16", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220429181
pes2o/s2orc
v3-fos-license
Application of Bioelectrical Impedance Analysis to Detect Broiler Breast Filets Affected With Woody Breast Myopathy Woody breast (WB) myopathy in modern broilers is causing major meat quality issues and consumer complaints. The poultry industry is sorting out WB filets through the inconsistent manual hand-palpation method. The bioelectrical impedance analysis (BIA) method was evaluated as a rapid and objective WB detection method. Freshly deboned broiler breast filets (15 filets × 2 categories × 3 trials) sorted by hand-palpation into severe woody (SW) and normal (N) categories were analyzed for BIA values, cook loss, and texture (BMORS method). SW filets had significantly (P < 0.05) higher resistance and reactance compared to N, indicating that BIA can be used to detect WB filets. In another experiment, we determined the ability of BIA to differentiate between four WB severity levels using the whole filet. Significant differences were observed in the resistance and reactance of normal and other WB categories; however, there were no significant differences among the mild, moderate, and severe WB categories. Segmental BIA of those filets indicated that BIA can be used to separate the cranial, medial, and caudal regions of the breast filet based on the presence of WB myopathy. The accidental discovery of spaghetti breast in the samples demonstrated the significance of compounding factors in analyzing WB meat using BIA. INTRODUCTION Woody breast (WB) myopathy in modern broilers is causing major meat quality issues and consumer complaints, and this problem is further exacerbated by the industry not having a reasonable objective means to detect it. Convenience, versatility, variety, and health benefits have increased chicken consumption in the United States from 36 lbs/capita/year in 1965 to approximately 94 lbs/capita/year in 2018 (National Chicken Council, 2020a). To satisfy the growing demand, the broiler industry has developed fast-growing big-broiler strains (>6 lbs live wt.) that yielded over 42 billion lbs of ready-to-cook poultry meat in 2018 (National Chicken Council, 2020b). The fast-growing broiler strains have developed a degenerative muscle myopathy in the pectoralis major termed woody breast (WB) myopathy, which was first reported by Dr. S. Bilgili at Auburn University in 2013 (Bilgili, 2013). Tijare et al. (2016) reported that the incidence of WB in United States broilers was 96.1%, with 48% exhibiting mild WB, 28% exhibiting moderate, and 20% exhibiting severe WB. Filets with WB are hard (very dense) to the touch, with varying degrees of hardness due to collagen infiltration along the ventral area of the breast. The WB condition is difficult to detect because it can vary in the degree of hardness, be focal or diffuse within the breast, and be randomly distributed within a broiler flock (Tijare et al., 2016). Histology of WB indicates muscle fiber fragmentation, hyalinization, swelling of myofibers, necrotic muscle fiber replacement with connective tissue (fibrosis), macrophage infiltration, and the presence of irregular patches of adipose tissue (Bilgili, 2013; Velleman and Clark, 2015). Collagen infiltration is a major cause of the hard WB texture, and the infiltration pattern differs among broiler strains (Sihvo et al., 2014; Velleman and Clark, 2015; Tijare et al., 2016). Biochemical analysis of filets indicates higher moisture (4.5%) and lower protein (4%) content than normal filets (Wold et al., 2017). Woody breast is apparent to average consumers as well as to culinary experts.
Breasts exhibiting WB characteristics have a hard texture and are both visually and texturally unappealing, leading to low overall consumer acceptance. Filets affected with WB have approximately 50% lower marinade uptake and approximately 27% higher cook loss compared to normal breast meat. As a result, poultry processors have to either deal with rejected orders and complaints or, if WB is identified prior to sale, offer the product as a lower-quality product for a reduced price and absorb significant profit losses (Petracci et al., 2015). Woody breast myopathy is a meat quality issue faced by the global poultry industry. Although established and identified in the United States, WB is now being detected in other countries including Italy, Denmark, the United Kingdom, and Finland (Sihvo et al., 2014; Trocino et al., 2015; Brot et al., 2016; Larsen et al., 2016; Tijare et al., 2016). A study from Italy (Trocino et al., 2015) observed that males from two broiler genotypes had a 3-times higher occurrence of WB than females. The detection method used by industry is primarily hand palpation, which is subjective, problematic, and unreliable. Typically, a plant will train employees to hand palpate the breast filets as they pass on a conveyor, sense filet hardness, and subsequently classify them into ranked severities based on processor-specific thresholds, making industry-wide protocols non-existent (Table 1). Personal observations and discussions with United States poultry processors indicate that hand-palpation is a subjective evaluation method that can give false-positive/negative WB scores, pass on WB to customers, and remove high-valued normal filets, thus affecting profits and quality. Hand-palpation is also costly since 8-10 additional trained personnel have to be employed per line to sort WB filets. A standard objective measure is needed for industry. Other technologies are available that can provide data-rich objective measures of products at the cellular level, and these may bridge the gap needed for industry to utilize an industry-wide objective measure. Recently, Wold et al. (2017) reported the successful application of near infrared spectroscopy as an on-line method to detect WB in processing plants. Bioelectrical impedance analysis (BIA) is a technology that has been proven to measure many different properties at the cellular level and has been used on a number of organisms ranging from fish to humans (Kyle et al., 2004). Although initially designed as a human medical device, BIA has been used extensively to non-invasively measure proximate composition, health, and freshness of fish and meats (Swatland, 2002; Cox and Hartman, 2005; Chevalier et al., 2006). A new application of BIA may be to detect WB myopathy. The objective of this study was to conduct proof-of-concept research to evaluate whether WB leads to alterations in bioelectrical properties such that BIA can be used to detect WB filets. Since processors differentiate breast filets based on varying severity levels of WB myopathy, experiments were conducted to determine if BIA could differentiate between those severity levels. It would also be beneficial for processors to detect which segment of the breast filet has WB so that it can be excised and the rest of the filet sold at a higher price. Segmental BIA analysis of the cranial, medial, and caudal segments of the intact filets was conducted to determine if BIA can differentiate between normal and WB categories.
Proof-of-Concept Freshly deboned breast filets were sorted into normal and severe WB meat and analyzed for bioelectrical impedance (resistance and reactance), cook loss, and texture. Broiler Breast Meat Freshly deboned (2-3 h post-mortem) butterfly breast filets from 8-wk broilers (all male; Ross 708; 8-9 lbs live weight) were obtained from a local broiler processor. All samples were transported to the Department of Poultry Science, Auburn University under refrigeration and analyzed (except texture) within 2-3 h. The left filet from each butterfly filet was cut off and immediately sorted into normal (no WB) and severe WB categories using the hand-palpation method (Tijare et al., 2016). Bioelectrical Impedance Analysis The left-side filet was used to measure bioelectric properties using the hand-held BIA equipment (Seafood Analytics, Clinton Town, MI, United States) on the dorsal side of the filet (Figure 1). The BIA unit consists of two signal electrodes and two detecting electrodes that introduce an 800 µA, 50 kHz AC current capable of voltage changes between 3.75 and 10.60 V. The four electrodes are connected to the product using food-grade stainless compression electrodes (RJL Systems, Detroit, MI, United States). A four-electrode array is used to approximate parallel field lines within the tissue, negate product-electrode interfaces, and approximate a cylindrical shape. Once the electrodes are in contact with the product, the circuit is connected, and the device takes two measures, resistance and reactance. Data was collected on the dorsal surface (feather side) of the filets weighing 464 ± 66 g. Cook Loss The left-side filet used for BIA measurement was further used for cook loss analysis immediately (<15 min) after BIA measurement (approximately 5 h post-mortem). Cook loss is expressed as weight loss after cooking the filet relative to its initial weight. Briefly, individual filets were weighed, placed on a raised stainless-steel wire rack in a stainless-steel pan (53.02 × 32.54 × 10.16 cm; Vollrath Co., LLC, Sheboygan, WI, United States), covered with aluminum foil, and cooked in a pre-heated (176.6 °C) forced-air convection oven (Vulcan HEC5D, Troy, OH, United States) to an internal temperature of 74 °C measured using a stainless-steel digital thermometer (Taylor 1470FS Digital Cooking Thermometer and Kitchen Timer, Las Cruces, NM, United States). After cooking, the filets were cooled to room temperature (22 ± 2 °C) in the covered pans. Texture Analysis The cooked filet used for cook loss analysis was placed individually in a separate zip-loc bag and stored overnight at 4 °C. The filets were tempered at room temperature for 3-4 h and analyzed for texture using the Blunt Meullenet-Owens Razor Shear (BMORS) method with a Texture Analyzer (Model TX-XT2i, Texture Technologies, Scarsdale, NY, United States; Lee et al., 2008; Morey and Johnson, 2019). Statistical Analysis Breast filets (15 filets/severity/trial) belonging to the normal and severe WB categories were obtained, and the experiment was conducted in 3 separate replicate trials (15 filets × 2 severity levels × 3 replicate trials = 90 filets). Replications were tested using one-way ANOVA with Tukey's HSD (p < 0.05) to determine significant differences between replications. In the absence of significant differences between replications, the data from all replications were combined and analyzed together. Data were analyzed using one-way ANOVA with Tukey's HSD to determine significant differences at p < 0.05.
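To make the cook loss calculation and the statistical comparison concrete, the sketch below computes cook loss from raw and cooked weights and runs the one-way ANOVA with Tukey's HSD described above. This is a minimal illustration, not the authors' analysis script; the cook loss values are hypothetical stand-ins for the study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def cook_loss_pct(raw_wt_g, cooked_wt_g):
    """Cook loss (%): weight lost during cooking relative to the raw filet weight."""
    return 100.0 * (raw_wt_g - cooked_wt_g) / raw_wt_g

# hypothetical cook loss values (%) for normal (N) and severe woody (SW) filets
normal = np.array([34.2, 35.8, 36.1, 33.9, 35.0])
severe = np.array([40.5, 41.9, 42.0, 40.2, 41.3])

f_stat, p_value = f_oneway(normal, severe)
tukey = pairwise_tukeyhsd(
    endog=np.concatenate([normal, severe]),
    groups=np.array(["N"] * len(normal) + ["SW"] * len(severe)),
    alpha=0.05,
)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(tukey.summary())
```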
Experiment 2 Segmental Bioelectrical Impedance Analysis Freshly deboned chicken breast filets were randomly collected (without sorting into WB categories; all male; Ross 708; 8-9 lbs live weight) from the same poultry processor. The filets were stored for 18 h at 4 °C prior to the analysis. Each filet was sorted into one of the WB categories (0, 1, 2, and 3), and the cranial segment was pinched to evaluate the turgor to determine the presence/absence of spaghetti meat. The filet was then analyzed for bioelectrical impedance (resistance and reactance) as per the method above. Each filet was visually divided into three segments, the cranial, medial, and caudal regions, and hand-palpated to determine if the segment was normal (no perceivable hardness) or woody (perceived hardness). Each segment was then analyzed for bioelectrical impedance (resistance and reactance) using two sets of small (5.08 cm) compression electrodes with 1 cm between the signaling and receiving electrodes (Figure 2). Statistical Analysis A total of 120 freshly processed random filets were collected on three separate processing days and analyzed for bioelectrical impedance. Data were combined from all processing days and analyzed for differences in resistance and reactance of the different WB categories of the whole filet and of the segments, with and without spaghetti meat, using one-way ANOVA with Tukey's HSD to separate means at p < 0.05. The segmental BIA data (resistance and reactance) on the cranial and caudal regions, with and without spaghetti meat, were analyzed using linear discriminant analysis (LDA) with 60% of the data used for training and 40% for validation. The prediction accuracy (%) and error (%) of the model in classifying each segment as normal or woody were determined (Wold et al., 2017). Experiment 1 The differences in the quality characteristics and the bioelectrical impedance parameters are given in Table 2. Breast filets affected with severe woody breast (SW) myopathy had a significantly higher (p ≤ 0.05) cook loss (41.17%) compared to normal breast filets (35.16%), indicating that WB leads to reduced water holding capacity of the meat (Table 2). BMORS texture analysis indicated that WB meat had significantly higher (p ≤ 0.05) peak counts (10.77) compared to normal breast meat (5.45). However, the peak shear force and total energy for normal meat were higher (p ≤ 0.05) than for severe WB meat. Bioelectrical properties (resistance and reactance) were measured using the hand-held CQ Reader (Table 3). However, there were no significant differences (p > 0.05) in resistance between mild, moderate, and severe WB. No significant differences in reactance were observed between normal, mild, and moderate WB (Table 3). Similar data trends in resistance and reactance were observed when the data from whole filets affected with spaghetti meat were removed (Table 3). A total of 73 filets out of 360 had spaghetti breast myopathy. Overall, the normal and WB occurrence in the cranial region was 27.78 and 72.22%, respectively; in the medial region, 40.56 and 31.67%, respectively; and in the caudal region, 50 and 50%, respectively. Spaghetti breast was prevalent mostly (24-32%) in the normal segments, while its occurrence was lower in the woody segments (10-18%). Contrary to the whole-filet resistance and reactance, the normal segments had a lower resistance compared to the woody segments.
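As an illustration of the LDA classification step described in the statistical analysis above, the sketch below trains a linear discriminant on (resistance, reactance) pairs with a 60/40 train/validation split and reports the validation accuracy. The synthetic measurements are hypothetical stand-ins, not the study data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical per-segment BIA readings (ohms): 60 normal and 60 woody segments
resistance = np.concatenate([rng.normal(72, 5, 60), rng.normal(80, 5, 60)])
reactance = np.concatenate([rng.normal(28, 3, 60), rng.normal(37, 3, 60)])
labels = np.array(["normal"] * 60 + ["woody"] * 60)

X = np.column_stack([resistance, reactance])
X_train, X_valid, y_train, y_valid = train_test_split(
    X, labels, train_size=0.60, stratify=labels, random_state=0
)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"validation accuracy = {100.0 * lda.score(X_valid, y_valid):.1f}%")
```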
Resistance and reactance of the cranial and medial segments were significantly different between normal and woody meat, while they were not significantly different for the caudal region. Removal of the resistance and reactance data from spaghetti meat segments did not alter the differences between normal and WB meat (Table 4). LDA of the cranial segment data using resistance and reactance could predict normal and woody segments with 68.69 and 57.75% accuracy, respectively. When the spaghetti meat data were removed, the prediction accuracy of both normal and woody segments increased by approximately 3%. When resistance and reactance data from the cranial segments affected by spaghetti meat were analyzed, the model was able to predict normal and WB at 52-54% accuracy (Table 5). Experiment 1 Woody breast myopathy is a major meat quality issue in the poultry industry. At the macro level, raw WB meat is hard to the touch, while Aguirre et al. (2018) reported that taste panelists described cooked WB meat as crunchy and fibrous. At the molecular level, WB is characterized by conformational changes in muscle proteins that lead to higher extra-myofibrillar water compared to normal meat (Tasoniero et al., 2017). Previous research conducted in our lab using 7-T Magnetic Resonance Imaging of woody and normal breast filets showed that the WB filets had significantly higher interstitial water compared to normal filets (Kennedy-Smith et al., 2017), which potentially alters the bioelectrical properties of breast filets. The differences in the intra- and extra-cellular water can impact meat quality parameters such as cook loss. Higher cook loss in WB filets may also indicate higher levels of free water that can be easily removed from the meat during cooking. Higher levels of free water can be a result of increased accumulation of collagen in the WB filets preventing the binding of free water to the myofibrillar proteins (Kennedy-Smith et al., 2017). Textural differences in WB (Table 2) can be attributed to the alteration in muscle architecture. Histological analysis of WB indicates fibrosis, perimysial thickening, proliferation of connective tissue, and a higher collagen content (Soglia et al., 2016), which can potentially contribute to differences in the texture of WB meat. Similar to the current study, Solo (2016) also observed that the total shear energy increased as the severity of the filets increased from normal to severe WB. Moreover, the author stated that the normal filets had a peak count of 5.73, which was significantly lower than that of WB filets. Cook loss and texture analysis are destructive methods to differentiate between normal and WB. Moreover, these methods are laboratory intensive and are not suitable for in-plant application. Hence, the hand-held BIA equipment was used to determine if there are bioelectrical differences between normal and WB meat such that those differences can be further used to distinguish WB from normal breast meat. Resistance (Rs) measures the ability of a substance to conduct electricity, while reactance (Xc) measures its ability to hold a charge (Lukaski, 1987); both are influenced by the biochemical composition of food (Kyle et al., 2004; Hafs and Hartman, 2011). As observed in the cook loss data (Table 2), the changes in the WB muscle architecture can influence the distribution of water within the tissue, which can result in alteration of the electrical properties of the meat.
It was observed that normal breast filets had lower Rs and Xc (72.18 Ω and 28.04 Ω, respectively) compared to severe WB filets (78.27 Ω and 37.54 Ω, respectively). The resistance is impacted by intra- and extracellular water, while reactance mainly arises from cell membranes (Kyle et al., 2004). Cells do not conduct electricity at low frequencies and hence act as insulators, forcing the current to pass through the extracellular fluid, which was higher in severe WB (see section "Cook Loss", Table 2), thus increasing the resistance of the muscle (Kyle et al., 2004). Alternatively, Kyle et al. (2004) also state that an increase in suspended non-conducting material will increase the resistance of the conducting water. In the case of WB myopathy, the non-conducting material could be connective tissue infiltration and granulation tissue (Sihvo et al., 2014; Velleman, 2015; Soglia et al., 2016), which increases the resistance of the meat. These changes in the muscle, combined with differences in intra- and extra-cellular water, influence the differences in the resistance and reactance of WB compared to normal meat. The proof-of-concept research indicates that BIA has the potential to be used as an effective tool to detect severe WB filets at a processing plant. Tijare et al. (2016) noted that, based on severity, WB can be classified into normal, mild, moderate, and severe (Table 2). Poultry processors have favored the 4-tier classification as they can remediate their losses by staggering the price or the utilization of breast meat depending on the severity level. It would be further beneficial for poultry processors to determine which section of the breast filet has woody characteristics so that they can salvage the remaining breast filet and sell it at a higher price. Since hand-palpation is laborious and subjective, and since Experiment 1 demonstrated that BIA can be used to differentiate between severe WB and normal filets, a study was conducted to determine if BIA can be used to detect varying WB severities as well as the segments of the breast filet that are affected by WB. Experiment 2 When compared to Experiment 1, the normal breast filets in Experiment 2 had similar resistance values; however, WB resistance was reduced to approx. 69 Ω. This difference could be explained by the differences in the experimental setup. Compared to Experiment 1, where the filets were analyzed within 6 h post-slaughter, in Experiment 2 the filets were stored in the refrigerator for 18 h. Given that WB filets have higher extracellular water and modified muscle architecture, they exhibit higher drip loss during the first 24 h compared to normal meat (Tasoniero et al., 2017; Sun et al., 2018), resulting in lower resistance in the stored WB meat in Experiment 2. The significantly higher resistance of normal breast filets, similar to Experiment 1, indicates that normal meat had a higher water holding capacity even after storage compared to WB. Higher reactance in WB meat can be attributed to the connective tissue and fibrosis acting as insulators (Kyle et al., 2004). Accidental findings during the project indicated that the presence of spaghetti breast myopathy (loose muscle fibers) can influence the resistance of the meat, suggesting that the loose muscle fibers in spaghetti meat act as insulators, thus increasing the resistance values.
The overlap in the varying severity levels can be explained by the design of the CQ Reader, which has two sets of electrodes at set distances, wherein each set takes replicate readings that are averaged and then presented as the data for the whole filet. However, with varying WB severity levels, different regions of the breast filet may or may not have woody tissue, and averaging the data for the entire filet can mask the effect of the differences in electrical properties of those regions. BIA measurement of segments of the breast filet can provide a clearer picture of the presence of WB in each segment. The other major reason for the overlap could be the difficulty in accurately detecting WB severities by hand-palpation, especially after 18 h of post-slaughter storage, which impacts the texture and the intra- and extra-cellular water of the meat (Sun et al., 2018). In the segmental BIA, the cranial segment had the lowest percentage (27.78%) of normal breast, followed by the medial (40%) and caudal (50%) segments. This shows that if the woody segment in the cranial region is accurately detected, the processor can remove that segment and still utilize the remaining breast filet. Significant differences were observed between normal and woody segments in terms of resistance and reactance. Contrary to the whole-filet BIA, segmental BIA indicates higher resistance for woody than for normal meat, while the reactance pattern remains the same. Segmental BIA measures the electrical properties of a localized area; the woody segments had higher reactance, indicating increased non-conducting material (fibers, connective tissue, and granulation tissue; Sihvo et al., 2014; Velleman, 2015; Soglia et al., 2016) in the tissue water, which would have resulted in higher resistance. Discriminant analysis conducted using resistance and reactance values indicated that BIA can be used to differentiate between normal and woody cranial and medial segments. Although the resistance and reactance data with and without spaghetti meat were similar for each segment, removal of the data from spaghetti meat-affected cranial and medial segments increased the prediction accuracy for normal and woody meat by approximately 3% (Table 5). The accidental finding of spaghetti breast meat and its electrical properties could be of high interest to the poultry industry. Significant differences in the BIA properties of breast meat with and without spaghetti meat indicate that this new myopathy should be taken into consideration while developing predictive models for WB myopathy using bioelectrical properties. Bioelectrical impedance analysis can be used as a tool to differentiate between normal and severe WB breast meat. However, additional efforts are needed to further define and increase the accuracy of BIA to differentiate between varying severity levels. An increased sample number can potentially improve the discrimination ability of the model. Segmental BIA could be used as a tool to detect woody segments in the filets, thus providing more granular data as well as a better understanding of the spread of the myopathy in the muscle. The accidental finding on the interference of spaghetti breast meat in detecting WB can open new research areas to explore the ability of BIA to detect spaghetti breast myopathy. The research also demonstrates that BIA values can change with the freshness of the filets, and each processor must develop resistance and reactance threshold values based on their own process.
In its current state, the hand-held device can be used as a near-line technology by quality assurance departments to detect WB prevalence. Moreover, the data obtained by the processors can be used to study WB prevalence between flocks, different nutrition regimes, as well as management practices. Further, processors can use BIA technology to separate WB meat from normal meat. Overall, the hand-held BIA technology can help in reducing consumer complaints due to WB. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. AUTHOR CONTRIBUTIONS AM is the lead PI who conceptualized the idea of using bioelectrical impedance analysis to detect woody breast, secured funding, and actively conducted the research. AS conducted the proof-of-concept experiment (Experiment 1) mentioned in the manuscript. LG conducted experiments, collected data, and analyzed it. MC is the co-developer of the patented CQ Reader used for BIA experiments throughout the project, and supported the work with data analysis, technical knowledge, and manuscript writing. All authors contributed to the article and approved the submitted version.
2020-07-10T13:15:31.470Z
2020-07-10T00:00:00.000
{ "year": 2020, "sha1": "de393de96b4240505bfe161a2bf3d328b987b148", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2020.00808/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de393de96b4240505bfe161a2bf3d328b987b148", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
246578241
pes2o/s2orc
v3-fos-license
Ontological Ship Behavior Modeling Based on COLREGs for Knowledge Reasoning: Formal expression of ship behavior is the basis for developing autonomous navigation systems, which supports the scene recognition, the intention inference, and the rule-compliant actions of the systems. The Convention on the International Regulations for Preventing Collisions at Sea (COLREGs) offers experience-based expressions of ship behavior for human beings, helping humans recognize the scene, infer the intention, and choose rule-compliant actions. However, it is still a challenge to teach a machine to interpret the COLREGs. This paper proposes an ontological ship behavior model based on the COLREGs using knowledge graph techniques, which aims at helping the machine interpret the COLREGs rules. In this paper, the ship is seen as a temporal-spatial object, and its behavior is described as the change of object elements in time-spatial scales by using the Resource Description Framework (RDF), function mapping, and set expression methods. To demonstrate the proposed method, the Narrow Channel article (Rule 9) from COLREGs is introduced, and the ship objects and the ship behavior expression based on Rule 9 are shown. In brief, this paper lays a theoretical foundation for further constructing the ship behavior knowledge graph from COLREGs, which is helpful for complete machine reasoning of ship behavior knowledge in the future. Introduction Ship behavior refers to the movement of the ship in response to the traffic situation, which usually reflects the intention of the officer on watch (OOW) at present and influences the trajectory of the ship in the future. Hence, the recognition of ship behavior is the key to judging the intention of the OOW and predicting the movement of ships in dangerous encounters, which benefits the safety and efficiency of autonomous navigation and traffic management [1]. From the perspective of traffic management, vessel traffic service operators (VTSO) need to judge the development of the situation based on the analysis of ship behavior and identify near-misses as early as possible; from the perspective of ship navigation, the OOW or intelligent systems need to infer the intentions of other ships and predict their trajectories based on the observed ship behavior before taking evasive actions [2]. In brief, to improve the intelligence level of VTS and ships, the study of ship behavior has become an essential topic. In order to help the machine understand the behavior of the ship based on COLREGs, techniques from knowledge graphs are introduced, and a methodology of ontological ship behavior modeling is developed by using the Resource Description Framework (RDF), function mapping, and set expression methods. The concepts of ship object and ship behavior described in the COLREGs rules are incorporated in the proposed method. The ship is seen as a temporal-spatial object containing attribute elements and relational elements; the behavior, then, is described as the changes of the elements in time-spatial scales. Based on these techniques, the proposed method can be used to identify the intentions of ships and their violation behavior, which has the potential of improving the autonomy level of ships and decision support systems in VTS.
In summary, the main contribution of this paper is developing a knowledge model of ship behavior according to the rules from COLREGs, which could be used to realize ship behavior knowledge expression in the machine. The rest of this paper is organized as follows: the studies on ship behavior modeling are overviewed in Section 2; Section 3 introduces the definitions of ship objects, attribute elements, and relational elements, followed by a conceptual model of ship behavior and the formal expression of ship behaviors according to the COLREGs in Section 4; case studies, discussion, and conclusions are addressed in Sections 5-7, respectively. Literature Review Studies on ship behavior modeling fall into the following two categories: data-driven behavior modeling and knowledge-driven behavior modeling. In addition, due to the recent focus on rule-compliant collision avoidance, many researchers have studied ship behavior in encounters; these studies are also overviewed. Data-Driven Behavior Modeling Data-driven behavior modeling usually utilizes ship trajectory data to learn the ship's behavior. A group of researchers proposed to learn the characteristics of ship behavior from traffic data of a certain region and use the characteristics to predict the trajectory of the ship in the future [3]. Specifically, researchers obtained ship motion trajectories from AIS data [4], analyzed the characteristics of trajectories [5], and derived the historical distribution of ship states, which reflects the characteristics of ship behavior [6]. The characteristics of ship behavior, then, are used to predict the trajectories of the ships. Some typical methods to predict the trajectory are the Kalman filter [7], Long Short-Term Memory neural network (LSTM) [8], Bayesian networks [9], backpropagation neural network (BP) [10], etc. Some researchers focus on the identification of abnormal behavior of ships by learning from historical trajectory data. Patroumpas et al. [11] designed a method to identify the flow of ship events through AIS data and, on this basis, performed cognitive inferences on abnormal behavior of ships. Zouaoui et al. [12] introduced the Hidden Markov model and formal language for analyzing ship movement data in the harbor to identify normal and abnormal ship behavior. Lei et al. [13] proposed the MT-MAD framework, which can automatically detect abnormal behavior based on the evaluation of the ship's historical sub-trajectory data, and defines the ship's activity space, behavior sequence, and behavior characteristics. Another group of researchers concentrates on ship behavior prediction. Zissis et al. [14] used machine learning, especially artificial neural networks, as a tool to increase the ability to predict ship behavior. The developed systems can learn and accurately predict in real time the future behavior of any ship, at relatively low computing cost, which can be used as the basis of prediction for various intelligent systems, e.g., ship collision prevention, ship route planning, ship operation, etc. Perera et al. [15] proposed a ship behavior recognition module for autonomous ships using historical ship trajectory data, which is also used to predict the ship's trajectory in the future.
In short, data-driven ship behavior models are usually based on observed traffic data, e.g., AIS data, which are used to predict the trajectories of ships based on the characteristics of the majority and to identify "abnormal behaviors" that differ from the majority. However, it is not easy for these models to infer whether the behavior of a ship is rule-compliant or not (i.e., to reason over the knowledge of ship behavior). In particular, the machine lacks knowledge about rule-compliant behavior. Knowledge-Driven Behavior Modeling Knowledge-driven behavior modeling accepts that the ship cannot move freely but follows certain regulations/rules (i.e., prior knowledge). Thus, researchers intend to gain knowledge of ship behavior from semantic knowledge. Expert systems [16], expression logic [17], semantic networks [18], the Resource Description Framework (RDF) [19], ontology [20], etc., are popular methods to construct knowledge and realize knowledge reasoning. The semantic network has been a popular tool to describe ship behavior in recent years. Information loss is inevitable when researchers use trajectory data only for recognizing ship behavior [21]; thus, some researchers tried to enrich the semantic information of the trajectory. Parent et al. [22] proposed a semantic modeling method and defined the semantic model of the ship trajectory. In addition, the ontology model of ship behavior has become popular, as it can realize knowledge expression for machines. Nogueira et al. [23] used ontology tools to combine the ship's trajectory motion characteristics, such as velocity and acceleration, to express the ship's trajectory. Lamprecht et al. [24] used the ontology's knowledge organization ability and reasoning function to realize conceptual modeling of ship behavior. Wen et al. [25] introduced a dynamic Bayesian network combined with a semantic network to carry out dynamic uncertainty reasoning and knowledge expression of ship behavior in port waters. Huang et al. [26] combined machine learning and semantic behavior for pattern recognition. Adibi et al. [27] predicted ship behavior, analyzed and discovered ship behavior at the semantic level, and improved maritime supervisors' understanding of water traffic. However, these semantic models lack consideration of the influence of environmental disturbance and do not fully consider the constraints of COLREGs on ship behavior. The knowledge-driven approach presents tools to model behaviors for behavior inference. The reasoning process uses techniques such as rule-based systems, case-based reasoning, and ontological reasoning to produce activity models. Knowledge-driven approaches can represent the context of the environments at multiple levels of abstraction to create generalized and personalized behavior modeling. In particular, ontologies have been widely used to represent semantic concepts and their relationships in a structured manner. Advantages of ontologies include the ability to express knowledge in a clearly organized and structured manner, machine-readable expression, and the expressive power to support the reasoning process. Behavior Modeling of COLREGs To the best of our knowledge, traditional methods basically consider some key rules from COLREGs and design rule-based expert systems that help the machine recognize the traffic scene and apply certain reaction rules [28].
Some researchers use a question-and-answer method to construct an expert system for ship collision avoidance and give an avoidance plan in the form of questions and answers. Others focus on quantifying the COLREGs rules. Many descriptions from COLREGs are ambiguous, vague, and unquantified, which makes them difficult for the machine to use in practice. Thus, many researchers proposed quantification methods that quantify the conditions for each encounter [29] (e.g., head-on, crossing, and overtaking) and address the link between encounter conditions and reaction rules with the help of captains and fuzzy theory [30]. Xu et al. [31] clarified the concepts of "head-on ship", "give-way ship", "overtaking", "crossing", and "heading" according to COLREGs and set up and designed a corresponding reward function for each concept. With a deep learning algorithm, the optimal collision-avoidance strategy is finally obtained. He et al. [32] put forward a COLREGs quantitative model by combining the ship domain model and the ship heading control system based on the four-stage theory of the ship encounter process. Eriksen et al. [33] introduced a three-layer hybrid collision-avoidance (COLAV) system for unmanned surface vehicles, which complies with Rules 8 and 13 to 17 of the COLREGs. The performance of the COLAV system was tested by numerical simulations of three different challenge scenarios (i.e., head-on, crossing, and overtaking). These studies can be used to develop a MASS that follows the rules inputted by developers, but it is challenging to enumerate all the possible scenarios and reaction rules. To develop a practical rule-compliant ship, the developers need to enumerate the scenes that one ship might encounter and design the reaction rules. However, it is almost impossible to address all the scenes one ship might encounter. Thus, adding additional reaction rules becomes necessary. For example, in a crossing encounter, one ship that is on the port side of another ship is usually seen as the "give-way" ship, whereas if the first ship is a fishing ship, the ship becomes the "stand-on" ship. To handle this exception, additional reaction rules would be needed, which address the special arrangements when the ship encounters fishing ships. However, it is hard to list the endless exceptions. In this paper, we propose another way to handle this issue. Instead of humans adding patches for exceptions, we propose an ontological knowledge model that helps the machine to deconstruct the conditions and reactions, extract the common concepts, and define the relationships among concepts. With the help of the ontological model, the machine not only can perform the reactions based on the explicit rules but also can infer the implicit rules, i.e., interpretations of rules from COLREGs. This offers a new line of thought to develop a rule-compliant MASS. Conceptual Modeling of Ship Object from COLREGs The COLREGs, formulated by the International Maritime Organization (IMO), define different types of ships, different scenes one ship might encounter, and the obligations of the ship in these scenes [34]. The ship is the core concept, and the formal expression (i.e., formulaic and structured expression) of the ship object introduced in this section is a prerequisite for the machine to understand the ship behavior described by COLREGs.
Conceptual Modeling of Ship Object Ships usually have many spatiotemporal characteristics, e.g., velocity, course, position, etc., which implies that the ship is a spatiotemporal object. Thus, in this paper, the ship object is defined as Definition 1: Definition 1. A ship object is a spatiotemporal object with characteristics in time and space scales, which can be expressed in the form of data, models, rules, logic, or knowledge by computers in cyberspace. In general, one ship has many characteristics helping us to distinguish one ship from another, and these characteristics are usually named "attributes" of the ship. By the types and values of the attributes, one can distinguish the ship from different objects. Among these attributes, the attributes that describe the characteristics of the ship independent of the surrounding objects, e.g., the ship name, position, velocity, type, etc., are named "attribute elements" in this paper, whereas other attributes rely on surrounding objects to express their characteristics, e.g., the bearing of objects, the relative distance between objects, the relative speed, etc., and are named "relational elements". The formal definitions of attribute elements and relational elements are shown as Definition 2 and Definition 3: Definition 2. The ship's attribute elements are the expression of the specific characteristics of the ship object that are independent of other objects, e.g., ship name, velocity, course, flag state, etc. Definition 3. The ship's relational elements describe the association relationships between objects (e.g., ship objects and environment objects), e.g., relative velocity, relative heading, and relative location between the ship and the environment or between one ship and another ship, etc. In order to facilitate the understanding of the definitions in this paper, Figure 1 shows that the entities (e.g., ships, channels, etc.) in the physical space are extracted and modeled in cyberspace, named objects. Each object has attribute elements and relational elements that help us distinguish one from another. These elements might vary as time moves on, such as the course and velocity of ship A, and the relative distance and relative bearing from ti−1 to ti+1. According to Definitions 1-3, the ship object has attribute elements and relational elements that might vary as time moves on or with changes of position. For instance, in an encounter scenario, relational elements (e.g., relative distance) of the ship would change as time moves on; in a curved channel, attribute elements (e.g., course) of the ship will vary according to the curvature of the channel. In brief, the values of attribute elements and relational elements have a time or spatial "stamp". Thus, each ship object can be expressed in the form of a triple-element model:

shipObject = (Attribute elements, Relational elements, Time_Space) (1)

where shipObject represents the ship object, Attribute elements represents the attribute elements of the object, Relational elements represents the relational elements of the object, and Time_Space represents the time and space scales. Each element of the ship object can be formally expressed by a cell containing "Type", "Value", and "t", named "Object.Parameter" and defined as:
Object.Parameter = (Type, Value, t) (2)

where Object.Parameter represents the smallest unit describing the elements of the specific ship object (say "Object"), "Type" represents the type of attribute element or relational element of the specific ship object, "Value" represents the value of the "Type", and t represents the moment when the "Type" has the "Value". Based on these definitions, all characteristics of one object (with attribute elements and relational elements) can be collected in a set of Object.Parameter cells, i.e.,

Object.Parameter = {(Type_1, Value_1, t), (Type_2, Value_2, t), ...} (3)

Example 1. Take the scene in Figure 1 as an example. The Parameter of ship A can be expressed as:

shipA.Parameter = {(Course, c_i, t_i), (Velocity, v_i, t_i), ...}

Attribute Elements According to COLREGs, the ship object has various attribute elements, and these attribute elements might influence the role of the ship and its obligations in a certain traffic scene. According to the features of these elements, attribute elements can be categorized into two types, namely static attribute elements and dynamic attribute elements; see Figure 2. The static attribute elements describe the attributes that are usually relatively invariant, such as ship name, ship type, ship size, etc., while the dynamic attribute elements are the attributes that might change over time, such as the ship's position, heading, velocity, draft, etc. Relational Elements According to COLREGs and Definition 3, the ship also has many relational elements; some relational elements, such as the position and relative distance between two ships, can be used to determine the encounter scene of the two ships (overtaking, crossing, and head-on scenes). Additionally, the obligation of one ship might change as the relational element changes. For example, when two ships are in a crossing scene, one of the ships has the obligation to give way to the other ship. When the two ships have passed by, this obligation is relieved. The relational elements between objects are categorized into three types, namely spatial relations, temporal relations, and semantic relations. (1) Spatial relational elements The spatial relations among the objects in COLREGs include topological, bearing, and distance relations. The region connection calculus model [35] has been introduced to describe the topological relation between objects, e.g., ship object-ship object, ship object-area object, and area object-area object. The topological relations include separation, inclusion, intersection, coincidence, inscribed, and circumscribed, which are shown in Figure 3a-f. According to the statements from the COLREGs, the topological relation between two ship objects includes separation and circumscribed. The topological relation between one ship object and one area object includes the following five types: separation, inclusion, inscribed, circumscribed, and intersection. The topological relation between two area objects includes the following six types: separation, inclusion, inscribed, circumscribed, intersection, and coincidence. The bearing relation mainly describes the relative bearing between two ships. This paper constructs the ship coordinate system, which forms four directional regions by the intersection of the ship's headline and the ship's transverse line. For example, the coordinate systems of ship A and ship B are shown in Figure 4: ship B is 45° forward of the starboard beam of ship A, while ship A is 30° forward of the port beam of ship B. The distance relation describes the distance between two ship objects, including a quantitative expression and a qualitative expression.
The quantitative expression refers to the Euclidean distance between two ship objects, as shown in Equation (6):

D = √((x_A − x_B)² + (y_A − y_B)²) (6)

where D represents the distance between ship A and ship B, and (x_A, y_A), (x_B, y_B) represent the position coordinates of ship A and ship B. According to COLREGs (Rule 7, Rule 8, Rule 13, Rule 15), the relative distance is divided into the following four stages: safety distance, urgent situation, risk of collision, and collision. The criteria for dividing these stages depend on the encounter scenes. Readers interested in studies on the quantitative analysis of these criteria are encouraged to see [36]. Although the quantitative analysis of the scenes is not the focus, the qualitative result, i.e., the stage of the encounter, is crucial for the subsequent deduction. Thus, a qualitative expression of the relative distance is introduced:

D_t = safety distance, if D > D_l; risk of collision, if D_n < D ≤ D_l; urgent situation, if D_m < D ≤ D_n; collision, if D ≤ D_m (7)

where D represents the distance between ship objects, D_t is the qualitative expression of "D", and D_m, D_n, D_l are the thresholds that delimit the distance stages between ship objects. (2) Time relational elements The time relation is the expression of the ship's behavior and events in time scales, which usually takes two forms, namely points and periods. A time point describes a specific moment, for instance, the time point when the ship performs a left turn, the time when two ships collide, etc. A time period is a range of time, for instance, the period when the ship is anchored at the anchorage, when the ship passes through the narrow channel, the time of the ship in the waterway, etc. In Rule 13 of COLREGs, the definition of the overtaking scene between two ships is given as follows: "A vessel shall be deemed to be overtaking when coming up with another vessel from a direction more than 22.5 degrees abaft her beam, that is, in such a position with reference to the vessel she is overtaking, that at night she would be able to see only the stern light of that vessel but neither of her sidelights." In this rule, there is actually a time relationship. For instance, the overtaking "begins at" the moment of catching up with the previous ship and "ends at" the time when the two ships have passed by. From COLREGs, the time-related concepts can be grouped into five types, namely "earlier than", "later than", "between", "beginning at", and "ending at", which can be described by time points or time periods; see Table 1 for details. (3) Semantic relational elements Semantic relational elements are used to describe the semantic relationships between ship objects. For example, for the message that the name of ship A is "007", there is a relationship ("hasName") between ship A and "007". We call "hasName" a semantic relational element; ship A is the domain of the semantic relational element, and "007" is its range. The semantic relationship is described as a triple structure <domain, relation, range> using the Resource Description Framework (RDF). The COLREGs contain many semantic relations, and some typical semantic relations from COLREGs are summarized in Table 2.
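As an illustration of the <domain, relation, range> triple structure, the following sketch encodes two of the semantic relations above with the rdflib library. The namespace URI and the isGiveWayShipOf property are hypothetical placeholders for an ontology vocabulary, not identifiers defined by the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF

SHIP = Namespace("http://example.org/colregs#")  # hypothetical ontology namespace
g = Graph()

# <domain, relation, range> triples: ship A hasName "007",
# and (hypothetically) ship A is the give-way ship with respect to ship B
g.add((SHIP.shipA, RDF.type, SHIP.ShipObject))
g.add((SHIP.shipA, SHIP.hasName, Literal("007")))
g.add((SHIP.shipA, SHIP.isGiveWayShipOf, SHIP.shipB))

for domain, relation, value_range in g:
    print(domain, relation, value_range)
```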
Conceptual Modeling of Ship Behavior and Its Expression Ship behavior is another important concept from the COLREGs. Specifically, COLREGs address the promoted and non-promoted behavior in different traffic scenes with different ship objects. According to Section 3, the ship entity in COLREGs is expressed as a ship object, and its element composition is expressed as attribute elements and relational elements for the machine. Based on that, ship behavior can be defined as the changes of elements in time and space scales, and the formal expression of ship behavior is presented in this section. Conceptual Modeling of Ship Behavior In general, "behavior" refers to the activities of spatiotemporal objects caused by external influences or internal actions. In order to clearly classify and model the behavior of ship objects, and further express and reason about ship behavior, the definition of ship behavior is introduced as Definition 4: Definition 4. Ship behavior refers to the change of the ship object's attribute elements and relational elements in time and space scales. Based on Definition 4, ship behavior can be divided into attribute behavior and relational behavior; the definitions are introduced as Definition 5 and Definition 6: Definition 5. Attribute behavior refers to the change of the ship object's attribute elements in time and space scales. Definition 6. Relational behavior refers to the change of the ship object's relational elements in time and space scales. The ship behavior is formulated as:

shipBehavior = (Attribute behavior, Relational behavior, Time_Space) (10)

Similarly to Equation (2), each characteristic of ship behavior (either attribute behavior or relational behavior) can be expressed by a cell, named "Behavior.Parameter":

Behavior.Parameter = (dType, dValue, T) (11)

where "dType" represents the type of change in the specific object element, "dValue" is the amount of change in the value of the same element at different times, and the value of "dValue" can be calculated by Value_ti − Value_ti−1. T represents the period when the "dType" has the "dValue", and T can be represented by T = [ti−1, ti]. The semantic meaning of the behavior is obtained as BehaviorSemantic = g(dType, dValue), where g(·) is the function that takes as input the "dType" that has a non-empty "dValue" and outputs the semantic meaning of the behavior (BehaviorSemantic); see Table 3. The Object.Parameter can be expressed as in Example 2: Example 2. Take the scene in Figure 1 as an example. The shipA.Parameter is expressed as:

shipA.Parameter = {(Velocity, v_i−1, t_i−1), (Velocity, v_i, t_i), ...}, with v_i > v_i−1 (12)

According to Equation (11), the behavior of ship A can be expressed as:

shipA.Behavior.Parameter = (dVelocity, v_i − v_i−1, [t_i−1, t_i]) (13)

Equation (13) means that ship A accelerated from time ti−1 to ti. Formal Expression of Ship Behavior Since machines can only understand characterized, formulaic, and structured knowledge, it is necessary to express the knowledge of ship behavior in a way machines can "read"; such a process is named "formal expression". Thus, the definition of formal expression of ship behavior is shown as Definition 7: Definition 7. Formal expression of ship behavior is a formulaic and structured expression of ship behavior using methods such as functions and sets. Attribute Behavior According to Definition 5, attribute behavior is the change of the attribute elements, which includes the changes of the ship's position, velocity, course, signal, etc. Some typical attribute behaviors are shown as follows: - The change of the velocity attribute implies the acceleration or deceleration attribute behavior; - The change of the course attribute can be divided into the turning-left and turning-right attribute behavior; - The change of the signal attribute refers to the signal number, color, and shape that change in time scales. Based on Equation (11), the attribute behavior can be formulated as:

Behavior.Parameter_Attribute = (dType, dValue, T), with Object.Parameter_Attribute = (Type, Value, t) (14)

where Object.Parameter_Attribute represents the smallest unit describing the attribute elements of the specific ship object, "Type" represents the type of attribute element of the specific ship object, "Value" represents the value of the "Type", and t represents the moment when the "Type" has the "Value".
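The g(·) mapping from change cells to behavior semantics can be sketched as a small function over consecutive parameter cells. This is a hypothetical illustration of the kind of mapping collected in Table 3, with only the velocity and course cases shown; the class and function names are not from the paper.

```python
from dataclasses import dataclass

@dataclass
class ParameterCell:
    """An Object.Parameter cell (Type, Value, t) as in Equation (2)."""
    type: str
    value: float
    t: float

def behavior_semantic(prev: ParameterCell, curr: ParameterCell) -> str:
    """A sketch of g(.): map a change cell (dType, dValue, T) to its meaning."""
    if prev.type != curr.type:
        raise ValueError("cells must describe the same element type")
    d_value = curr.value - prev.value  # dValue = Value_ti - Value_ti-1
    if prev.type == "Velocity":
        return "Acceleration" if d_value > 0 else "Deceleration" if d_value < 0 else "ConstantSpeed"
    if prev.type == "Course":
        return "TurnRight" if d_value > 0 else "TurnLeft" if d_value < 0 else "ConstantCourse"
    return "Unknown"

# Example 2: ship A's velocity rises between t_{i-1} and t_i -> acceleration
print(behavior_semantic(ParameterCell("Velocity", 10.0, 1.0),
                        ParameterCell("Velocity", 12.0, 2.0)))
```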
where Object.Parameter_attribute represents the smallest unit describing the attribute elements of the specific ship object; "Type" represents the type of attribute element of the specific ship object; "Value" represents the value of the "Type"; and t_i represents the moment at which the "Type" has the "Value".

Relational Behavior
In COLREGs, the relational behaviors (e.g., variable relative distance and bearing) of ship objects are mainly used to determine the criteria of certain scenes and the ships' obligations. Some typical relational behaviors are as follows:
1. The change of the relative distance relation implies the "near" or "far away" relational behavior;
2. The change of the relative bearing relation can be divided into the bearing angle turning smaller and the bearing angle turning bigger.

Based on Equation (12), the relational behavior can be formulated as

[Object1, Object2].Behavior_Relation = g(f([Object1, Object2].Parameter_relation))

where [Object1, Object2].Parameter_relation represents the smallest unit describing the relational elements between the specific ship objects; "Type" represents the type of relational element; "Value" represents the value of the "Type"; and t_i represents the moment at which the "Type" has the "Value".

Case Analysis
In order to demonstrate the proposed models, Rule 9 (the Narrow Channel clause) of COLREGs is introduced (the content of Rule 9 is shown in Table A1), and the ontological behavior model based on Rule 9 is used. The Narrow Channel clause addresses the promoted and non-promoted behavior when a ship object enters, leaves, or navigates in a narrow channel.

Ontological Expression of Ship Object Based on Rule 9
By analyzing the text of Rule 9, there are two types of objects, namely the ship object and the waterway object; specific instances, such as sailboats, ships less than 20 m in length, vessels engaged in fishing, and the narrow channel, are shown in Table 4. For the ship object, the attribute elements contain static attributes and dynamic attributes, which are listed in Table 5:
1. The static attributes include the ship's type, call sign, size, etc.
2. The dynamic attributes include time-varying attributes, such as position, velocity, course, draft, and sound signal.

For the waterway object, the attribute elements also include static attributes and dynamic attributes, which are shown in Table 5:
1. The static attributes of a narrow channel are its name, the center position of each water-depth area, the width of the navigable water area, and the boundary information of the narrow channel.
2. The dynamic attributes of a narrow channel are its flow velocity, flow direction, and visibility.
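To make the Behavior.parameter formulation above concrete, the following is a minimal sketch, assuming plain Python tuples for the (Type, Value, t) cells, of how an f(·)/g(·) pipeline could turn two timestamped attribute snapshots into behavior cells and semantic labels; the attribute names and label strings are illustrative assumptions in the spirit of Table 3, not taken from COLREGs.

```python
# A parameter cell is (Type, Value, t); a behavior cell is (dType, dValue, [t_prev, t_now]).
def f(prev: dict, now: dict, t_prev: float, t_now: float):
    """Find every attribute Type whose dValue is non-empty between t_prev and t_now."""
    cells = []
    for attr, value in now.items():
        d_value = value - prev[attr]
        if d_value != 0:                       # non-empty change
            cells.append((attr, d_value, [t_prev, t_now]))
    return cells

def g(cells):
    """Map behavior cells to semantic labels (BehaviorSemantic)."""
    semantics = []
    for d_type, d_value, period in cells:
        if d_type == "velocity":
            semantics.append("accelerate" if d_value > 0 else "decelerate")
        elif d_type == "course":
            semantics.append("turn starboard" if d_value > 0 else "turn port")
    return semantics

# Ship A between t_{i-1} = 0 s and t_i = 60 s: velocity rises, course unchanged.
prev = {"velocity": 8.0, "course": 45.0}
now = {"velocity": 10.0, "course": 45.0}
print(g(f(prev, now, 0, 60)))   # ['accelerate']
```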
Table 5. Attribute elements of water traffic objects in the Narrow Channel clause.

Ship, static attributes:
  (Name_Ship, h, ti): "Ship's name is 'h' at ti"
  (MMSI, i, ti): "Ship's call sign is 'i' at ti"
  (Size, j, ti): "The value of ship size is 'j' at ti"
  (Type_Ship, k, ti): "The value of ship type is 'k' at ti"
Ship, dynamic attributes:
  (Location, a, ti): "Ship's location is 'a' at ti"
  (Velocity, b, ti): "Ship's velocity is 'b' at ti"
  (Course, c, ti): "Ship's course is 'c' at ti"
  (Draft, d, ti): "Ship's draft is 'd' at ti"
  (Sound, e, ti): "Ship's sound signal is 'e' at ti"
Narrow channel (NC), static attributes:
  (Name_NC, l, ti): "The narrow channel's name is 'l' at ti"
  (Boundary_NC, m, ti): "The boundary position of the narrow channel is 'm' at ti"
  (Width_NC, n, ti): "The navigable water width of the narrow channel is 'n' at ti"
  (Location_NC, o, ti): "The center position of each water-depth area of the narrow channel is 'o' at ti"
Narrow channel (NC), dynamic attributes:
  (Visibility, f, ti): "Visibility in the narrow channel is 'f' at ti"
  (Flow_velocity, g, ti): "Flow velocity in the narrow channel is 'g' at ti"

According to Section 3.2.2, the relational elements among these objects (the ships and the waterway) can be analyzed from three aspects: time, space, and semantics. Table 6 lists the objects, the relationships between them, and the semantic expressions of the relationships.
1. The time relations between the ship and the narrow channel include the time before the ship enters the narrow channel, the time after it enters, and the period during which it moves within the narrow channel.
2. The spatial topological relations include the ship being outside the narrow channel and the ship being in the narrow channel, e.g., in the elbow waters or boundary waters of the narrow channel.
3. The semantic relations include ships avoiding anchoring in and crossing narrow channels; the spatial position and distance relations between ships are expressed by specific numerical values. Further semantic relations include one ship attempting to overtake another, the other ship agreeing to or doubting the overtaking, sailboats and ships less than 20 m in length not impeding ships that can only navigate safely within the narrow channel, and vessels engaged in fishing not impeding any vessel navigating safely within the narrow channel.
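Before turning to Table 6, the following is a small sketch of how a spatial topological relation such as (Topology.Inclusion, 1, ti) versus (Topology.Separation, -1, ti) can be derived from a ship position and a channel boundary polygon with the shapely library; the coordinates are illustrative assumptions.

```python
from shapely.geometry import Point, Polygon

# Illustrative narrow-channel boundary and ship position at time ti.
channel = Polygon([(0, 0), (10, 0), (10, 2), (0, 2)])
ship_pos = Point(4.2, 1.1)

if channel.contains(ship_pos):
    relation = ("Topology.Inclusion", 1, "ti")     # ship is in the channel
else:
    relation = ("Topology.Separation", -1, "ti")   # ship is outside
print(relation)
```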
Table 6. Relational elements of objects in the Narrow Channel clause.

[Ship, NC], time relations:
  (Time.Before, 1, ti): "Before the ship enters the narrow channel"
  (Time.After, -1, ti): "After the ship enters the narrow channel"
  (Time.Between, 2, [ti, ti+1]): "The time period during which the ship is sailing in the narrow channel"
[Ship, NC], spatial topological relations:
  (Topology.Separation, -1, ti): "The ship is outside the narrow channel"
  (Topology.Inclusion, 1, ti): "The ship is in the narrow channel"
  (Topology.Inclusion_starboard, 12, ti): "The ship is in the narrow channel on its starboard side"
  (Topology.Inclusion_elbow, 13, ti): "The ship is sailing in the elbow waters of the narrow channel"
[Ship, NC], semantic relations:
  (Semantic.Avoid_anchoring, 1, ti): "Ships avoid anchoring in the narrow channel"
  (Semantic.Avoid_crossing, 1, ti): "Ships avoid crossing the narrow channel"
[Ship A, Ship B], spatial relations:
  (Relative_bearing, a, ti): "The bearing relation between ship A and ship B"
  (Relative_distance, b, ti): "The distance relation between ship A and ship B"
[Ship A, Ship B], semantic relations:
  (Semantic.Overtaking_Port, 1, ti): "Ship A attempts to overtake on the port side of ship B"
  (Semantic.Overtaking_Starboard, 2, ti): "Ship A attempts to overtake on the starboard side of ship B"
  (Semantic.Agree_Overtaking, 3, ti): "Ship B agrees to ship A overtaking"
[Shipsailing, Shipin], semantic relation:
  (Semantic.Avoid_impede, 1, ti): "Sailing boats should not impede ships that can only navigate safely in the narrow channel"
[Ship ≤ 20 m, Shipin], semantic relation:
  (Semantic.Avoid_impede, 1, ti): "Ships less than 20 m in length shall not impede ships that can only navigate safely in the narrow channel"
[Shipfishing, Shipin], semantic relation:
  (Semantic.Avoid_impede, 1, ti): "Vessels engaged in fishing shall not impede any vessel navigating safely in the narrow channel"

Formal Expression of Ship Behavior Based on Rule 9
The text of Rule 9 addresses the attribute and relational elements of the objects. Table 7 lists the attribute elements of one ship at different moments in time. By comparing the attribute elements at different moments, the ship's attribute behavior is inferred; the attribute behavior is given in the last column of the table. Based on Table 7, the machine can reason about the behavior of the ship by analyzing or comparing the values of position, velocity, heading, and other ship attributes in a narrow channel at different moments. Specifically, the machine can judge whether the ship has moved, accelerated, decelerated, or turned in the period between two moments.

Table 8 lists the relational elements of one ship with respect to the other objects (i.e., the other ship and the waterway). By comparing the relational elements at different moments, the ship's relational behavior is inferred and expressed semantically. Based on Table 8, the machine can reason about the behavior of the ship by analyzing the topological relations, from which spatial topological behaviors including sailing in, sailing out, and crossing can be inferred. By analyzing the spatial bearing and distance relations between ships, the overtaking and crossing behavior between ships in the narrow channel can be inferred.

Behavior Reasoning Based on the Proposed Method
Based on the above formal expression of the behavior of ships under the Narrow Channel clause of COLREGs, the formal expression of ship behavior can be applied in conjunction with AIS data and nautical chart data.

In Figure 5, we introduce a scene in which two ships encounter each other in a narrow channel. Ship B is navigating in the starboard channel and moving towards the north; ship A is navigating in the port channel and moving towards the south.

By analyzing the changes of the attribute elements and relational elements of ship A and ship B at the moments t1, t2, and t3, and by expressing the attribute behavior and relational behavior of the ships formally in this way, the machine can finally judge whether the ship behavior complies with the COLREGs. According to the above research on the expression of ship objects and ship behavior, the attribute elements and relational elements of the ship objects, and the ships' attribute behavior and relational behavior, are expressed as follows:

(a) The expression of the attribute elements of ship A;
(b) The expression of the attribute elements of ship B;
(c) The expression of the relational elements between ship A and ship B;
(d) The expression of the relational elements between ship A and the narrow channel;
(e) The expression of the relational elements between ship B and the narrow channel;
(f) The expression of the attribute behavior of ship A.

According to the changes of the velocity and course of ship A, the semantics of the ship's behaviors are expressed as "accelerate" and "keep course" from time t1 to t2, and "decelerate" and "turn starboard" from time t2 to t3. From time t2 to t3, the course of ship A is perpendicular to the total flow direction of the narrow channel, which implies a spatial topological behavior of "crossing" between ship A and the narrow channel. Therefore, ship A violates the COLREGs rule that "ships should avoid crossing a narrow channel".
(g) The expression of the attribute behavior of ship B.

According to the changes of the velocity and course of ship B, the semantics of the ship's behaviors are expressed as "decelerate" and "keep course" from time t1 to t2, and "keep velocity" and "keep course" from time t2 to t3. Ship B is "anchored" in the narrow channel from time t2 to t3. Therefore, ship B violates the COLREGs stipulation that "ships should avoid anchoring in the narrow channel".

(h) The expression of the relational behavior of ship A and ship B.

According to the changes of the relative distance and relative bearing between ship A and ship B, the semantics of the ship behaviors are expressed as "near" and "move to bow" from time t1 to t2, and "far away" and "move to stern" from time t2 to t3.

(i) The expression of the relational behavior of ship A and the narrow channel.

According to the changes of the topology relation between ship A and the narrow channel, the semantics of the ship behaviors are expressed as "sailing in" from time t1 to t2 and "keep topology" in the narrow channel from time t2 to t3.

(j) The expression of the relational behavior of ship B and the narrow channel.

According to the changes of the topology relation between ship B and the narrow channel, the semantics of the ship behaviors are expressed as "keep topology" in the narrow channel from time t1 to t2 and "keep topology" from time t2 to t3.

With the above expressions of the attribute behavior of ship A and ship B, and of the relational behavior between ship A and ship B, between ship A and the narrow channel, and between ship B and the narrow channel from t1 to t3, we can clearly judge whether the ship behavior complies with COLREGs; see Table 9.

Discussion
With the development of knowledge engineering, knowledge expression has been widely explored and utilized in multiple knowledge-driven tasks, significantly improving their performance. In this section, we first give a summary of this research and then summarize the advantages and disadvantages of the proposed method.

Discussion on the Case Study
In this paper, we provide a broad overview of currently available techniques, including RDF, function mapping, and set expression methods. The proposed method imitates human understanding ability, which makes it possible to incorporate prior knowledge to assist machine recognition.

In Section 3, we abstractly express the ship objects in COLREGs as attribute elements and relational elements, and in Section 4, we express the dynamic changes of the ship object's attribute and relational elements over time as ship behavior. The expression method based on RDF, functions, and sets is similar to human thinking and is thus well suited to our ontological modeling of COLREGs ship behavior knowledge. Based on the ship behavior ontology method of Sections 3 and 4, we use Rule 9 of COLREGs for example verification in Section 5, and the results show that our method can formally express the ship behavior prescribed by COLREGs.

However, this research is only initial work towards machine reasoning over ship behavior knowledge. Building on it, the ship behavior knowledge graph, the COLREGs knowledge graph, and the knowledge graph of water traffic scenes can be constructed in the future to realize the autonomous recognition of water traffic scenes, judge the water traffic situation, reason about ships' violations of COLREGs, and support decision-making for MASS.
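As a toy illustration of the compliance judgment summarized in Table 9, the sketch below checks the inferred behavior semantics of each object pair against two Rule 9 constraints; the rule list and label strings are simplified assumptions for illustration, not an implementation of the full clause.

```python
# Behavior semantics inferred for each object pair over [t1, t3] (from the case above).
behaviors = {
    "shipA-channel": ["sailing in", "crossing"],
    "shipB-channel": ["keep topology", "anchored"],
}

# Simplified Rule 9 constraints: semantics that constitute a violation.
forbidden = {"crossing": "ships should avoid crossing a narrow channel",
             "anchored": "ships should avoid anchoring in the narrow channel"}

for pair, semantics in behaviors.items():
    violations = [forbidden[s] for s in semantics if s in forbidden]
    verdict = "violates" if violations else "complies with"
    print(f"{pair}: {verdict} Rule 9", violations)
```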
Advantages and Disadvantages of the Proposed Method
(1) Advantages of the method
In this paper, ship behavior based on COLREGs is modeled as the change of entity elements on the time and space scales using RDF, function mapping, and set expression methods. The advantages of this method are as follows: first, it can capture hidden semantic information in COLREGs; second, it can significantly improve the accuracy of knowledge recognition; finally, it can simulate human cognitive ability, which makes it possible to incorporate prior knowledge to assist recognition.

(2) Disadvantages of the method
On the basis of Sections 3-5, we realize the formal expression of the ship behavior ontology model in COLREGs, but the ontology model still has some deficiencies. The knowledge model of ship behavior established in this paper is still at an early stage in the maritime industry and has not yet formed a unified industry standard. Its disadvantages are that it has not yet solved problems such as the dependence on domain experts and poor generalization ability. On the one hand, the method requires manual modeling of ship behavior knowledge, so its modeling efficiency is low; on the other hand, semantic calculation and reasoning methods are still missing.

Future Work
The formal expression of ship behavior is the basis for developing autonomous navigation systems that support scene cognition, intention inference, and rule-compliant actions. This paper studies the formal expression of ship behavior based on COLREGs. However, there is still some distance to go before the machine can truly realize autonomous recognition of the navigation scene, autonomous reasoning about the ship's intention, and autonomous judgment of whether the ship's behavior complies with the COLREGs rules. Based on the research in this paper, we give several directions for future research:

(1) Constructing the ontology of ship behavior
Ontology plays an important role in enriching the semantic information of things and realizing knowledge sharing. Based on the formal expression of ship objects and ship behavior in this paper, the ship behavior ontology can be further constructed to form a knowledge base with semantic information, and custom SWRL rules can be input into an ontology inference engine to realize the machine's autonomous cognition of ship behavior.

(2) Constructing the ontology of traffic scenes
COLREGs are the norms of ship behavior in different traffic scenarios, and ships should take the corresponding actions according to the scenario. A traffic scene ontology can therefore be constructed based on COLREGs, with custom SWRL rules input into an ontology inference engine to realize the machine's autonomous cognition of traffic scenarios.

(3) Constructing the knowledge graph of ship behavior
Based on the formal expression of ship behavior in this article, and on the ship behavior ontology and traffic scene ontology constructed in future research, the knowledge graph of ship behavior can be further constructed. The machine can then be queried, and it can infer whether actions are COLREGs-compliant in different scenarios.
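As a sketch of what such a custom SWRL rule could look like, the snippet below uses the owlready2 Python library; the ontology IRI, class names, and property names are hypothetical assumptions, and a reasoner (e.g., Pellet via owlready2's sync_reasoner_pellet) would be needed to actually apply the rule.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, Imp

onto = get_ontology("http://example.org/ship_behavior.owl")  # hypothetical IRI

with onto:
    class Ship(Thing): pass
    class NarrowChannel(Thing): pass
    class crosses(ObjectProperty): pass      # hypothetical behavior property
    class ViolatingShip(Ship): pass

    # SWRL rule: a ship crossing a narrow channel violates Rule 9.
    rule = Imp()
    rule.set_as_rule("Ship(?s), NarrowChannel(?c), crosses(?s, ?c) -> ViolatingShip(?s)")
```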
Conclusions
For the development of rule-compliant maritime autonomous surface ships (MASS), understanding the Convention on the International Regulations for Preventing Collisions at Sea (COLREGs) is fundamental for the machine. Existing expert systems for MASS do not teach the machine to understand the COLREGs rules but instead list condition-and-reaction rules for endless exceptions. To handle this issue, this paper proposed an ontological method to model ship behavior, as a first step towards helping the machine interpret the COLREGs in the manner of humans.

The attributes of the ship are categorized into "attribute elements" and "relational elements", and ship behaviors are then defined as the changes of "attribute elements" (i.e., attribute behavior) and "relational elements" (i.e., relational behavior). Based on these definitions, the attribute elements, relational elements, attribute behavior, and relational behavior are formally expressed using the Resource Description Framework (RDF), function mapping, and set expression methods. By introducing Rule 9 of COLREGs, this paper demonstrates the performance of the proposed method, which lays a theoretical foundation for the structural modeling and semantic understanding of ship behavior.

The proposed method offers a novel way to develop rule-compliant machines, which is promising for the development of MASS. This paper is a first step towards rule-compliant MASS, and the proposed model is still at the conceptual and logical levels. Thus, it is necessary in the future to further construct the ship behavior ontology, construct the knowledge model driven by ship behavior, and apply it to actual cases.

Figure 1. Abstract schematic diagram of the ship entity.
Figure 2. Attribute elements of the ship entity.
Figure 4. Bearing relational elements of ship objects.
Figure 5. Application of the formal expression of ship behavior in narrow channel scenarios.
Table 1. Time relational elements of ship objects in COLREGs ("between" is expressed as Between(·,·), "beginning at" as Begin with, and "ending at" as End with).
Table 2. Semantic relational elements of ship objects in COLREGs.
Table 3. The semantics of behavior.
Table 4. Objects under the Narrow Channel clause.
Table 7. Attribute elements and attribute behaviors of objects in the Narrow Channel clause.
Table 8. Ship relational behaviors in the Narrow Channel clause.
Table 9. Behavior of objects in the narrow channel.
Comprehensive Analyses of the Spatio-Temporal Variation of New-Energy Vehicle Charging Piles in China: A Complex Network Approach

This study collects data on electric vehicle (EV) charging piles for various provinces in China and analyzes the development of the network of EV chargers from the perspective of a complex network. Features of the distribution of EV charging piles for the period from May 2016 to April 2019 and the spatio-temporal variations across provinces are thus analyzed. The study then transforms time-series data of the EV charging piles into a complex network by applying a visibility graph, uses several clustering methods to categorize different provinces, and predicts the future development of the network of EV charging piles in China. Additionally, the distribution of EV charging piles across time is analyzed for a combination of national policies and new-energy vehicles. The results of the study will guide provincial governments in creating policies that develop relevant industries progressively and promote the sustainable development of EVs and the green-energy industry.

INTRODUCTION
Electric vehicles (EVs) are universally recognized as a practical solution to the problems of reducing carbon emissions and improving air quality in the global transportation sector [1]. Many powerful economies are shifting their vehicle preference to EVs for eco-friendly purposes, and the development of the supporting infrastructure of EVs is progressing rapidly. However, short traveling distances and limited battery volumes due to current technical barriers are holding back the expansion of EVs, and the construction of EV chargers is thus considered the most effective way of promoting the adoption of EVs [2]. In recent years, European countries, along with the United States, have expanded their distributions of EVs and EV charging piles [3], as shown in Figure 1. From the viewpoint of EV owners, the locations of EV charging piles are critical for the convenience of recharging EV batteries. Home chargers have the lowest cost, while public charging piles are becoming a necessary option for off-home charging [4]. Furthermore, public charging piles are mostly high power and provide faster charging in urban areas, which are more suitable for high-power charger installation than homes [5].

While wealthy countries are developing their EV infrastructure, China, a country with a large population and massive land area, is also creating a nationwide distribution of EVs. The charging pile industry is in full development with the expansion of the investment blueprint of new infrastructure in China. As new-energy vehicles are being promoted in China, the construction of charging piles, as important infrastructure, has gradually attracted attention. The central government, provinces, and cities have successively introduced preferential policies and measures that promote the development of the charging pile industry, and the construction of charging piles in China has undergone explosive growth, from 33,000 piles in 2014 to 777,000 piles in 2018, which is growth of more than 2,000% in 4 years. Statistics show that the ratio of new-energy vehicle ownership to the number of public charging piles decreased in 2017 compared with before 2012, but the rate of construction of charging piles is still not keeping up with the manufacture of new-energy vehicles.
China has built 55.7% of the world's new-energy charging piles, but the shortage of public charging resources and user complaints about charging problems continue. Additionally, there are many other problems; e.g., the layout of the charging piles is unreasonable, there is an imbalance between supply and demand, and the time required for investment to turn into profit is uncertain. This paper takes a complex-network perspective in studying the growing distribution of EVs and charging piles in China. The study investigates the historical development of China's new-energy vehicles and charging piles from May 2016 to April 2019 and how local policies have affected the distribution of EVs in China. The data are analyzed by adopting time-series visualization, complex networks, and several clustering methods. Combining the model results with the policies and characteristics of the provinces, we believe that the results of this study will provide a reference for the rapid development of charging piles in China.

The remainder of the paper is organized as follows. Literature Review reviews previous research on new-energy vehicles and piles. Data and Model presents the models and methods used in the paper, including the methodology and data collection. Results and Discussion presents the estimation results and empirical analysis, and examines the features and reasons that lead to these results. Conclusion summarizes the study, our findings, and our expectations.

LITERATURE REVIEW
Environmental problems have become a major concern in recent years. Many papers have suggested that the cause of environmental problems, such as environmental deterioration and frequent haze, lies in automobile exhaust emissions, which has encouraged the development of new-energy vehicles and their related industries [6,7,8]. The traditional automobile industry is driven by oil and consumes many precious resources. Therefore, the promulgation of appropriate policies that promote the innovative development of the new-energy vehicle industry will greatly help solve environmental problems. However, there are many problems to be solved in developing new-energy vehicles. One problem is the development of new-energy charging technology, while another is the gulf between the rate of manufacture of new-energy vehicles and the rate of construction of new-energy vehicle charging piles, which continues to grow.

Scholars have found that the construction of charging pile facilities plays a positive role in the development of new-energy vehicles. Policies supporting EV construction cultivate the EV market, with technical advances and subsidies in China promoting future progress of the EV industry [9]. [10] found that improving the supporting infrastructure has a more obvious effect on the market promotion of new-energy vehicles than technological progress. [11] showed that the construction of charging pile infrastructure provides a stronger incentive to the new-energy vehicle market than government subsidies for vehicle companies. Improvements to charging piles and the supporting facilities of charging stations can affect customers' intention to purchase new-energy vehicles [12-16]. There is a lack of relevant empirical studies in the literature, with most studies considering simulated scenarios, and the situation in a simulated scenario tends to differ from the actual situation.
Additionally, most studies have focused on different factors and perspectives of the planning and layout of site selection, operation modes, and system improvements of charging piles, whereas there have been few tracing studies on actual construction or studies providing a macroscale or comparative perspective. This paper adopts real-world data to conduct a visual network analysis of the overall development of new-energy vehicles and charging piles against the Chinese background of the development of new-energy vehicle charging piles. In addition, considering that new-energy vehicle and charging pile development policies are mostly formulated by province, complex network clustering analysis is conducted on data of the development of public charging piles in 31 provinces and cities in China. Figure 3 shows the process of constructing the time-series network and extracting features.

DATA AND MODEL
Data
Data are collected from the National Bureau of Statistics of China and the China Electric Vehicle Charging Infrastructure Promotion Alliance. After eliminating missing data and outliers, this study processes monthly time-series statistics of EVs and charging piles in China from May 2016 to April 2019 in MATLAB, and Gephi is applied to output the visibility graph and relevant coefficients. Table 1 shows that important relevant policies were launched before and after some of the peaks in these series. China's ratio of new-energy vehicles to charging piles still does not meet the requirements of the development guide. Accelerating the planning and implementation of the reasonable construction of charging piles is the cornerstone of further development.

Principles of Time-Series Visualization
A time series is a series of data points indexed by observation time. Common tasks of time-series data mining are dimension reduction, similarity measurement, classification, cluster analysis, pattern discovery, and visualization. Different time-series analysis methods, such as chaos analysis, fractal analysis, recurrence plots, complexity measures, multiscale entropy, and time-frequency representations, have been developed. In the past decade, scholars have increasingly adopted complex networks to analyze dynamic systems based on time series, for example, investigating the USA's electricity market, stock prices, and even global efforts against terrorism [17][18][19]. The visibility graph (VG) method adopted in this paper is based on the complex network model proposed by [20].

Time series can be divided into univariate time series (UTSs) and multivariate time series (MTSs) according to the number of variables. Traditional methods such as K-Shape and K-MS can be used for the rapid and accurate clustering and classification of UTS data sets, but they are unsuitable for MTS data mining. [21] used different frequencies as multiple variables to construct complex networks, detect community structure characteristics, and analyze the relationship between the clustering coefficient and system evolution; by constructing a common projection axis as the prototype of each cluster, the trail removal algorithm Mc2PCA of the method is given and its time complexity is analyzed. The constructed graph inherits essential properties and inherent features of the time-series data, allowing scholars to analyze and further interpret the original data by applying theoretical methods of complex networks and graph theory.
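As a small illustration of the data preparation step, the sketch below loads a hypothetical monthly series of pile counts with pandas; the file name and column names are assumptions, not the study's actual files.

```python
import pandas as pd

# Hypothetical file layout: one row per month, one column per series.
df = pd.read_csv("charging_piles_monthly.csv", parse_dates=["month"])
series = df.set_index("month")["public_piles"]      # assumed column name
print(series.loc["2016-05":"2019-04"].describe())   # the study window
```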
The VG algorithm transforms a time series {x_i}, i = 1, ..., n, into a visibility graph G = (V, E), where V(G) = {v_i}, i = 1, ..., n, is a set of vertices with vertex v_i corresponding to data point x_i, and E(G) is the set of edges of the graph. We define A = {a_{i,j}}, i, j = 1, ..., n, as the adjacency matrix of the VG, with a_{i,j} = 1 for connected vertices and a_{i,j} = 0 for disconnected vertices. The element a_{i,j} = 1 when the geometrical visibility criterion is fulfilled, i.e., when every intermediate point x_k (i < k < j) lies below the straight line joining (t_i, x_i) and (t_j, x_j):

x_k < x_j + (x_i - x_j)(t_j - t_k) / (t_j - t_i).

The principle of the transformation is stated as follows. The graph is a set of vertices, which are nodes linked to each other by lines called edges. The numbers of new-energy vehicles and charging piles are first counted according to the set time, and statistical histograms are produced accordingly. The height of a histogram bar reflects the volume at each time point, or month, from May 2016 to April 2019. The criterion above determines whether two points are connected: the prerequisite for connecting two points in the network is whether the peaks of the two histogram bars can see each other (i.e., whether a straight line can connect the peaks without crossing any intermediate bar). The time series can then be transformed into the corresponding pairwise relationships, shown as connections between time points on the time dimension. We next acquire the adjacency matrix from the time-series nodes and edges, and we calculate the various features of the resulting networks.

By analyzing the original data, we visualize the time-series data and obtain the complex network characteristics of the new-energy vehicles, all charging piles, and public charging piles. As previously mentioned, we transform the time-series data into complex network form. The network parameters are given in Table 3. We find that, among the three networks, the number of edges and the average degree are largest for the public charging piles, reflecting that there are more peaks and troughs for these piles. The diameters of the networks of the new-energy vehicles, all charging piles, and public charging piles are respectively 3, 4, and 4; these values are the lowest numbers of edges between the two time points with the longest distance. The average path lengths are respectively 2.075, 2.111, and 2.129 for the three networks; these values indicate the average number of edges between any two time points.

Analysis of Centrality
Using data for the period from May 2016 to April 2019, we conduct a quantitative analysis of the networks, including analyses of the degree centrality, betweenness centrality, eigenvector centrality, strength, and clustering coefficient of the complex network.

Degree of Centrality
The degree of centrality is the most direct measurement in the analysis of a network and is, in theory, the simplest measure characterizing the connectivity properties of a single vertex [22]. The degree of centrality of a node is obtained by simply counting the number of edges associated with the node n, and it is positively correlated with the importance of the node in the network. In a network N with k nodes, the degree of centrality of node n, denoted D_C(n), is expressed as

D_C(n) = deg(n) / (k - 1).    (2)

In graph theory, Θ(V²) and Θ(E) are respectively the complexities of calculating the degree of centrality with a dense adjacency matrix and a sparse adjacency matrix, where V represents all nodes and E refers to all edges.
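The sketch below is a minimal illustration of building a VG under the natural visibility criterion above and computing degree centrality with networkx; the toy series is an assumption for demonstration, not the study's data.

```python
import networkx as nx

def visibility_graph(series):
    """Build a natural visibility graph (Lacasa et al.) from a 1-D series."""
    n = len(series)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # i and j see each other if every bar in between stays below the line.
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                G.add_edge(i, j)
    return G

series = [33, 48, 41, 77, 52, 96, 60]    # toy monthly counts
G = visibility_graph(series)
print(nx.degree_centrality(G))           # D_C(n) = deg(n) / (k - 1)
```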
The definition of centrality can be extended from the node to the graph [22]. Assume that n* is the node with the highest degree of centrality in network N. H is then defined as the maximum, over all networks with the same number of nodes, of the quantity

Σ_{j=1}^{k} [D_C(n*) - D_C(v_j)],    (3)

and the centrality of network N is defined as

C(N) = Σ_{j=1}^{k} [D_C(n*) - D_C(v_j)] / H.    (4)

If one node is linked to all other nodes in a network W and all other nodes link only to this central node, H of network W (which is then a star graph) reaches its maximum [22]. Here H = k - 2, and the centrality of network N can be simplified as

C(N) = Σ_{j=1}^{k} [D_C(n*) - D_C(v_j)] / (k - 2).    (5)

Betweenness Centrality
The extent to which the location of a node lies on the paths between other nodes of a graph is measured by its betweenness. Nodes that have higher betweenness centrality values lie on more of the shortest paths between other nodes. According to Sanjiv and Purohit [22], in a network N with n nodes, the betweenness centrality B_C(n) of node n is calculated as follows: first, all the shortest paths of each node pair (p, q) are found; it is then evaluated whether node n is on the shortest paths of each pair (p, q); and the results are finally accumulated. This can be written as

B_C(n) = Σ_{p ≠ n ≠ q} β_pq(n) / β_pq,

where β_pq is the number of shortest paths between nodes p and q and β_pq(n) is the number of those paths passing through node n. Considering the scale of the network, the value can be normalized by dividing by the number of node pairs excluding node n, which is (k - 1)(k - 2) for a directed graph and (k - 1)(k - 2)/2 for an undirected graph.

The computations of the betweenness and closeness centrality are based on computations of the shortest distance. In the search for the shortest path for each node pair, the modified Floyd-Warshall algorithm has complexity of Θ(W³); on a sparse graph, the Brandes algorithm reduces this substantially. On an undirected graph, weighted edges should not be considered in the calculation of the betweenness and closeness centrality of nodes. More importantly, the norm of graph processing is not to use rings or weighted edges, so that relationships stay simple. In these circumstances, adopting the Brandes algorithm requires halving the ultimate centrality owing to the double counting of each shortest path.

Closeness Centrality
According to graph theory, closeness is a measure of the centrality of a node. Shallower nodes (i.e., nodes having shorter geodesic distances) have higher values of closeness; the nodes that are more central have higher closeness values, and closeness thus reflects minimum path lengths in the network. Additionally, closeness is often related to other measurements. The closeness of node n is based on the average geodesic distance (i.e., shortest path length) from node n to the other reachable nodes of the network [22], where the number of reachable nodes satisfies k ≥ 2. The closeness centrality is a measure of the time that it takes for a given node to propagate information to the other reachable nodes in a network. The closeness centrality C_C(n) of node n is defined as the reciprocal of the sum of the geodesic distances d(n, m) to all other nodes N [22]:

C_C(n) = 1 / Σ_{m ≠ n} d(n, m).

Closeness can be obtained using different methods and algorithms. Dangalchev [23] modified the definition of closeness so that it can be applied to a non-connected graph and is easier to calculate:

C(n) = Σ_{m ≠ n} 2^{-d(n, m)}.
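As a quick illustration of these two measures, the snippet below evaluates them on a small toy graph with networkx's built-in implementations (which use Brandes' algorithm for betweenness); the edge list is an assumption for demonstration.

```python
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)])  # small toy graph

# Fraction of pairwise shortest paths passing through each node.
print(nx.betweenness_centrality(G))

# Based on the reciprocal of the average geodesic distance to reachable nodes.
print(nx.closeness_centrality(G))
```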
Eigenvector Centrality
Most hub nodes are identified in line with the integral structure of the network, and the eigenvector centrality is then measured; the dimensions of the distances between nodes are acquired through factor analysis. Each node in a network has a relative index value, based on the principle that a connection from a high-index node contributes more to a node than a connection from a low-index node [24]. Let p_i be the (index) value of node i and A_{i,j} be the adjacency matrix of the network. When node i is a neighbor of node j, A_{i,j} = 1; otherwise, A_{i,j} = 0. Generally, as in the case of a random matrix, each term of A can be a real number representing the connection strength. For node i, the centrality is proportional to the sum of the index values of the nodes connected to it. Thus,

p_i = (1/λ) Σ_{j ∈ M(i)} p_j = (1/λ) Σ_{j=1}^{N} A_{i,j} p_j,

where M(i) is the set of nodes connected to node i, N is the number of nodes, and λ is a constant. In matrix form, P = (1/λ) A P, and the characteristic equation is A P = λ P.

Strength
In a directed and weighted network, the strength denotes the total weight of the edges connecting to one node [25]. In this paper, the strength is a measure of the number of EVs in different provinces. The strength is calculated as

s_i = Σ_{j ∈ N_i} w_{i,j},

where N_i is the set of nodes connected to node i and w_{i,j} is the weight of the edge from node j to node i.

Clustering Coefficient
The clustering coefficient describes a characteristic of the graph (or network). A graph G consists of a number of vertices V and a number of lines (called edges) E between vertices; two vertices joined by an edge are called adjacent. The clustering coefficient of a network is defined as [25]

C = 3 × (number of triangles in the network) / (number of connected triples of vertices).

Clustering
Applying complex network theory to the primal data, the relations within the data are represented on a graph as nodes and edges. In this way, there is a great advantage over the traditional static method in that we can capture dynamic features and community structures. Furthermore, the nodes and edges can be clustered into different groups. Clustering is analyzable in that the correlation between the nodes within a subgroup is higher than that with nodes outside the group. To obtain an effective clustering result, we adopt the widely used modularity principle proposed by Newman [26]. The modularity Q is formulated as

Q = Σ_{i=1}^{K} [ l_i^in / L - ( d_i / (2L) )² ],

where K is the number of subgroups, L is the total number of edges, l_i^in is the number of edges within subgroup i, and d_i is the total degree of subgroup i (counting both the intra-group edges l_i^in and the intergroup edges l_i^inter incident to subgroup i, with L^inter the total number of intergroup edges). The modularity Q is calculated as follows: starting from an initial partition, we repeatedly merge the pair of subgroups that yields the largest increase in Q until Q cannot be improved further, which is the fast modularity method used below.
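A minimal sketch of this fast greedy modularity clustering using networkx is shown below; the toy graph stands in for the monthly networks, and the partition and Q value are only illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)])

# Greedy agglomeration: repeatedly merge the pair of communities
# giving the largest modularity gain (Clauset-Newman-Moore).
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
print("Q =", modularity(G, communities))
```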
Analysis of Centrality
Following the analysis of the degree of centrality of the complex networks, three network centralities are calculated and six indicators are presented in frequency diagrams: the eccentricity, closeness centrality, harmonic closeness centrality, betweenness centrality, strength, and clustering coefficient. A comparison of the three eccentricity distributions reveals that, in the complex network of new-energy vehicles, the eccentricity is low and the distribution is relatively uniform. The eccentricity of the nodes of the charging pile networks is mainly around 3 or 4, and the distribution for all charging piles is especially concentrated. This reflects the rapid manufacture of new-energy vehicles, whereas the rate of construction of charging piles is relatively stable. The three networks have similar distributions of closeness centrality and betweenness centrality, and all appear right-skewed.

In the field of topology and related mathematics, closeness is an elementary concept of topological spaces: intuitively, two sets that are arbitrarily close are said to be tight. This concept is easy to adopt in a metric space, which defines distances between elements, but difficult to extend to a topological space without a specific metric. In network analysis, closeness represents the minimum path length, which means that, in the development of the three networks over the 3 years, there is a high possibility of extreme quantitative values. Additionally, the strengths and clustering coefficients of the three networks are similar, with medium values having the highest frequency, which indicates that the development of the network of EV charging piles is steady and that the provinces of China are well connected and coordinated in the advance of EV infrastructure. The results match those of the analyses in the first part above. Figure 4 shows that the centrality distributions of new-energy vehicles and charging piles are somewhat similar and that they are developing in a coordinated fashion. An increase in the penetration rate of new-energy vehicles requires a foundation of a sufficient number of public charging piles.

Small World
As explained earlier, we use a fast modularity method to cluster the nodes of the networks. The results are shown in Figure 5. The densities of the three networks and the numbers of subgroups are similar. Although the numbers are largely similar, we find that there are more subgroups in the network of new-energy vehicles than in the networks of charging piles. This is because, when we conduct the clustering, if the data are relatively flat with lower peak values, the network distance increases over time, such that the clustering results have more subgroups. Figure 5 shows that, in the three clustering networks, N8 is distinct, which is consistent with the peak in December 2016 in the underlying trend. At this time, the government issued a notice on accelerating the construction of charging piles and supporting facilities for EVs in residential areas. Overall, however, the clustering results for these three networks reveal that the development of the network of EVs and that of the network of piles are in fact inconsistent: the nodes in each subgroup are largely different, and there is therefore still much to do for the pace of manufacture of piles to catch up with that of EVs.

Distribution of the Degree of Centrality
The power law distribution is a common statistical phenomenon. Fitting parameters for the power law distributions are given in Table 4. Figure 6 and Table 4 show that the distributions of the degree of centrality of new-energy vehicles, total charging piles, and public charging piles follow power laws, with many time nodes having few connected edges and the number of nodes decreasing with an increase in the degree of centrality. This indicates that the networks of new-energy vehicles and charging piles are scale-free and that the manufacture of new-energy vehicles and charging piles will be greatly affected by external factors at critical moments. The sales of fuel-powered cars were in the middle of a slump in 2018, but China's new-energy automobile market grew in 2018 relative to sales in 2016 and 2017. This contrast is closely related to a number of new-energy vehicle subsidies (e.g., a tax exemption on the purchase of new-energy vehicles, tariff cuts driving enterprise technology upgrades, and the dual-credit policy), and it shows the importance of promoting government policy.

The visualization of the monthly increase in the number of public charging piles for China's new-energy vehicles in Figure 8 shows that the clustering results for China's provinces can be divided into three categories, listed below.
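Before listing the provincial categories, the following is a minimal sketch of the power-law check behind Table 4 and Figure 6, fitting a line on log-log scales with numpy; the toy degree sequence and the printed exponent only illustrate the procedure.

```python
import numpy as np
from collections import Counter

degrees = [3, 1, 2, 1, 5, 2, 1, 1, 4, 2, 1, 3]   # toy degree sequence
counts = Counter(degrees)
k = np.array(sorted(counts))
p_k = np.array([counts[d] for d in k]) / len(degrees)

# Linear fit in log-log space: log p(k) = -gamma * log k + c.
slope, intercept = np.polyfit(np.log(k), np.log(p_k), 1)
print("estimated exponent:", -slope)
```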
The first category includes Anhui Province, Beijing, Fujian Province, Gansu Province, Guangdong Province, Hainan Province, Hebei Province, Henan Province, Hubei Province, Jiangsu Province, Qinghai Province, Shandong Province, Shanxi Province, Shanghai, Tianjin, Yunnan Province, Zhejiang Province, and Chongqing. The results show that the closeness centrality and degree of centrality of these provinces, which are the accelerating-development areas in the Guidance on the Development of Electric Vehicle Charging Infrastructure 2015-2020 issued by the National Development and Reform Commission of China in 2015, are relatively high. These provinces have a good foundation for the development of EVs in that they have a large population base and a high population density and require intensive haze control. The local governments of these provinces formulated and implemented relevant policies earlier and more frequently to guide the development of new-energy vehicles and charging piles. In addition, Beijing-Tianjin-Hebei, the Yangtze River Delta, and the Pearl River Delta are three key areas for haze prevention and control. In particular, Beijing, Shanghai, Jiangsu, and Guangdong have formulated many policies promoting new-energy vehicles and charging piles for different application scenarios.

Figure 9 shows that Beijing ranks first in the number of public charging piles. In April 2019, the number of public charging piles in Beijing reached 930,000, in particular because of the serious air pollution and the urgent need for its governance. The local government of Beijing has issued a series of policies related to car purchase subsidies and welfare for new-energy vehicles; as an example, there is a lottery for the purchase of new-energy vehicles, which drives their use. Additionally, Beijing considered the construction and use of charging piles earlier and more thoroughly than other provinces. In Shunyi District of Beijing, construction units of public charging facilities that meet the requirements of the state and municipality may apply for government subsidies, and new-energy vehicles using public charging piles are given a charging service fee subsidy. Overall, Beijing's new-energy development is policy driven.

Meanwhile, the use of new-energy vehicles and charging piles in Guangdong Province, which ranks second, is technology driven. Guangdong Province is home to many high-tech new-energy car manufacturers, such as China's leading new-energy car company, BYD, which is headquartered in Shenzhen. The local technological atmosphere supports the development of new-energy cars and charging piles in the province.

Qinghai Province, Gansu Province, and Yunnan Province have weak industrial foundations, insufficient research and development capacities, insufficient promotion policies, lagging infrastructure, and immature market environments. However, they all have a place in the industrial chain of energy resources and new-energy vehicles and charging piles; these provinces are energy driven. Although the development of charging piles in Qinghai Province started relatively late, the province's clean-energy resources have broad application prospects in its new-energy vehicle charging service business, and there are thus similarities between Qinghai Province and the accelerating-development areas in the overall development trend. Additionally, Yunnan Province has unique advantages because of its energy resources.
The local government of Yunnan Province attaches importance to the active development of smart services, closely follows the pace of development in the region, formulates and implements development plans, and is committed to combining the tourism resources of the province with the development of new-energy vehicles. For instance, at tourist distribution centers and key scenic spots, tourist buses, shared cars, and self-driving camps (bases) will be built, and regional charging networks will be created to realize intelligent travel, "a mobile phone to travel in Yunnan". Therefore, although there is a gap in the number of charging piles between Yunnan and the provinces of the eastern region, its development trend is fast.

The second category includes the Xinjiang Uygur Autonomous Region, Tibet Autonomous Region, Inner Mongolia Autonomous Region, Shaanxi Province, Liaoning Province, Jilin Province, and Heilongjiang Province. These are mainly northwestern and northeastern provinces in which the development of the charging pile network started relatively late, mostly from 2017 to 2019. We take the northeastern provinces as an example. Many automobile industry bases were set up in northeastern China when new China was first founded. However, the technologies of fuel energy are now somewhat backward, and the existing industrial base in northeastern China has resulted in a slow conversion from old to new drivers of growth. In addition, as established industries are important to local employment, the local governments pay little attention to new industries, and the development of new-energy vehicles and charging piles has been slow. The northwestern provinces and regions, such as the Xinjiang Uygur Autonomous Region, Tibet Autonomous Region, and Inner Mongolia Autonomous Region, are characterized by vast areas of land and sparse populations. Moreover, there are many ethnic minorities and strong ethnic traditions. The new-energy vehicle market space is small and the costs of constructing charging piles are high in these regions, and the cities have weak development potential except for the provincial capitals and some larger cities. In contrast with the accelerating construction of charging piles in the developing regions, the main purpose of constructing public charging piles in the regions of the second category is to further improve the convenience of transportation, thus strengthening connectivity, accelerating regional development, and gradually building a national inter-city fast-charging network based on expressways.

The third category includes Sichuan Province, Guizhou Province, and the Guangxi Zhuang Autonomous Region. The development of the new-energy vehicle charging pile network began reasonably early, around 2016, in each of these three provinces. However, none of them has advantages in the industrial chain, and the automobile industry is weak in these provinces. At the same time, owing to the renewal of new-energy vehicles in the eastern regions, old fuel-based vehicles, whose emission specifications are nonetheless superior to those of the vehicles existing in the western region, have been phased out there; these old vehicles are flowing into the western market because their second-hand transaction prices are lower, squeezing the already insufficient space for EVs in the car market. In addition, the geographical conditions of these provinces are a major disadvantage to the adoption of new-energy vehicles and charging piles.
Rugged terrain accelerates the power consumption of new-energy vehicles, and the efficiency of planning and constructing charging piles is limited. There is thus little motivation to purchase new-energy vehicles, and the overall development of the EV network is slow relative to the development of the economic base. The networks are also sensitive to external events, such as policy implementations, and it is thus crucial to formulate policies that can be effectively implemented.

CONCLUSION
China is in the era of "New Infrastructure". We carried out cluster analysis on provincial data of public charging piles after time-series visualization, considering that relevant industrial development policies are mostly formulated by provincial governments. The results of the research are summarized as follows.

1) Regional factors play a dominant role in the development of the networks of new-energy vehicles and charging piles. A basic regional characteristic of China is that the eastern provinces are more developed than the western provinces, which is clearly reflected in the clustering results. The developed eastern provinces, with a high degree of urbanization, high population densities, and superior economic foundations, have good application conditions for the development of networks of new-energy vehicles and charging piles in that they have a broad market space and rapid socioeconomic development.

2) Whether a province occupies a place in the industrial chain of new-energy vehicles and charging piles, and how important that place is, strongly affect the state of local construction. For the upstream provinces, the reserves of energy resources and the difficulty of their collection are important; for the downstream provinces, the research and development capability and the technical level of the relevant enterprises are important.

3) National and local industrial policies play an important role in the development of the networks of new-energy vehicles and charging piles. In 2018, various ministries and commissions in China issued a series of policies that promoted the rapid development of the networks of new-energy vehicles and charging piles. As an example, the Ministry of Industry and Information Technology issued the Notice on Strengthening the Administration of the Catalogue of New-Energy Vehicles Exempted from Vehicle Purchase Tax (Draft for Comments). Additionally, the Ministry of Finance, the Ministry of Science and Technology, and the Development and Reform Commission issued the Notice on Adjusting and Improving the Financial Subsidy Policy for the Application of New-Energy Vehicles, which raised the technical threshold requirements, improved the subsidy standards, and adjusted the driving-range requirements for new-energy vehicle subsidies.

Overall, the outlook of the domestic new-energy vehicle market in China remains good, and the development potential is extremely large. With the replacement of social energy sources and on the basis of the good development prospects of China's new-energy vehicles, charging piles will inevitably be adopted broadly as the supplementary energy infrastructure of new-energy vehicles. Provinces that are developing rapidly need to further improve the efficiency of charging pile construction. The construction of charging piles and the development of new-energy vehicles promote and restrict each other; to further develop the network of new-energy vehicles, the premise must be to reduce the vehicle-to-pile ratio.
Provinces that are developing slowly need to upgrade their industrial structures in light of local conditions and enhance the conditions for new-energy applications. Figure 10 shows the similarity of EV development between provinces in China. We suggest that governments gain an in-depth understanding of the local application basis, geographical factors, cultural factors, and other application conditions before making future policies. Governments should set reasonable development goals, actively implement a subsidy policy for the construction of new-energy vehicle charging piles, and scientifically guide the construction of EV charging infrastructure. We suggest that, in the future construction of charging piles, enterprises consider reasonable construction that balances supply and demand and combines charging piles with new intelligent infrastructure and the Internet of Things; as an example, the sharing mode can be combined with the operation of charging piles. Through win-win cooperation among the government, enterprises, and users, it will be possible to promote the rapid development of the EV industry and create a better air environment.

The present study adopted a single index to analyze China's use of new-energy vehicles and charging piles owing to limitations of data breadth, scale, and accuracy. In future studies, we will further collect data on relevant indicators and use complex networks in a coupled analysis to explore in detail the reasons for variations in development across provinces and cities.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Development of a consensus core dataset in juvenile dermatomyositis for clinical use to inform research Objectives This study aimed to develop consensus on an internationally agreed dataset for juvenile dermatomyositis (JDM), designed for clinical use, to enhance collaborative research and allow integration of data between centres. Methods A prototype dataset was developed through a formal process that included analysing items within existing databases of patients with idiopathic inflammatory myopathies. This template was used to aid a structured multistage consensus process. Exploiting Delphi methodology, two web-based questionnaires were distributed to healthcare professionals caring for patients with JDM identified through email distribution lists of international paediatric rheumatology and myositis research groups. A separate questionnaire was sent to parents of children with JDM and patients with JDM, identified through established research networks and patient support groups. The results of these parallel processes informed a face-to-face nominal group consensus meeting of international myositis experts, tasked with defining the content of the dataset. This developed dataset was tested in routine clinical practice before review and finalisation. Results A dataset containing 123 items was formulated with an accompanying glossary. Demographic and diagnostic data are contained within form A, collected at baseline visit only; disease activity measures are included within form B, collected at every visit; and disease damage items within form C, collected at baseline and annual visits thereafter. Conclusions Through a robust international process, a consensus dataset for JDM has been formulated that can capture disease activity and damage over time. This dataset can be incorporated into national and international collaborative efforts, including existing clinical research databases. Introduction Juvenile dermatomyositis (JDM) is associated with significant morbidity and mortality. [1][2][3] To better understand this rare disease, 4 international collaboration is essential. This is feasible with the development of national and international electronic web-based registries and biorepositories. 5 6 For good clinical care and to aid comparison of data between groups, it is crucial to have a common dataset that clinicians and researchers collect in a standardised way, with items clearly defined. The International Myositis Assessment and Clinical Studies (IMACS) Group [7][8][9] and Paediatric Rheumatology International Trials Organisation (PRINTO) [10][11][12] JDM core sets were developed predominantly for research studies. Existing myositis registries include partially overlapping but different dataset items, making comparison between groups challenging. 13 This study aimed to define optimal items from existing datasets that would be useful to collect in routine practice, within accessible disease-specific registries, and that, when measured over time, would help capture disease outcome/treatment response, facilitating both patient care and translational research. Methods The study protocol and background work have been published. 13 14 The study is registered on the Core Outcome Measures in Effectiveness Trials initiative database. 15 The Core Outcome Set-STAndards for Reporting (COS-STAR) reporting standards were followed. 16 The study overview is shown in figure 1.
Background work A steering committee (SC) developed a prototype dataset by scrutinising all items within existing international databases of juvenile-onset myositis (JM) and adult-onset myositis, 1 17-19 informed by a literature search and detailed analysis of the UK Juvenile Dermatomyositis Cohort Biomarker Study and Repository (JDCBS). 13 19 Leading representatives of each partner organisation 9 12 17 20 21 detailed in the study protocol 14 approved the template/provisional dataset. Stakeholder groups This study design aimed to employ representation from healthcare professionals with experience in myositis working as physicians, allied health professionals or clinical scientists in paediatric or adult medicine within rheumatology, neurology or dermatology 14 and consumers (patients with JM and their parents or carers). Healthcare professional Delphi process A two-stage Delphi process was undertaken. 14 Items contained within the prototype dataset were listed and further modified by the SC to ensure clarity. The items were formatted into a custom-made electronic questionnaire, piloted before distribution. After modifications, the Delphi template included 70 items with an additional 53 conditional on previous response (detailed in online supplementary table S1). Participation was invited via membership lists of IMACS, Childhood Arthritis and Rheumatology Research Alliance (CARRA), Juvenile Dermatomyositis Research Group (JDRG) UK and Ireland, Paediatric Rheumatology European Society (PReS) JDM working party and PRINTO Centre Directors. These are representative of international paediatric rheumatology and myositis specialty groups, capturing opinion of clinicians, scientists and allied health professionals. The estimated membership of these groups totals more than 1000. However, the majority of members belong to more than one organisation, and membership lists include retired/non-active members or specialists working in adult-onset myositis potentially less inclined to answer a paediatric-specific survey. 14 Participants were asked to rate the importance of each item for clinical practice and separately for value in research, using a scale of 1-9: 1-3 (of low importance), 4-6 (important but not critical) and 7-9 (critical). 14 An option of 'unable to score' was given and free text comments were allowed. Delphi 2 was sent to participants who scored 75% or more of the items in round 1 of the Delphi. Each participant was asked to re-score each item, having been shown the distribution of scores for the group as a whole and their own score. Patient and parent survey The healthcare professionals' survey was modified into separate parent and patient questionnaires as per protocol, 14 formatted for computer or paper format completion. The questionnaires and age-appropriate information leaflets were reviewed by patient and public involvement coordinators and by parent/young people's focus groups. 14 The focus groups also reviewed patient/parent-reported outcome measures (PROMs) used for JDM and other rheumatology conditions, [22][23][24][25][26][27] and opinions were summarised (online supplementary table S2).
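As an aside, the Delphi summarisation and round-2 feedback steps described above (each participant re-scores an item after seeing the group's score distribution and their own round-1 score) can be illustrated with a minimal sketch; the item name and scores below are invented, not study data.

```python
# Sketch of Delphi round-2 feedback: show each rater the group distribution,
# the percentage rating the item critical (7-9), and their own round-1 score.
from collections import Counter

item = "muscle_strength_testing"                 # hypothetical dataset item
round1_scores = {"rater_01": 8, "rater_02": 6, "rater_03": 9, "rater_04": 7}

distribution = Counter(round1_scores.values())
pct_critical = 100 * sum(s >= 7 for s in round1_scores.values()) / len(round1_scores)

for rater, own_score in round1_scores.items():
    print(f"{item} | group distribution {dict(sorted(distribution.items()))} | "
          f"critical (7-9): {pct_critical:.0f}% | your round-1 score: {own_score}")
```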
Thirty items were included in patient/parent questionnaires; 23 from adaptation of the Delphi (combining or simplifying items from the healthcare professional questionnaire and selecting items particularly relevant to patients/parents), 2 additional questions added by the SC to determine patient/parent perspectives on collecting and storing information, plus 5 questions suggested by patients/parents within focus groups (online supplementary table S1). The scoring system was simplified into three categories of 'not that important', 'important' and 'really important'. An option of 'unable to score' was given and free text comments were allowed. Participation was open to any patient with JM (child or adult), or any parent/carer of a child with JM. Patients with adult-onset myositis (onset ≥18 years) were excluded. Information leaflets and questionnaires were in English only; translators could be used if available. Patients/parents were signposted to the study via email distribution lists/websites of North American and UK patient support groups (Cure JM and Myositis UK), 28 29 the lead of the JDRG patient/parent groups and JDRG coordinator. 20 In addition, following site-specific ethics approval, UK centres participating in the JDCBS 19 30 and a Netherlands site invited patients/parents to participate. Data analysis For each item, the number and percentage of participants who scored the item and the distribution of scores (grades 1-9) were summarised for each stakeholder group. Consensus definitions were applied as 'consensus in' versus 'equivocal' or 'consensus out' according to predefined consensus definitions (table 1). Consensus meeting Eighteen voting delegates were invited to a 2-day consensus meeting, led by a non-voting facilitator (MWB). International representatives were experts in myositis from paediatric rheumatology/myositis groups and professionals who care for patients with myositis including neurologists, dermatologists, adult rheumatologists and physiotherapists. Prior to the meeting, delegates were sent a summary of results to review. During the consensus meeting, Delphi 2 results and patient/parent results were presented for each item, as shown in online supplementary figure 1. Items achieving 'consensus in' within the Delphi and patient/parent questionnaires were voted on immediately. Those not achieving 'consensus in' were discussed by nominal group technique. Consensus was defined a priori as ≥80% (table 1). Discussion and re-voting allowed refinement of items or associated definitions. The process continued until consensus was reached or until it was clear that consensus would not be reached. Testing in practice The proposed dataset was formatted into three sections (forms A, B and C) and tested in clinical practice. Members of the expert group were asked to test the dataset themselves and/or delegate a member of their department unfamiliar with the dataset. Clinicians completed patient-anonymised data on one to two patients under their care and a feasibility questionnaire (online supplementary table S3). Feedback was considered by the SC and refinements made. The dataset was sent to the expert group, including representatives of partner organisations (IMACS, CARRA, PRINTO, PReS JDM working group, JDRG, Euromyositis) for comment. Results Two hundred and sixty-two healthcare professionals accessed the system (26% of the estimated total membership of specialty groups). 181/262 (69%) completed ≥75% of Delphi 1 (June-September 2014).
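A minimal sketch of the consensus classification described above follows. The actual cut-offs are defined in table 1 of the paper and are not reproduced here; the 70%/15% thresholds below are a common Delphi convention used purely as an assumption.

```python
# Classify a Delphi item as 'consensus in', 'consensus out', or 'equivocal'.
# Thresholds are assumptions standing in for the paper's table 1 definitions.
def classify(scores, critical_cut=0.70, low_cut=0.15):
    n = len(scores)
    pct_critical = sum(s >= 7 for s in scores) / n   # fraction scoring 7-9
    pct_low = sum(s <= 3 for s in scores) / n        # fraction scoring 1-3
    if pct_critical >= critical_cut and pct_low <= low_cut:
        return "consensus in"
    if pct_low >= critical_cut and pct_critical <= low_cut:
        return "consensus out"
    return "equivocal"

print(classify([9, 8, 7, 9, 8, 7, 7, 6]))  # -> consensus in
print(classify([2, 3, 5, 7, 4, 6, 8, 3]))  # -> equivocal
```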
One hundred and sixty-five agreed to take part in Delphi 2 (November 2014-January 2015); from these, 146 replies were received (12% attrition). One hundred and seventy-two participants provided full demographic data in round 1, showing that survey responses were received from Europe (44%), North America (34%), Latin America (12%), Asia (6%), Australia/Oceania (0.5%), Middle East (3%) and Africa (0.5%). Respondents were primarily paediatric or adult rheumatologists (85%) or had an interest in rheumatology (8%), but also included clinical academics (specialty not defined, 4%), dermatologists (0.5%), neurologists (0.5%), physiotherapists (1%) or other professionals (1%). The majority of respondents had substantial experience in the specialty (74% with ≥10 years of experience) and worked within paediatrics/mainly paediatrics (82.5% vs 17.5% of respondents working with adults). Responses were summarised as percentages of participants ranking items as critical for decision-making (score 7-9) for each item (clinical/research), shown in online supplementary table S1. Availability of investigations to clinicians within clinical practice was also summarised from responses received in Delphi 1 (online supplementary table S4 and online supplementary figure S1). Patient/parent surveys In total, 301 surveys were completed (198 from parents, 103 patients). To allow time for sufficient data capture for parent/patient questionnaires, data collection continued after the consensus meeting. At the consensus meeting, data were available from 16 completed patient surveys and 22 parent surveys. Decisions made at the consensus meeting with 38 responses still held true in the final analysis of 301 replies. Responses were received from Europe (53%), North America (44%) and other continents (3%). Patients completing the questionnaire were a median of 15 years of age (IQR 12-17). Parents completed questionnaires for children who had a median age of 11 years (IQR 7-15). Overall, there was good agreement between patient/parent surveys and the healthcare professionals' Delphi and items agreed at the consensus meeting (online supplementary table S1). Key exceptions are summarised in table 2. Consensus meeting and output All invited experts (n=18) attended the consensus meeting (Liverpool, March 2015), representing Europe (n=10), North America (n=6), Latin America (n=1) and Asia (n=1). Specialties included paediatric rheumatology (n=13), adult rheumatology (n=2), paediatric dermatology (n=1), paediatric neurology (n=1) and physiotherapy (n=1). Parents/patients were not included. Output from the consensus meeting is shown in online supplementary table S1. A set of recommendations for first visit, for each visit and for annual assessment was made. Refinement took place following the consensus meeting via three rounds of SurveyMonkey, principally to better define myositis overlap features and disease damage items (shown in online supplementary table S1), with the same members of the expert group (100% response rate). Testing the dataset in practice Glossaries of definitions/instructions to aid completion, along with muscle strength-testing sheets, were formulated into appendices, approved by the SC. Twenty clinicians tested the dataset (October 2016-April 2017); eight were present at the consensus meeting, three had completed the Delphi and nine were new to the dataset. Time taken to complete the dataset in clinical practice ranged from 5 to 45 min (median time 15 min).
[Table excerpt (dataset items): Were myositis-specific antibodies tested at diagnosis? If positive, select all that apply (eight options). Were myositis-associated antibodies tested at diagnosis? If positive, select all that apply (nine options). Treatments received prior to diagnosis of JDM.] In addition, 15/20 (75%) found the dataset helpful in practice. Feedback was reviewed in detail by the SC and refinements made. Completed optimal dataset The resulting optimal dataset is summarised within tables 3-5 representing three forms. Discussion An internationally agreed JDM dataset has been designed for use within a clinical setting, with the potential to significantly enhance research collaboration and allow effective communication between groups. The accompanying glossary of definitions may be particularly helpful to those in training or physicians less familiar with JDM and for standardisation of the information. Key items are included within the dataset that allow documentation of disease activity and damage with the ability to measure change over time. If adopted widely, the dataset could enable analysis of the largest possible number of patients with JDM to improve disease understanding. It is anticipated that further ratification of the dataset will take place when incorporated into existing registries and national/international collaborative research efforts. It is acknowledged that updates may be needed in the future to incorporate advances in JDM. When tested in practice by a small number of clinicians, the forms took between 5 and 45 min to complete. The wide range is likely to be due to some respondents interpreting this question as time taken to complete the actual forms, while others may have documented time taken to complete all the tasks within the forms, including clinical examination. It is likely that completion time will be reduced as clinicians become familiar with the questions over time and with the employment of electronic data entry systems. The dataset does not encompass every aspect of a clinic consultation. Other factors such as adverse effects of medication or details of pain (ranked important by patients/parents) should be covered as part of standard care. This study has benefited from the enormous contribution of patients and parents. It is interesting that patients do not necessarily perceive items such as shortness of breath, chest pain and abdominal symptoms as important in JDM, whereas for clinicians, major organ involvement has important implications for prognosis and treatment choices. [31][32][33][34][35][36][37] Likewise, growth and pubertal parameters were rated less important by patients/parents but retained due to the impact of active disease and corticosteroid treatment on growth. 38 39 Self-assessment is allowable to make pubertal assessment more acceptable to patients. 40 Notable discrepancies in healthcare professional and patient/parent opinion included the use of PROMs capturing function and health-related quality of life (HRQOL). The benefits and limitations of individual tools have been described. 22 27 Within this study, comments from patient/parent surveys and focus groups suggested a dislike of 0-10 cm scales used in VAS measurements (data not shown). It is possible that a pain/general VAS is not adequate to capture the complexity of pain or overall feelings for a patient, particularly due to the variability of the disease.
Despite this caveat, clinicians recognise the need to have outcome-driven data that include measures of activity, participation, pain and HRQOL. 27 Patients with JDM have been found to have significant impairment in their HRQOL compared with healthy peers. 41 PROMs used within the IMACS and PRINTO core sets, including the Childhood Health Assessment Questionnaire and Child Health Questionnaire, are not designed specifically for JDM but have been evaluated and endorsed for use in juvenile myositis. 22 The Juvenile Dermatomyositis Multidimensional Assessment Report (JDMAR) is a multifunctional tool that includes function, quality of life, fatigue and adverse effects of medications that has been specifically developed for JDM. 23 It is currently undergoing further validation. Fatigue, rated as important by parents in this work, is included within the JDMAR. During the consensus meeting, it was not possible to define a single agreed PROM for function (activity) or HRQOL (participation) despite taking into consideration results of the healthcare professionals' Delphi, patient/parent surveys and feedback from patients within a UK focus group (online supplementary table S2). The difficulty of PROMs being internationally accepted was discussed and noted. Specifically, items within tools developed in Europe/North America may not be relevant in economically less developed countries. It was agreed that the dataset would include a recommendation to use 'an age-appropriate patient/parent-reported outcome of function' and 'an age-appropriate patient/parent-reported measure of quality of life'. More work is needed to make PROMs acceptable to patients/parents and applicable to their disease. 42 43 This study is limited by the fact that patient/parent questionnaires were available in English only, reducing the number of countries that could contribute; hence, there is low patient participation outside of Europe and the USA. Complete data from patient/parent surveys were not available at time of the consensus meeting. However, reanalysis of outcomes after the close of the patient/parent survey showed that decisions made at the consensus meeting still held. Initial response rate to Delphi 1 was low (estimated at 26% of potential specialty group membership). However, not all members of the respective organisations contacted would be expected to answer a paediatric-specific survey as described previously. Response rates and attrition between Delphi 1 and 2 were as expected from paediatric rheumatology studies with similar methodology. [44][45][46] Despite inclusion of neurology and dermatology experts in the consensus meeting, the participants of this study were primarily rheumatologists. Considerable discussion took place during the consensus meeting regarding the assessment of cutaneous disease in myositis. There are many tools available, 22 but no single tool has been universally accepted. It can be difficult to define skin activity versus damage, particularly without a skin biopsy. After voting on individual skin items and comparing two tools endorsed in JDM, the abbreviated Cutaneous Assessment Tool (aCAT) and Disease Activity Score (DAS) skin score, 22 agreement was reached to use items within the aCAT as disaggregated skin manifestations. These items are recognised to reflect cutaneous lesions associated with disease activity and damage in juvenile and adult myositis. 22
Within the item 'periungual capillary loop changes', 'measure of nailfold capillary density if available' was added in recognition of nailfold density relating to prognosis. 47 48 A direct comparison of all available skin tools was outside the remit of this study. Recent published work evaluating the Cutaneous Dermatomyositis Disease Area and Severity Index (CDASI) and the Cutaneous Assessment Tool Binary Method (CAT-BM) in JDM confirms the reliability of both tools when used by paediatric dermatologists or rheumatologists. 49 The consensus-driven dataset developed in this study, like the IMACS and PRINTO core sets, includes physician and patient/parent global activity, each of which is included in recently defined response criteria for minimal, moderate and major improvement in JDM. 8 IMACS measures muscle strength using Manual Muscle Testing, whereas CMAS is used within the PRINTO core set. Both were retained in the consensus dataset. Both tools have been found to have very good inter-rater reliability (when summary scores are used) 22 and either is allowed in the recently defined American College of Rheumatology/European League Against Rheumatism-approved response criteria. 8 The overlap between the IMACS/PRINTO core sets and items contained within the consensus dataset is unsurprising as all core sets aim to capture and measure disease activity and damage over time. A key difference is that the consensus dataset does not use specific tools to record disease activity, such as the Myositis Disease Activity Assessment Tool or the DAS, but rather uses disaggregated items, each of which has been evaluated by a multistage consensus-driven process that considered value for both clinical use and research. The dataset was developed with a key aim for it to be incorporated into existing registries, allowing comparison of data between groups. The already available web-based Euromyositis registry, www.euromyositis.eu, is free to use in clinical practice and for research and includes a JDM proforma, which will be modified where needed to include items in this new dataset. Likewise, at the time of writing, the CARRA Registry is in the final stages of adding JDM (https://carragroup.org/) and will include the items contained in this consensus dataset. The JDCBS (https://www.juveniledermatomyositis.org.uk/) aims to incorporate this dataset as far as possible. Research priorities defined during the consensus meeting included the need to further develop skin assessment tools that are practical within a busy clinical setting, develop an abbreviated muscle assessment tool that removes redundant items from a combined Childhood Myositis Assessment Scale and Manual Muscle Testing, and further develop PROMs so that they are applicable to JDM and acceptable to patients. Conclusion Through a robust international consensus process, a consensus dataset for JDM has been formulated that can capture disease activity and damage over time. This dataset can be incorporated into national and international collaborative research efforts, including existing clinical research databases, and used routinely while evaluating patients with JDM.
Author affiliations 1 Department of Paediatric Rheumatology, Alder Hey Children's NHS Foundation Trust, Liverpool, UK 2 Department of Paediatric Rheumatology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK membership who contributed to this work and the IMACS Scientific Committee for guidance, particularly Dr Rohit Aggarwal, Dr Frederick Miller and Dr Dana Ascherman. We would like to acknowledge the PRINTO directors who contributed to this work and particularly acknowledge Dr Nicola Ruperto, representing PRINTO. We acknowledge the support of OMERACT and, in particular, would like to thank Professor Maarten Boers for his advisory role. We would like to thank the UK Trainees Group and individuals selected by the steering committee for piloting the questionnaire before distribution. We would like to acknowledge our collaborators within the COMET group, in particular Heather Bagley, Patient and Public Involvement Coordinator. We would also like to acknowledge the help and advice of Professor Bridget Young, Olivia Lloyd and Helen Hanson and the Clinical Studies Group consumer representatives and BSPAR Parent Group, particularly Sharon Douglas. We would like to thank young people from the NIHR Young Person's Advisory Group and the JDM Young Person's Group for their advice on patient questionnaires and information leaflets, and Hema Chaplin, Patient and Public Involvement and Engagement Lead for the ARUK Centre for Adolescent Rheumatology, for facilitating these groups. We acknowledge the support and collaboration of patient/parent support groups, Cure JM and Myositis UK. In particular, we would like to thank these organisations for promoting the patient and parent questionnaires via their websites/email lists. We would like to thank Katie Arnold, Lawrence Brown, Kath Forrest and Karen Barnes for their administrative support for this work. We would also like to thank Eve Smith and Nic Harman for note keeping during the consensus meeting. We would like to thank the following people for testing the dataset in practice: Bianca Lang, Silvia Rosina, Heinrike Schmeling, Krystyna Ediger, Olcay Jones, Latika Gupta, Maria Martha Katsica, Kiran Nistala, Parichat Khaosut, Ceri Turnbull, Joyce Davidson and Megan Curran. Contributors LJM has led all parts of the study including background work, preparation of the protocol, ethics submissions, content of surveys, planning of the consensus meeting, testing the dataset and writing the manuscript. CAP, AMH, AR and LRW, as members of the steering committee, have provided intellectual input and practical help into all parts of the study including background work, protocol development, the Delphi survey, planning of the consensus meeting, reviewing results, refining the dataset and preparing the manuscript. AMH, AR and CAP also tested the dataset in practice. DA developed the bespoke Delphi system and provided IT support for the study including data analysis. JJK participated in the study design, was responsible for testing the Delphi system, performed the statistical analysis, helped prepare for the consensus meeting, and was involved in reviewing results and preparing the manuscript. PRW has provided expert advice on study design and Delphi methodology and analysis. AA, LC-S, TC, BMF, IL, SM, PM, RM, LMP, AMR, LGR, AvRK, RR and SS attended the consensus meeting and had intellectual input into the study. In addition, AA, AvRK, LMP, LGR and RR tested the dataset in practice.
MWB has been responsible for intellectual and financial overview of the study, input into the protocol development and, as a member of the steering committee, has provided intellectual input into the Delphi survey and consensus meeting, reviewing results, facilitating the consensus meeting, refining the dataset and preparing the manuscript. Funding This work was supported by Arthritis Research UK (grant number 20417); January 2014-July 2017. The UK JDM Cohort and Biomarker Study has been supported by generous grants from the Wellcome Trust UK (085860), Action Medical Research UK (SP4252), the Myositis Support Group UK, Arthritis Research UK (14518) and the Henry Smith Charity. LRW's work is supported in part by Great Ormond Street Children's Charity, the GOSH/ICH NIHR-funded Biomedical Research Centre (BRC) and Arthritis Research UK. The JDM Cohort Study is adopted onto the Comprehensive Research Network through the Medicines for Children Research Network (www.mcrn.org.uk) and is supported by the GOSH/ICH Biomedical Research Centre. LGR was supported in part by the Intramural Research Program of the National Institute of Environmental Health Sciences, National Institutes of Health.
Perihematomal Edema and Clinical Outcome in Intracerebral Hemorrhage Related to Different Oral Anticoagulants Background: There is a need to examine the effects of different types of oral anticoagulant-associated intracerebral hemorrhage (OAC-ICH) on perihematomal edema (PHE), which is gaining considerable appeal as a biomarker for secondary brain injury and clinical outcome. Methods: In a large multicenter approach, computed tomography-derived imaging markers for PHE (absolute PHE, relative PHE (rPHE), edema expansion distance (EED)) were calculated for patients with OAC-ICH and NON-OAC-ICH. Exploratory analysis for non-vitamin-K-antagonist OAC (NOAC) and vitamin-K-antagonists (VKA) was performed. The predictive performance of logistic regression models, employing predictors of poor functional outcome (modified Rankin scale 4–6), was explored. Results: Of 811 retrospectively enrolled patients, 212 (26.14%) had an OAC-ICH. Mean rPHE and mean EED were significantly lower in patients with OAC-ICH compared to NON-OAC-ICH (p-value 0.001 and 0.007), whereas mean absolute PHE did not differ (p-value 0.091). Mean EED was also significantly lower in NOAC compared to NON-OAC-ICH, p-value 0.05. Absolute PHE was an independent predictor of poor clinical outcome in NON-OAC-ICH (OR 1.02; 95%CI 1.002–1.028; p-value 0.027), but not in OAC-ICH (p-value 0.45). Conclusion: Quantitative markers of early PHE (rPHE and EED) were lower in patients with OAC-ICH compared to those with NON-OAC-ICH, with significantly lower levels of EED in NOAC compared to NON-OAC-ICH. Increase of early PHE volume did not increase the likelihood of poor outcome in OAC-ICH, but was independently associated with poor outcome in NON-OAC-ICH. The results underline the importance of etiology-specific treatment strategies. Further prospective studies are needed. Introduction In light of the aging population with increased cardiovascular comorbidity, the use of oral anticoagulation (OAC) is steadily expanding [1,2]. The incidence of oral anticoagulation-related intracerebral hemorrhage (OAC-ICH) is growing, due to the increasing variety of medical treatment options [1,3]. The optimal treatment strategy is still uncertain, and OAC-ICH is often associated with greater morbidity and mortality compared to non-oral anticoagulation-related intracerebral hemorrhage (NON-OAC-ICH) [1][2][3][4]. Over the temporal course of the initial insult, perihematomal edema (PHE) may develop, reflecting secondary brain injury [5]. There is evidence from both clinical and experimental studies to suggest that in the context of OAC, early PHE formation is altered [6][7][8]. In light of novel alternative OAC treatment options, the generalizability of these results needs to be validated, as these findings were restricted to vitamin K antagonists [1,9]. At the same time, PHE is gaining increasing attention as a promising surrogate marker not only for secondary brain injury, but also for clinical outcome [10][11][12][13]. Given these reports, a better understanding of PHE formation in OAC-ICH patients, using established PHE quantification methods, could ultimately help to improve clinical treatment. We hypothesized (1) lower early PHE in patients with OAC-ICH compared to NON-OAC-ICH and (2) therefore a lower predictive value of early PHE for clinical outcome.
To test and evaluate this hypothesis, we present a two-phase analysis: First, computed tomography (CT)-derived imaging markers for early PHE were calculated in patients with OAC-ICH and NON-OAC-ICH (absolute PHE, relative PHE (rPHE), edema expansion distance (EED)). A further subgroup analysis for differences between NOAC and vitamin K antagonists (VKA) was performed. In a second approach, a logistic regression model was established to identify differences in independent predictors of clinical outcome in OAC-ICH and NON-OAC-ICH. Study Population We retrospectively parsed the databases of two German tertiary stroke centers for patients with spontaneous ICH aged >18 years between January 2016 and April 2019 (University Medical Center Hamburg-Eppendorf, Germany, and Charité University Hospital Berlin, Germany). As inclusion criteria, we defined (1) primary acute ICH confirmed on NCCT (Non-contrast Computed Tomography), with or without CT angiography (CTA), and (2) symptom onset within 12 h. Both databases excluded patients with head trauma, brain tumor, vascular malformation, primary intraventricular hemorrhage, or secondary ICH from hemorrhagic transformation of ischemic infarction. Clinical data for OAC and antiplatelet therapy were documented. Type of OAC medication was documented if available. Clinical parameters also included vascular risk factors (arterial hypertension and diabetes mellitus), time difference from symptom onset to NCCT, both Glasgow Coma Scale (GCS) and National Institutes of Health Stroke Scale (NIHSS) on admission, clinical outcome defined by modified Rankin scale (mRS) at 90 days, and surgical procedures (craniectomy, extra-ventricular drainage (EVD) placement), all obtained from patients' clinical records. Patients were dichotomized into patients with OAC-ICH and NON-OAC-ICH. This multicenter retrospective study was approved by the ethics committee (Ethik-Kommission der Ärztekammer Hamburg, Ethik-Kommission der Charité Berlin) and written informed consent was waived by the institutional review boards. All study protocols and procedures were conducted in accordance with the Declaration of Helsinki. Patient consent was not needed because of the retrospective nature of the study. The data that support the findings of this study are available from the corresponding authors upon reasonable request. Image Acquisitions NCCT scans were performed using standard clinical parameters, with an axial <5 mm section thickness. All datasets were inspected for quality and excluded in case of severe motion artifacts. In detail, the images were acquired on the following scanners: 256-slice scanner (Philips iCT 256, Philips, Amsterdam, Netherlands) with 120 kV, 280-320 mA, <5.0 mm slice reconstruction and <0.5 mm in-plane resolution and CTA with 100-120 kV, 260-300 mA, 1.0 mm slice reconstruction, 5 mm MIP (maximum intensity projection) reconstruction with 1 mm increment, 0.6-mm collimation, 0.8 pitch, H20f soft kernel, 80 mL highly iodinated contrast medium and 50 mL NaCl flush at 4 mL/s; scan starts 6 s after bolus tracking at the level of the ascending aorta.
Eighty-slice scanner (Toshiba Aquilion Prime, Toshiba, Tokyo, Japan) with 120 kV, 280 mA, <5.0 mm slice reconstruction and <0.5 mm in-plane resolution and CTA with 100-120 kV, 260-300 mA, 1.0 mm slice reconstruction, 5 mm MIP reconstruction with 1 mm increment, 0.5-mm collimation, 0.8 pitch, H20f soft kernel, 60 mL highly iodinated contrast medium, and 30 mL NaCl flush at 4 mL/s; scan starts 6 s after bolus tracking at the level of the ascending aorta. Image Analysis Data were retrieved in Digital Imaging and Communications in Medicine (DICOM) format from the local picture archiving and communication system (PACS) servers and anonymized in compliance with the local guidelines. Two experienced neuroradiologists (JN and SE) assessed and documented the following imaging features on admission and follow-up NCCT scans: (1) intraventricular hemorrhage; (2) ICH location; (3) craniectomy or EVD placement in the follow-up CCT scans. ICH locations were classified as basal ganglia, thalamus, lobar, brainstem/pons, and cerebellar. In the following process, ICH and PHE were segmented semi-automatically on the basis of the original NCCT images [14,15]. Regions of interest (ROIs) were delineated using Analyze 11.0 Software and ITK-SNAP 3.8.0 Software (University of Pennsylvania, Philadelphia, PA, USA and University of Utah, Salt Lake City, UT, USA) [15][16][17]. The ROI histogram for ICH was sampled between 20 and 80 Hounsfield units (HU) to exclude voxels that likely belong to cerebrospinal fluid or calcification. The ROI histogram for PHE was sampled between 0 and 30 HU to exclude voxels that likely belong to leukoaraiosis [14]. Consensus ROIs were derived based on overlapping segmentations of both readers. Both readers were blinded to all clinical information and bleeding location. Discrepancies were settled by joint discussion of the 2 readers and a third and fourth reader, FS and UH. (JN, SE, and FS: 4 years clinical experience in diagnostic neuroradiology in an academic full-service hospital; UH: 8 years clinical experience in diagnostic neuroradiology; JN, SE, FS, and UH: research with focus on clinical applications of image processing and predictive modelling). Perihematomal Edema Measurements Studies evaluating PHE have used several varying parameters and definitions to assess PHE [18]. These studies also exhibited variabilities in the timing (single time point vs. peak) and method of assessing PHE progression (absolute increase vs. percent change vs. rate or speed). The various parameters used in this study were based on markers described for NCCT admission imaging (Figure 1): • PHE ABSOLUTE: Refers to the absolute perihematomal edema volume on admission (PHE) [15,19]. • PHE RELATIVE: Relative perihematomal edema volume (rPHE) refers to the ratio of the absolute perihematomal edema volume (PHE) to the absolute ICH volume [3,11,20,21]. • Edema Extension Distance: Edema extension distance (EED) refers to the difference between the radius of a sphere whose volume equals the combined PHE and ICH volume and the radius of a sphere whose volume equals the ICH volume alone. In brief, if V_ICH denotes the ICH volume and V_PHE the absolute PHE volume, EED is the difference between the radius of a sphere of volume V_ICH + V_PHE and the radius of a sphere of volume V_ICH [18,22].
This can be calculated using the following formula: EED = (3(V_ICH + V_PHE)/(4π))^(1/3) − (3V_ICH/(4π))^(1/3). Clinical Outcomes The primary outcome was poor outcome, defined by the modified Rankin scale (mRS) at 90 days. The mRS 90 was analyzed as a dichotomous variable, as this has been the standard in ICH clinical trials, and defined as mRS 0-3 being a good and mRS 4-6 a poor outcome [23][24][25]. Raters (JN and SE) were trained in the use of the mRS 90 and blinded to each other, imaging, and non-relevant clinical data. Statistical Methods Data were tested for normality and homogeneity of variance using histogram plots and a Shapiro-Wilk test. Descriptive statistics are presented as counts (percentages (%)) for categorical variables, mean (standard deviation (SD)) for continuous normally distributed variables, and medians (interquartile range (IQR)) for non-normal continuous variables. Unadjusted differences in baseline and imaging characteristics (NON-OAC-ICH versus OAC-ICH) were evaluated using the Fisher exact test (2-tailed), Kruskal-Wallis test, or unpaired t test, as appropriate. A statistically significant difference was accepted at a p-value of less than 0.05. Exploratory Analyses An exploratory approach was conducted for patients with a documented type of OAC medication. A one-way analysis of variance (ANOVA) was conducted to evaluate differences in PHE formation in patients within the three groups: (1) NON-OAC-ICH, (2) OAC-ICH with NOAC, and (3) OAC-ICH with VKA. The assumption of homogeneity of variances was tested using Levene's test. Post hoc comparisons using the Tukey HSD test were performed in case of a significant effect for the overall ANOVA. A statistically significant difference was accepted at a p-value less than 0.05. Regression Analyses We performed univariable and multivariable logistic regression analyses to identify covariates associated with poor outcome (mRS 4-6 at 90 days) in patients with OAC-ICH and NON-OAC-ICH. Multivariable model building proceeded as follows: first, covariates with p < 0.1 in univariable analyses were included; second, universal confounders (age and sex) were force-entered; third, covariates with p > 0.1 were backward eliminated; fourth, collinear covariates, as expressed by a variance inflation factor (VIF) > 3, were identified, and 1 covariate was removed from the model [26]. Specific location (lobar versus deep) was included as a covariate. For all statistical analyses, a 2-sided p of 0.05 was set as the significance threshold, and 95% CIs (confidence intervals) were reported for all odds ratios. Statistical analyses were performed using the IBM SPSS Statistics 21 software package (IBM Corporation, Armonk, NY, USA). Missing data regarding basic characteristics, neuroimaging, or outcome led to exclusion of patients (Figure 2). Clinical Parameters Our analysis included NCCT images of 811 patients with acute primary ICH who fulfilled the inclusion criteria. A total of 212 (26.14%) patients were grouped as OAC-ICH and 599 (73.86%) as NON-OAC-ICH. A detailed patient flowchart is given in Figure 2. Patients with OAC-ICH were significantly older, with a median age of 77 years (IQR 70-82) compared to a median age of 70 years (IQR 58-77) in patients with NON-OAC-ICH, p-value < 0.001. There were no differences in sex between both groups, p-value 0.61. Patients with OAC-ICH had a higher percentage of arterial hypertension, with 83.49% in 177 patients, and diabetes mellitus, with 18.87% in 40 patients, compared to patients with NON-OAC-ICH, p-value 0.008 and 0.048, respectively.
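Stepping back to the EED definition given earlier in this section, here is a minimal sketch of deriving volumes from binary segmentation masks and computing the EED from those volumes. The default voxel size and the example volumes are assumptions for illustration, not study parameters.

```python
import numpy as np

def volume_ml(mask, voxel_mm3=0.45 * 0.45 * 5.0):
    """Volume of a binary segmentation mask in mL (1 mL = 1000 mm^3).
    Default voxel size assumes ~0.45 mm in-plane resolution and 5 mm slices."""
    return mask.sum() * voxel_mm3 / 1000.0

def eed_cm(v_ich_ml, v_phe_ml):
    """Edema extension distance: radius of a sphere of volume V_ICH + V_PHE
    minus the radius of a sphere of volume V_ICH. Since 1 mL = 1 cm^3, radii
    computed from mL volumes are already in cm."""
    r_total = (3.0 * (v_ich_ml + v_phe_ml) / (4.0 * np.pi)) ** (1.0 / 3.0)
    r_ich = (3.0 * v_ich_ml / (4.0 * np.pi)) ** (1.0 / 3.0)
    return r_total - r_ich

# Illustrative numbers only: ICH 25 mL, PHE 30 mL.
print(round(eed_cm(25.0, 30.0), 2))  # -> ~0.55
```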
Use of antiplatelet medications was higher in patients with NON-OAC-ICH, with 26.7% in 160 patients compared to 14.15% in 30 patients with OAC-ICH, p-value < 0.001. Median time from symptom onset to imaging was similar between the two groups, with 2.74 (IQR 1.56-12.65) h in patients with OAC-ICH compared to 2.81 (IQR 1.28-13.57) h in patients with NON-OAC-ICH, p-value 0.592. Both GCS and NIHSS on admission were not statistically different between the two groups, with a GCS of 11 and NIHSS of 6 (IQR 1-14) in OAC-ICH, and a GCS of 12 (IQR 5-14) and NIHSS of 8 (IQR 1-15) in NON-OAC-ICH, p-value 0.315 and 0.44, respectively. Surgical Procedures and Clinical Outcome Surgical procedures, including supratentorial or suboccipital craniectomy or EVD placement, did not differ between the two groups (details displayed in Table 1). Radiological Parameters Absolute ICH volume and absolute PHE volume did not differ between the two groups, p-value 0.767 and 0.091, respectively. Both rPHE and EED were significantly lower in patients with OAC-ICH, with a mean rPHE of 1.08 (SD 2.09) and a mean EED of 4.14 cm (SD 2.22) compared to a mean rPHE of 1.25 (SD 2.5) and a mean EED of 4.72 cm (SD 2.58) in patients with NON-OAC-ICH, p-value 0.001 and 0.007, respectively. The frequency of intraventricular hemorrhage (IVH) was similar, with 44.81% in OAC-ICH and 46.58% in NON-OAC-ICH, p-value 0.799. ICH location in the basal ganglia was more frequent in patients with NON-OAC-ICH, with 39.7% compared to 32.1% in OAC-ICH, p-value 0.05. There were no statistically significant differences regarding the other ICH locations (Table 1). Exploratory Analyses In an exploratory analysis, differences in PHE formation depending on the type of oral anticoagulation medication were analyzed. From the 811 included patients, the type of OAC medication was documented for NOAC and VKA in 174 cases (29.05%). NOAC included dabigatran, rivaroxaban, and apixaban. VKA included phenprocoumon as their pharmaceutical agent. A one-way analysis of variance was conducted to evaluate differences in PHE formation in patients within the following three groups: (1) NON-OAC-ICH (n = 599), (2) OAC-ICH with NOAC (n = 93), and (3) OAC-ICH with VKA (n = 81). Results for independent variables are displayed in Table 2. Since the ANOVA was significant for EED at a p-value < 0.05 level for the three groups, a post hoc test was computed. The Tukey HSD test indicated that the mean score for EED in OAC-ICH with NOAC was significantly lower than the EED in NON-OAC-ICH (Mean 0.65, SD 0.28, 95% CI 0.01-1.31, p-value 0.05). There was no statistical difference for EED between OAC-ICH with NOAC and VKA (Mean 0.27, SD 0.38, 95% CI −0.62-1.17, p-value 0.76). Table 2. Baseline demographic and clinical characteristics by patients with non-oral anticoagulation associated intracerebral hemorrhage (NON-OAC-ICH) and oral anticoagulation associated intracerebral hemorrhage with non-vitamin K antagonist oral anticoagulation (NOAC-ICH) and vitamin K antagonist associated intracerebral hemorrhage (VKA-ICH). Legend: ICH indicates intracerebral hemorrhage; EED, edema extension distance; PHE, perihematomal edema; rPHE, relative perihematomal edema; NOAC-ICH, non-vitamin K dependent oral anticoagulation associated intracerebral hemorrhage; NON-OAC-ICH, non-oral anticoagulation related intracerebral hemorrhage; SD, standard deviation; and VKA-ICH, vitamin K antagonists associated intracerebral hemorrhage.
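The exploratory three-group comparison above (one-way ANOVA with Levene's test for homogeneity of variances and Tukey HSD post hoc) can be sketched as follows; the EED values are simulated with invented group means and are not the study data.

```python
import numpy as np
from scipy.stats import f_oneway, levene
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Simulated EED values (cm) for the three groups; means/SDs are invented.
eed_non_oac = rng.normal(4.7, 2.6, 599)
eed_noac = rng.normal(4.0, 2.2, 93)
eed_vka = rng.normal(4.3, 2.3, 81)

print("Levene:", levene(eed_non_oac, eed_noac, eed_vka))   # variance homogeneity
print("ANOVA:", f_oneway(eed_non_oac, eed_noac, eed_vka))  # overall group effect

# Tukey HSD post hoc comparisons, run only if the overall ANOVA is significant.
values = np.concatenate([eed_non_oac, eed_noac, eed_vka])
groups = ["NON-OAC"] * 599 + ["NOAC"] * 93 + ["VKA"] * 81
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```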
Please note: the number of patients (n = 599) displayed refers only to those with documented type of oral anticoagulation (n = 212 out of n = 811 were excluded). Perihematomal Edema Based Clinical Outcome Prediction Univariate and multivariate logistic regression analyses were performed in a separate approach for OAC-ICH and NON-OAC-ICH to analyze predictors of poor outcome (mRS 4-6 at 90 days), with special regard to individual prognostic effects of PHE. Independent variables included age; sex; arterial hypertension; diabetes mellitus; antiplatelet medication; time from symptom onset to imaging; both GCS and NIHSS on admission; both absolute ICH and PHE volume; rPHE; EED; IVH; ICH location; and craniectomy (Supplementary Tables S1 and S2). (1) The remaining independent variables in the multivariate model for OAC-ICH were NIHSS and GCS (Table 3). Lower GCS significantly increased the likelihood of poor outcome (odds ratio (OR) 0.76 for 1-point increase; 95% CI 0.63-0.91; p-value 0.003), and a higher NIHSS increased the likelihood of poor outcome (per 1-point increase; OR 1.62; 95% CI 1.02-1.22; p-value 0.10). (2) The remaining independent variables in the multivariate model for NON-OAC-ICH were sex, GCS, NIHSS, PHE volume, EED, IVH, and ICH location (Table 4). Female sex significantly decreased the likelihood of poor outcome (odds ratio (OR) 0.52; 95% CI 0.283-0.955; p-value 0.035). Lower GCS significantly increased the likelihood of poor outcome (OR 0.76 for 1-point increase; 95% CI 0.68-0.85; p-value < 0.0001), and a higher NIHSS significantly increased the likelihood of poor outcome (per 1-point increase; OR 1.14; 95% CI 1.08-1.20; p-value < 0.001). Higher absolute PHE volume significantly increased the likelihood of poor outcome (OR 1.01 for 1 mL increase; 95% CI 1.00-1.02; p-value 0.027). Higher EED did not increase the likelihood of poor outcome (p-value 0.843). Presence of IVH significantly increased the likelihood of poor outcome (OR 2.8; 95% CI 1.44-5.45; p-value 0.002), and presence of supratentorial ICH significantly decreased the likelihood of poor outcome (OR 0.38; 95% CI 0.20-0.72; p-value 0.003). ICH volume was excluded from the multivariate regression model in both groups as a strong collinear covariate of PHE volume, with a VIF > 5. Discussion In this study, we analyzed differences in early PHE formation in patients with OAC-ICH in comparison with NON-OAC-ICH, and in particular related to different types of OAC treatment options. The main finding of our study is that quantitative markers of early PHE are significantly lower in patients with OAC-ICH compared to NON-OAC-ICH. To further elucidate potential differences in PHE formation within different types of OAC, we analyzed quantitative markers of PHE in NOAC- and VKA-associated OAC-ICH in comparison with NON-OAC-ICH. A conclusive secondary finding of this study was that early PHE volumes were not independently associated with poor outcome in OAC-ICH, but were significantly associated with poor outcome in patients with NON-OAC-ICH. To our knowledge, this is the first study to add findings on PHE formation in NOAC- and VKA-associated ICH. In our study, mean EED levels were significantly lower in NOAC-ICH compared to NON-OAC-ICH. However, the actual difference in the mean scores between groups was quite small based on Cohen's conventions for interpreting effect size [27], in part because the type of oral anticoagulation medication was not documented in all cases.
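A minimal sketch of the multivariable model-building procedure described in the Methods above (collinearity screening with VIF, multivariable logistic fit, odds ratios with 95% CIs). The data frame, coefficients, and variable names below are synthetic assumptions, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "age": rng.normal(72, 12, n),
    "gcs": rng.integers(3, 16, n).astype(float),
    "phe_ml": rng.gamma(2.0, 15.0, n),
})
# Synthetic outcome: poor mRS more likely with low GCS and large PHE volume.
lin = 0.05 * df["phe_ml"] - 0.30 * df["gcs"] + 1.0
df["poor_mrs"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["age", "gcs", "phe_ml"]])
# Collinearity screen: covariates with VIF above the chosen cut-off are dropped
# (in the study, ICH volume failed this check against PHE volume).
vifs = {c: variance_inflation_factor(X.values, i)
        for i, c in enumerate(X.columns) if c != "const"}
print({k: round(v, 2) for k, v in vifs.items()})

fit = sm.Logit(df["poor_mrs"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios per one-unit increase
print(np.exp(fit.conf_int()))  # 95% CIs for the odds ratios
```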
Nevertheless, it is plausible that, in a larger cohort, other parameters of PHE formation, i.e., rPHE, may also have differed between the groups. These results may be attributed to the biochemical properties of the NOACs and their differences from the mechanism of action of VKAs. Thrombin in particular has been identified as a potent stimulator and key link for early PHE formation and might be more strongly inhibited by NOACs than by VKAs [28][29][30], hence contributing to a stronger attenuation of PHE formation. Each of the proposed PHE parameters in our study has advantages and limitations. PHE is strongly related to the size of the underlying ICH and, therefore, alone may not account for this intermixed relationship [18]. rPHE can be disproportionally large in a smaller ICH, which may render it unsuitable for examining the relationship with outcome in some cases [18,20,31]. EED is independent of the concomitant influence of ICH volume on PHE and therefore has major advantages from a clinical trial perspective [22,32]. In line with this, EED may be capable of discerning the relationship of PHE with OAC- and NON-OAC-ICH, especially in those treated with NOACs. The use of NOACs is expected to increase as randomized controlled trials provide solid evidence for the favorable risk-benefit profile of NOACs compared to VKAs [9,33,34]. In addition, the expected approval of different reversal agents will further increase their use [34,35]. With lower levels of EED in NOACs compared to NON-OAC-ICH, their use might further increase the safety of OAC, as the impairment caused by edema might be smaller. A better understanding of the pathophysiology of NOAC- and VKA-related ICH may allow us to further adapt treatment regimens. In this sense, patients with NON-OAC-ICH might benefit from innovative and promising medications targeting early PHE, whereas patients with OAC-ICH might benefit primarily from potential treatments for the cessation or control of hematoma volume and expansion [1,3]. Such ongoing research into new, alternative treatment regimes in ICH remains crucial, as current treatment options have so far failed to provide definitive improvements in functional outcome and mortality. A growing number of studies have therefore addressed PHE formation as a potential new therapy target. Animal experimental studies demonstrated that fingolimod could reduce PHE formation, cell apoptosis, and cerebral atrophy following ICH [36]. Similar results were observed in a clinical study [37]. Likewise, dexamethasone was shown to reduce cerebral cell apoptosis and inhibit brain inflammation [38]. However, discrimination between patients with OAC and non-OAC was omitted and may have been a limiting factor in previous randomized controlled trials (RCT) studying the treatment effects of antiedematous drugs in ICH: an RCT with 128 supratentorial ICH patients observed that mannitol failed to significantly improve outcome at 30 days or to decrease mortality [39]. Our study adds preliminary evidence that patients with NOACs may be less affected by early PHE formation. This conclusion not only offers interesting pathophysiological insights into the potentially different impact on thrombin formation of new OACs in comparison to VKA-related ICH, but also underlines the importance of etiology-specific treatment strategies in ICH. The conclusions of our study are clearly limited to the early phase of PHE formation.
Nevertheless, medical treatment for secondary inflammation and reducing PHE has only a short effective time window and is most effective if instituted early [40], as clinical studies in humans have suggested rapid PHE growth within the first 24 h following ICH [10]. Future clinical studies should therefore also elucidate the dynamics of PHE over time to tailor etiology-specific treatment strategies in patients with ICH. Such treatment strategies could be stratified in future RCTs and could include early antidote treatment for OAC and anti-ICH-expansion treatments in patients with OAC, and, on the other hand, the inclusion of antiedematous drugs in patients with NON-OAC-ICH at an early stage of diagnosis, monitored by quantitative assessment of PHE via EED as a surrogate marker. As ICH volumes in OAC-ICH tend to be initially larger and expand more extensively, we assume that the detrimental effects of a larger ICH volume in OAC-ICH by far outweigh the potential protective effects of thrombin deficiency. Figure 1 shows an illustrative example of a patient with OAC-ICH and fluid levels, a marker for coagulopathy and hematoma expansion (HE) [41]. Follow-up CT in this patient revealed a massive hematoma expansion with IVH, although acute PHE on the admission NCCT was comparably low. Although OAC has been described as a clear independent risk factor for major bleeding events, the distribution of poor outcome and mortality did not differ between the two cohorts in our study. In clinical trials of novel oral anticoagulants, namely the oral direct thrombin inhibitors and factor Xa inhibitors, major bleeding rates were generally low and comparable to those with low-molecular-weight heparin (LMWH) or VKA-related ICH [42,43]. A meta-analysis examined the safety and efficacy of novel oral anticoagulants compared with warfarin for the prevention of stroke and systemic embolism in atrial fibrillation, and ICH was reduced in patients receiving novel oral anticoagulants (RR 0.49, 95% CI 0.36-0.66) [44]. In addition, six good-quality RCTs compared NOACs (2 DTI studies, 4 FXa inhibitor studies) with warfarin. In patients with atrial fibrillation, NOACs decreased all-cause mortality, including fatal hemorrhages [43]. In our study cohort, a large proportion of patients were under medication with NOACs (n = 93) in comparison to patients treated with VKA (Marcumar; n = 81). These findings may explain the non-significant differences in poor outcome and mortality between patients with NOAC-ICH and OAC-ICH. The strengths of our study were the large sample size and the use of two multicenter cohorts. Nevertheless, our study has several limitations. First, we lacked clinical data on the therapeutic options used for OAC reversal (i.e., NOAC antidotes) and hemostatic therapy [39]. Patients with OAC-ICH were significantly older and displayed a higher percentage of arterial hypertension, both of which are known to be common in this patient subset [1,3]. These factors might have contributed to poor clinical outcome and limit the generalizability of our results. However, arterial hypertension and age were not significant variables in the multivariate regression analysis. Patients with a symptom onset greater than three hours were included, arguably mitigating the effect OAC had on solely the early stage of PHE formation, yet median time from symptom onset was below three hours in both groups [13]. We assessed PHE volume at baseline, from the first CT scan.
As follow-up imaging was not available for all patients, the measurement of change in PHE volume over time was not possible. Rate of change in PHE volume could be associated with functional outcome independently of ICH growth rate, and our present study was unable to determine this. Conclusions Quantitative markers of early PHE (rPHE and EED) were lower in patients with OAC-ICH compared to those with NON-OAC-ICH, with significantly lower levels of EED in NOACs compared to NON-OAC-ICH. Increase of early PHE volume did not increase the likelihood of poor outcome in OAC-ICH but was independently associated with poor outcome in NON-OAC-ICH. The results underline the importance of etiology-specific treatment strategies. Further prospective studies are needed. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10112234/s1, Table S1: Univariate analysis of predictors of poor outcome (modified Rankin Scale 4-6) in patients with oral anticoagulation associated intracerebral hemorrhage (OAC-ICH) at discharge, Table S2: Univariate analysis of predictors of poor outcome (modified Rankin Scale 4-6) in patients with non-oral anticoagulation associated intracerebral hemorrhage (NON-OAC-ICH) at discharge. Informed Consent Statement: Written informed consent was waived by the institutional review boards due to the retrospective nature of the study. Data Availability Statement: The data that support the findings of this study are available from the corresponding authors upon reasonable request and in accordance with the institution's data security regulations.
Association of litter size with sex hormones and body measurements of Iraqi Awassi ewes Birth type is one of the most important reproductive traits in sheep production, and it is influenced by the ovulation rate, hormones, and fecundity genes. Ewe reproductive performance is known to be affected by body measurements. Therefore, this study aimed to investigate a possible association of litter size with sex hormones and body measurements in Iraqi Awassi ewes. A total of 224 healthy, sexually mature, non-pregnant, non-lactating ewes (124 ewes with single births and 100 ewes with twin births), aged between 2.5 and 5 years, were included in the present study. Blood samples were collected from the sheep, and serum was then separated from the blood to determine the concentrations of estradiol and progesterone. Body measurements and live body weight were determined for each ewe. The results of this study indicate that the live body weight of Awassi ewes was significantly (P<0.05) influenced by litter size. The association analysis of litter size with body measurements indicated that chest girth, neck length, and height at the hip differed with the type of birth of Awassi ewes. Significant positive correlations were recorded between litter size and live body weight (r=0.698, P=0.001), height at shoulder (r=0.242, P=0.031), and height at hip (r=0.309, P=0.005). In conclusion, the litter size of Awassi ewes is associated with other phenotypic traits. Ewes with heavier live body weight and larger body measurements are more likely to bear more lambs than ewes with a single birth. Introduction. Sheep are among the most important livestock animals with an impact on people's economic wellbeing (Farrag, 2019). The economic characterization of sheep is necessary for livestock development and breeding programs (Abd-Allah et al., 2019). Awassi sheep play a significant role in the economy because of their high performance in lamb and milk production (Galal et al., 2008).
In all sheep production systems, reproductive traits have high economic value and are the most important traits (Yavarifard et al., 2015). Litter size (LS) is the most important economic trait in sheep production (Janssens et al., 2004; Notter, 2008), and it is influenced by the ovulation rate, several hormones, and the fecundity genes (Ekiz et al., 2005). Litter size is directly related to the ovulation rate and to the number of oocytes released from follicles during ovulation (Williams & Stanley, 2008). The oocyte is surrounded by granulosa cells and theca cells, which are essential for ovulation through secretion of the hormones estrogen and progesterone, and by a non-cellular material layer called the zona pellucida (Kumer et al., 2017). The Awassi breed is a mono-ovulatory breed (Iber & De Geyter, 2013) with a very low incidence of twinning (Al-Sa'aidi et al., 2018; Kridli et al., 2018), compared to rodents and pigs, which have high ovulation rates (Montgomery et al., 2001). Moreover, litter size differs between sheep breeds; it ranges from single births in Texel and Suffolk to twin births in the prolific Booroola Merino breed (Souza et al., 2001). Besides the variation between breeds, several factors such as age, season, management, nutrition, genetic effects, body condition score, and environmental conditions affect litter size in sheep (Kumer et al., 2017). Youssef et al. (2014) reported a relationship among body length, age, body weight, and other body measurements with litter size for Damascus and Zaraibi goats. Based on the above considerations, no research has yet been reported on the association of litter size with sex hormones and body measurements in Iraqi Awassi ewes. Thus, the current study aimed to evaluate the association of litter size with sex hormones and body measurements in Iraqi Awassi ewes. Materials and Methods. Blood examination and animals. The present study was conducted according to the international recommendations for the care and use of animals, under Al-Qasim Green University's approval (Agri, No. 015,3,12), at the College of Agriculture / Department of Animal Resources, for the period from January 2019 to August 2019, on Awassi ewes. A total of 224 healthy, sexually mature, non-pregnant, non-lactating ewes (124 ewes with single births and 100 ewes with twin births), aged between 2.5 and 5 years, were included in this study. Animals were collected randomly from two sheep-raising stations (Babylon and Karbala). They were fed ad libitum on seasonal grass, with concentrate feed at about 2.5% of their live body weight daily, comprising a mixture of barley (59%), bran (40%), and salt (1%), plus fresh water. Blood samples were collected from the sheep using vacutainer tubes with EDTA. Serum was separated from the blood samples by centrifugation at 3,000 rpm for 15 min at room temperature and was kept frozen at -20°C until the hormonal assay. Estradiol and progesterone were measured using Bioassay Technology Laboratory ELISA kits (sheep estradiol ELISA kit, catalogue number E0047Sh; sheep progesterone ELISA kit, catalogue number E0015Sh). The concentrations of estradiol and progesterone in the serum were determined using the standard curve. Body measurements and live body weight.
Live body weight (kg) of Awassi ewes was recorded in the morning, before the animals went out to graze, using a suspended spring balance. Body measurements were obtained according to Abd-Allah et al. (2019) using a measuring tape calibrated in centimetres (cm), including: body length (BL), the distance from the point of the shoulder to the base of the tail; head length (HL), the distance from the upper lip of the animal to the nodule of the horn; neck length (NL), the distance from the lower jaw to the point of the shoulder; height at shoulder, measured vertically from the thoracic vertebrae to the ground; height at the hip, measured between the hip and the sole of the hoof; chest girth, the circumference of the chest; and chest width, measured as the width of the rib cage between the forelegs. Statistical analyses. The significance of the effect of litter size on the various parameters studied was assessed using Statistical Package for the Social Sciences (SPSS) software, version 23.0, with the general linear model: Y_ijkl = μ + L_i + P_j + A_k + e_ijkl, where Y_ijkl = phenotypic trait, μ = overall mean, L_i = fixed effect of the i-th litter size (i = single birth, twin birth), P_j = fixed effect of the j-th parity (j = 1, 2, 3, 4), A_k = fixed effect of the k-th age group (2.5-3.5, >3.5-5), and e_ijkl = the random error associated with the Y_ijkl observation, assumed to be NID (0, σ²e). Means were compared using the Tukey-Kramer test at a significance level of P<0.05. Preliminary statistical analysis indicated that factor interactions, season, and station did not have a significant effect on the phenotypic traits, so they were not included in the general linear model. Results and Discussion. Association analysis of litter size with live body weight and sex hormones of Awassi ewes. The association analysis of birth type reflects the physiological changes observed in this study. Table 1 shows the least-squares means of sex hormones and live body weight in Awassi ewes. The live body weight of Awassi ewes was significantly (P<0.05) influenced by litter size, while no statistically significant difference was observed for the sex hormones (P>0.05) (Table 1). The results indicate significant differences (P<0.05) in live body weight between Awassi ewes with single and twin births. Sheep live body weight has been reported to influence reproductive performance and litter size (Akhtar et al., 2012). Ewe ovulation rate and litter size are affected by the live body weight of the sheep, with heavier sheep more likely to bear more lambs than ewes with a single birth (Pettigrew et al., 2019). Increasing the pre-mating weight of ewes could increase the pregnancy rate and the incidence of twin births (Aktas & Dogan, 2014). In agreement, Aktas et al. (2015) reported that the twinning ratio increased with the live body weight of the ewes. No statistically significant difference was observed for the sex hormones (P>0.05) (Table 1). This is consistent with Pang et al. (2009), who reported non-significant differences in the plasma profiles of progesterone and estradiol in highly prolific Huanghuai goats compared with non-prolific goats across the oestrous cycle and after ovariectomy.
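For readers who wish to reproduce this kind of analysis outside SPSS, the fixed-effects model above maps directly onto an ordinary least-squares fit with categorical factors. The sketch below uses Python with statsmodels; the file name and the column names ('weight', 'litter', 'parity', 'age_group') are hypothetical placeholders, not taken from the paper.

```python
# A minimal sketch of the fixed-effects model described above, assuming a
# hypothetical CSV with columns 'weight' (trait), 'litter', 'parity', 'age_group'.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("ewes.csv")  # hypothetical data file

# Y_ijkl = mu + litter_i + parity_j + age_k + e_ijkl (no interactions, as in the paper)
model = smf.ols("weight ~ C(litter) + C(parity) + C(age_group)", data=df).fit()
print(model.summary())

# Tukey-Kramer-style pairwise comparison of the litter-size groups at alpha = 0.05
print(pairwise_tukeyhsd(df["weight"], df["litter"], alpha=0.05))
```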
Association analysis of litter size with body measurements of Awassi ewes. The association analysis of litter size with body measurements indicated that neck length, height at hip, and chest girth differed with the type of birth of Awassi ewes (Table 2). Body dimensions supply information about the reproductive traits of ewes. Among the most important body dimensions are chest girth, shoulder and hip lengths and widths, body length, and hip height (Abdullah & Tabbaa, 2011). Ewe reproductive performance is known to be influenced by body measurements (Corner-Thomas et al., 2015). It is likely that ewes with larger body measurements are better able to manage multiple births than ewes with smaller body measurements (Kenyon et al., 2012). Correlation analysis of litter size with other variables of Awassi ewes. The correlation coefficients between litter size and the phenotypic traits of the Awassi ewes are shown in Table 3. Highly significant positive correlations (P<0.01) were recorded between litter size and live body weight (r=0.698, P=0.001), height at shoulder (r=0.242, P=0.031), and height at hip (r=0.309, P=0.005). The results indicate positive and significant correlations (P<0.05) between litter size and live body weight and body measurements. The present study is in line with Yavarifard et al. (2015), who reported a positive correlation between birth type and body dimensions in Mehraban sheep. Similarly, Moraes et al. (2016) reported that the maternal body condition score of Corriedale ewes was positively correlated with reproductive traits during breeding (r = 0.37, p < 0.05). Phenotypic variations (heart girth, body weight, paunch girth, and other dimensions) were significantly higher (P<0.05) in goats bearing multiple births than in goats bearing a single birth. This variation may be used as a useful tool for discriminating between goats carrying multiple births and goats carrying a single birth, and thus for achieving greater economic benefits (Pan et al., 2015). In conclusion, the litter size of Awassi ewes is associated with other phenotypic traits. Ewes with heavier live body weight and larger body measurements are more likely to bear more lambs than ewes with a single birth.
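The correlation analysis reported in Table 3 can be reproduced in the same spirit. A minimal, self-contained sketch follows, with litter size coded numerically (1 = single, 2 = twin); the file name and column names are again assumptions for illustration only.

```python
# A companion sketch for the Pearson correlations in Table 3, assuming a
# hypothetical CSV with litter size coded numerically (1 = single, 2 = twin).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("ewes.csv")  # hypothetical data file
for trait in ["weight", "height_shoulder", "height_hip"]:  # assumed column names
    r, p = pearsonr(df["litter"], df[trait])
    print(f"litter size vs {trait}: r = {r:.3f}, P = {p:.3f}")
```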
Prevalence of Bovine Immunodeficiency Virus Infection in Buffaloes in East Azerbaijan, Northwestern Iran Bovine immunodeficiency virus (BIV) has a worldwide distribution, but its prevalence in different regions of Iran is unknown. In this study, the presence of BIV infection was detected for the first time by PCR in Iranian water buffalo in East Azerbaijan. Blood samples were taken from 83 randomly selected buffaloes slaughtered at the Tabriz industrial slaughterhouse from June to October 2012. All of the animals were clinically examined before sampling. The Viral Gene-spin™ Viral DNA/RNA Extraction Kit was used to extract the DNA, and PCR was performed on the extracted DNA using oligonucleotide primers specific for the gag gene region of BIV. The prevalence of BIV in buffaloes was 2.4% (2 of 83), which is lower than the prevalence of BIV in Pakistan (10.3%) and India (19%). The low prevalence observed in this study may be due to our small sample size. BIV was originally isolated from an 8-year-old dairy cow with persistent lymphocytosis and progressive weakness and wasting, and was designated R29 in 1972 [6]. After BIV's recognition as a lentivirus in the late 1980s, it was shown that BIV infections occur widely, causing lifelong and generally subclinical disease [7]. BIV infections have been variably associated with alterations in animal production, weight loss, secondary diseases, decreased milk production, and an increased incidence of encephalitis [8][9][10]. Whether BIV is transmitted via the uterus, placenta, colostrum, or milk is still under investigation, but proviral DNA of BIV has also been detected in bull semen [11,12], and it has been shown that the seroprevalence of BIV infection increases with the age of animals in the same dairy herd, suggesting that BIV may be transmitted through natural or artificial insemination and/or through blood-contaminated instruments or blood-sucking insects [13][14][15]. Although BIV induces dysfunction in monocytes and neutrophils, BIV-inoculated calves did not exhibit severe clinical symptoms, so the pathogenesis of BIV remains unclear [16][17][18]. The clinical significance of BIV infection can depend on the strain of BIV, the breed of cattle, and environmental stressors [19]. The buffalo is a native animal of Iran, and East Azerbaijan province, with a total population of about 92,620 buffaloes, is one of the most important regions of buffalo farming. As there are no data on BIV in Iranian buffaloes, we investigated the prevalence of this infection in East Azerbaijan. Blood Sampling A total of 83 whole peripheral blood samples were collected from randomly selected Asian water buffaloes (Bubalus bubalis) slaughtered at the Tabriz industrial slaughterhouse in East Azerbaijan province of Iran from June to October 2012 (31 animals were female, the rest were male, and all were older than 3 years). Ethylenediaminetetraacetic acid (EDTA) was used in the sampling tubes as an anticoagulant. For the preparation of DNA, aliquots of whole blood (500 µl) were added to 1 ml red blood cell lysis buffer (10 mM Tris-HCl, pH 7.6; 5 mM MgCl2; 100 mM NaCl; 0.75% Triton X-100), mixed, and incubated at room temperature for 2 minutes.
The tubes were then centrifuged at 12,000 g for 20 seconds and the supernatant was discarded. The pellets were frozen at -70°C until they were examined. DNA Extraction For DNA extraction, the Viral Gene-spin™ Viral DNA/RNA Extraction Kit (Intron Biotechnology, Inc.) was used. First, the pellet was transferred to a 1.5 ml micro-centrifuge tube, resuspended in 250 µl Viral Gene-spin™ lysis buffer, and incubated at 80°C for 10 min. It was then mixed by vortexing for 15 seconds and incubated at room temperature (15-25°C) for 10 min; next, 350 µl of binding buffer was added and mixed completely by gentle vortexing. The solution was subsequently placed in a spin column in a provided 2 ml collection tube and centrifuged at 13,000 rpm for 1 min. The flow-through in the collection tube was discarded and the column was placed back in the same 2 ml collection tube; afterwards, 500 µl of Washing Buffer A was added to the column and centrifuged for 1 min at 13,000 rpm, the flow-through was discarded, and the spin column was placed back in the same 2 ml collection tube. Then 500 µl of Washing Buffer B was added to the column and centrifuged for 1 min at 13,000 rpm. The flow-through was discarded and the spin column was placed back in the same 2 ml collection tube and centrifuged for 1 min at 13,000 rpm. The column was then placed in an RNase-free 1.5 ml micro-centrifuge tube, 60 µl of elution buffer was added directly onto the membrane, incubated at room temperature for 1 min, and centrifuged for 1 min at 13,000 rpm. 2-5 µl of the eluted solution was used as a template for PCR. Polymerase Chain Reaction The DNA extracted from each blood sample was used as a template to detect BIV proviral DNA by PCR, as described previously by Nikbakht et al. (2010) [20]. The primers chosen are listed in Table 1. A 25 µl PCR reaction mix contained 0.5 µl of each primer, 0.2 µl of dNTPs, 0.5 µl Taq DNA polymerase, 1 µl MgCl2 (concentration fixed by titration tests), 2.5 µl reaction buffer, and 16 µl double-distilled water. Two µl of DNA template were used for all samples. The PCR was performed for 37 cycles in three stages: 1 cycle (94°C for 1 min, 51°C for 45 seconds, 72°C for 1 min), 35 cycles (94°C for 45 seconds, 51°C for 30 seconds, 72°C for 45 seconds), and 1 cycle (72°C for 5 min). 7 µl of each reaction mixture was mixed with 2 µl of loading buffer and run on a 1.2% agarose gel, then stained with ethidium bromide and visualized with a UV transilluminator. A 750 bp DNA marker was used to distinguish DNA fragment bands in the lanes. Plasmid DNA containing the complete BIV gag-coding region (pGEM7-gag) served as a positive control, and water was used as a negative control. DISCUSSION BIV is prevalent globally. The earliest report of the incidence of BIV in Louisiana cattle indicated a collective seroprevalence of 11% in four dairy herds [21], and in 1992 the seroprevalence of BIV in beef herds and dairy herds of Louisiana was reported as 40% and 60%, respectively [22]. In 1992, BIV seroprevalence was reported as 21% in a dairy herd in Colorado [23], and one study in Italy demonstrated that 5.8% of the dairy herds and 2.5% of the tested cows were seropositive for BIV [24]. A seroepidemiological investigation of BIV infection in two Mississippi dairy herds revealed a 38-58% incidence of BIV infection [15]. In Canada, Gonzalez et al. utilized a simple gene amplification technique to detect sequences from the 3 major BIV genes, gag, pol, and env,
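As a side note on the arithmetic of the protocol, the per-reaction volumes quoted above (0.5 + 0.5 + 0.2 + 0.5 + 1 + 2.5 + 16 µl, plus 2 µl template) sum to roughly the stated 25 µl reaction. A small bookkeeping sketch for scaling the mix to many samples is given below; the 10% pipetting surplus is a common laboratory convention and is not taken from the paper.

```python
# A quick bookkeeping sketch for scaling the 25 ul PCR mix described above to
# N reactions. Per-reaction volumes are taken from the text; the surplus is an
# assumed convention, not from the paper.
PER_REACTION_UL = {
    "forward primer": 0.5, "reverse primer": 0.5, "dNTPs": 0.2,
    "Taq polymerase": 0.5, "MgCl2": 1.0, "reaction buffer": 2.5,
    "water": 16.0,  # template DNA (2 ul) is added per tube, not to the master mix
}

def master_mix(n_samples: int, surplus: float = 0.10) -> dict:
    """Return component volumes (ul) for n_samples plus a pipetting surplus."""
    scale = n_samples * (1 + surplus)
    return {name: round(v * scale, 1) for name, v in PER_REACTION_UL.items()}

print(master_mix(83))  # one reaction per buffalo sample
```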
and indicated that the frequency of BIV infection is 5.5-12% among dairy cattle in Ontario [17]. A study in Argentina showed that 12% of the animals tested were positive for BIV [25]. In 1998, a study using the western blot method revealed that 11.7% of the cattle in Hokkaido had antibodies against BIV [14]. In another study [26], performed in 5 provinces of Cambodia, 544 cattle and 42 buffaloes were examined. This research indicated that 26.3% of the cattle and 16.7% of the buffaloes were positive for anti-BIV p26 antibodies. In another survey, performed on buffaloes in Pakistan using a recombinant nested PCR assay to detect proviral DNA, 10.3% of buffaloes and 15.8% of cattle were positive [11]. In a study in Zambia, 11.4% of a total of 262 sera were found positive for anti-BIV p26 antibodies [27]. In Korea, 35% and 33% of dairy and beef cattle were BIV seropositive, respectively [13]. A serological and molecular study showed that 12.3% of cattle were infected with BIV in Turkey [18]. A study in India showed that 22% of cattle and 19% of buffaloes were seropositive [28]. In Poland, 1,541 serum samples from Holstein cattle from 23 herds were analyzed using the ELISA method. The average BIV prevalence was 4.9% in individual cattle, while the percentage of herds harboring at least one seropositive animal was 82.6% [29]. Investigations of BIV in Iran revealed 20.3% positive cattle in Tehran province [20], and 60% and 30% positive cattle and sheep, respectively, in Chaharmahal Bakhtiary province [30]. To the best of our knowledge, this is the first report of BIV infection in Iranian water buffalo. The prevalence of BIV in buffaloes in this study was lower than the prevalence of BIV observed in Pakistan (10.3%) and India (19%). However, this result may be due to the small sample size. Our study adds to the available data on BIV and is the first report of this disease in buffaloes in Iran. Further studies are needed to determine the epidemiology of the infection in Iran.
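The headline figure of 2.4% (2 of 83) comes with considerable sampling uncertainty at this sample size. As a rough illustration (the interval itself is not reported in the paper), a 95% Wilson score interval can be computed as follows:

```python
# The study reports a prevalence of 2/83 (2.4%); a quick sketch of a 95%
# Wilson score interval, which behaves better than the normal approximation
# for small counts.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=2, nobs=83, alpha=0.05, method="wilson")
print(f"prevalence = {2/83:.1%}, 95% CI = [{low:.1%}, {high:.1%}]")
```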
Alternative surgical approaches for aggressive angiomyxoma at different sites in the pelvic cavity Aggressive angiomyxoma, a rare benign soft tissue neoplasm, predominantly occurs in the female pelvic peritoneum and perineum region during reproductive age. It is slow growing and locally infiltrative, carries a high risk of local recurrence, and is characterized by the neoplastic nature of its blood vessels. The standard treatment is surgery. We report three unusual aggressive angiomyxoma cases. The first case was a pedunculated mass of the left labium majus; the second, a left perineal mass that infiltrated into the paravesical area via the obturator foramen; and the third, a large mass in the retroperitoneal cavity, in which the growing aggressive angiomyxoma looked like lava expulsion in the pelvic area. After a thorough examination and full radiologic workup, we performed surgical excision in each patient via a different approach. Histopathologic findings were consistent with the diagnosis of aggressive angiomyxoma. To date, no relapse has been observed. Introduction Aggressive angiomyxoma (AAM), a rare soft tissue neoplasm, predominantly occurs in the female pelvic peritoneum and perineum region. These lesions are distinguished by their slow growth and frequent recurrences, and are characterized histologically by a predominantly myxoid stroma and abundant thin- and thick-walled vascular channels. Because of its rarity, AAM is often initially misdiagnosed as a gynecological malignancy. We also found that AAM can look like lava expulsion in the pelvic cavity. After a thorough examination and full radiologic workup, we performed surgical excision, using a different surgical approach for each growth site. Case reports 1. Case 1 A 49-year-old woman presented with a large vulvar mass. She had previously had a much smaller vulvar mass for a year and had been asymptomatic. The size of the mass suddenly increased within a month. Examination revealed a large polypoid pedunculated mass arising from the left labium majus, with ulceration and secondary infection (Fig. 1A-a, b). There were no palpable masses on examination of the abdominal and groin areas. Pelvic computed tomography revealed a bulging mass (19×19 cm), with mild non-homogeneous enhancement along the ventral margin of the pelvic wall and labium majus. No pelvic mass was observed; there was no free fluid or hydronephrosis. The mass was excised with a 1-cm lateral margin, with the patient under general anesthesia. No recurrence has been observed 4 years after excision. On pathological analysis, an aggressive vulvar angiomyxoma, 27×24 cm in size and 1,520 g in weight, was found (Fig. 1A-c, d). Immunohistochemical staining showed positivity for estrogen receptor, CD24, desmin, and S100. The patient has been regularly followed up for 4 years and is not undergoing adjuvant therapy. 2. Case 2 A 31-year-old woman complained of progressive abdominal distension and lower abdominal swelling. Examination revealed a distended, slightly tender abdomen. Ultrasonography revealed a huge, mixed-echogenic mass resembling a fluid collection in the whole pelvic cavity. We suspected hemoperitoneum. Computed tomography (CT) scanning showed an 18×15×8-cm high-density cystic mass and fluid collection in the pelvis with diffuse peritoneal thickening (Fig. 2A). The patient had a normal menstrual history, no operation history, and had never been diagnosed with hepatitis or tuberculosis. Blood serum tests were negative for hepatitis B and C viruses.
Laparoscopic surgery was performed under general anesthesia. However, after finding the huge solid mass, we converted to laparotomy to excise the tumor. The tumor extended from the aortic bifurcation to the paravesical and pararectal spaces in the retroperitoneal area (Fig. 1B-a-c). We dissected the mass while preserving the uterus, ovaries, and rectosigmoid colon. Intraoperative frozen section diagnosis was that of a mesenchymal tumor. Specimen histopathology showed capillaries and cavernous vascular spaces filled with blood, and stellate spindle cell proliferation in the interstitial tissue (Fig. 2C). Immunohistochemical staining showed positivity for CD10, smooth muscle actin, desmin, and CD34. These findings were consistent with the diagnosis of AAM. The patient has had no recurrence 12 months after surgery and has been regularly followed up. 3. Case 3 A 36-year-old woman presented with a progressive perineal mass on the left buttock. She had undergone an excisional biopsy for a suspected fibroma. Physical examination revealed swelling and a dense lesion on the left buttock (Fig. 1C-a, b). Rectal examination revealed that the mass was not related to the rectum. Magnetic resonance imaging (MRI) showed a 15×10×6-cm irregularly shaped enhancing mass in the left perineum, extending to the left retroperitoneum via the obturator foramen (Fig. 2B). In the lithotomy position under general anesthesia, the tumor was excised via the perineal approach (Fig. 1C-c), and the absence of a remaining mass was confirmed via abdominal laparotomy. Histopathology of the specimen also showed capillaries and cavernous vascular spaces filled with blood, and stellate spindle cell proliferation in the interstitial tissue. Immunohistochemical staining showed positivity for CD10, smooth muscle actin, desmin, and CD34. The patient has had no recurrence 4 months after surgery and has been regularly followed up. Discussion AAM is a slow-growing mesenchymal neoplasm with a high local recurrence rate but a low tendency to metastasize. It was first described as a distinct variant of myxoid neoplasms in 1983 [1]. These tumors mostly occur in the 4th decade of life, and 95% of them occur in women [2]. AAM typically presents as a vulvar polypoid mass and is diagnosed using histology. The tumors are sometimes positive for estrogen and progesterone receptors [1]. They are thus likely to grow during pregnancy and to respond to hormonal management [3]. AAM occurs mainly in the pelvis, perineum, vulva, vagina, and urinary bladder, and may exert pressure on adjacent organs. Appropriate management and long-term follow-up of this tumor should be considered owing to its locally aggressive nature [4]. This tumor also involves the blood vessels. It can affect the vulva [5] and other parts of the pelvis [6]. The disease is locally infiltrative and recurrent. It presents as a painless gelatinous mass and can mimic a Bartholin gland cyst or produce inguinal hernia-like symptoms. Many options have been used for treating recurrences, with varying success, and no single modality is clearly more beneficial than the others [1]. Although it is a benign tumor and does not invade neighboring tissue, it has a tendency to recur after surgical excision [2]. Recurrence can occur at the same site after the initial resection [7]. The incidence of local recurrence after surgery is 36% to 72%. The recurrence rate in patients with narrow surgical margins is not higher than in patients with wide surgical margins.
The reported age at presentation is 6 to 77 years, with peak incidence during the reproductive years. The female-to-male ratio is 6.6:1. AAM should be distinguished from benign tumors with a low risk of local recurrence as well as from malignant tumors with widespread metastatic potential. AAM has a characteristic appearance on CT and MRI, and these techniques reveal the tumor extent. On CT, the tumor has a well-defined margin and attenuation less than that of muscle. On T2-weighted MRI, the tumor has high signal intensity [8]. After diagnosis, preoperative angiographic embolization and preoperative external beam irradiation can help to decrease local recurrence [9]. On gross examination, these tumors are typically soft, bulky masses [7]. The external surface is smooth and usually appears unencapsulated. Some have finger-like projections that extend into neighboring tissues. The cut surface reveals a grey tumor of homogeneous consistency with focal areas of congestion and hemorrhage. Histologically, AAMs are mesenchymal tumors composed of fibroblasts within a strongly myxoid background. The tumor consists of scattered hypocellular, fusiform cells with thin cytoplasmic processes in a loose myxoid matrix that gives the tumor a pale-pink color on eosin staining. The tumor also has a prominent vascular component with distinct smooth muscle cells without anastomosis. Mitosis is usually not observed, and occasional cases may show mild atypia. Immunohistochemical staining of the tumor for desmin, vimentin, CD34, CD44, S100, smooth-muscle actin, and muscle-specific actin is necessary for the diagnosis [7,10,11]. The term AAM was chosen to emphasize the neoplastic nature of the blood vessels, its locally infiltrative character, and the high risk of local recurrence, not to indicate malignancy [7]. The pathogenesis of AAM is poorly understood. Non-random involvement of the 12q15 region, where the high mobility group protein HMGA2 (an architectural transcription factor expressed primarily during embryogenesis) is located, has been suggested [10]. AAM resection is sometimes difficult because of its infiltrative nature, and recurrence occurs in 70% of cases; even patients with clear resection margins can develop recurrence [3]. Nonetheless, excision with wide tumor-free margins is still the most common treatment [3]. Incomplete resection is acceptable when high operative morbidity is anticipated or when preservation of fertility is an issue [5,7]. Possible alternative treatment methods for recurrences are gonadotrophin-releasing hormone (GnRH) agonists or antihormonal therapy (tamoxifen) [7,11]. Several beneficial results have been described using a GnRH agonist [12]. However, long-term GnRH agonist therapy is associated with adverse effects such as menopausal symptoms and bone loss. Potential disadvantages are that the tumor may become resistant to GnRH agonist therapy, and medication withdrawal may result in neoplasm regrowth [7,12-14]. The optimal duration of therapy is not known, but long-term GnRH agonist therapy is not recommended because of its potential adverse effects [12]. Chemotherapy and radiation therapy are not considered owing to the low mitotic activity of the tumor [7]. For estrogen receptor- and/or progesterone receptor-positive tumors, hormone treatment is a good option. Long-term follow-up and careful monitoring with imaging techniques are essential for the timely identification of recurrence [10].
In conclusion, AAM is a rare benign mesenchymal tumor that occurs primarily in the vulvar, vaginal, perineal, and pelvic soft tissues of women of reproductive age. The tumor grows slowly but infiltrates into the softer pelvic spaces in a way similar to lava expulsion from a volcano. Awareness of this disease and a full radiologic workup are necessary for surgical planning and recurrence monitoring. Conflict of interest No potential conflict of interest relevant to this article was reported.
Dynamic formation of nanostructured particles from vesicles via invertase hydrolysis for on-demand delivery The unique multicompartmental nanostructure of lipid-based mesophases can be triggered, on demand, in order to control the release of encapsulated drugs. In this study, these nanostructured matrices have been designed to respond to a specific enzyme, invertase, an enzyme which catalyses the hydrolysis of sucrose. The effect of two sugar esters upon the phase behaviour of two different lipids which form cubic phases, phytantriol and monolinolein, was investigated. Factors affecting the hydrolysis of the sucrose headgroup are discussed in terms of the molecular structure of the sugar surfactant and also its ability to incorporate into the lipid bilayer. By hydrolysing the incorporated sugar esters, a dynamic change in mesophase nanostructure from vesicles to a cubic phase was observed. This phase change resulted in the triggered release of an encapsulated model drug, fluorescein. This investigation demonstrates, for the first time, that changes on a molecular level, achieved by subtly controlling the hydrophilic and hydrophobic features of an amphiphilic additive at the interface through enzymatic hydrolysis, can result in a global change in the system, and so paves the way towards the design and development of lipid-based matrices which are responsive to specific enzymes for the controlled delivery of pharmaceutically active molecules or functional foods. Introduction Nanostructured lipid-based lyotropic liquid-crystalline (LLC) materials are promising advanced materials for use in drug delivery, biosensing, foodstuffs and protein crystallisation. [1][2][3][4][5][6] The unique multicompartmental structure of some of these mesophases, particularly the bicontinuous cubic (V2) and inverse hexagonal (H2) mesophases, allows for the encapsulation, protection and controlled release of a wide range of hydrophilic, hydrophobic and amphiphilic substrates. [7][8][9][10][11] This feature is attributed to the nanostructure of the LLC mesophase, as the dimensions and geometry of the aqueous channels and lipidic domains determine the rate of release of encapsulated active molecules. 12,13 Additionally, the interaction between the encapsulated drug and the lipidic interface can retard or accelerate the release rate. 14,15 In order to exert exact spatiotemporal control over the release rate, research has focussed upon altering intermolecular interactions as a means to change the packing of the amphiphiles. The alteration of lipid packing can be achieved by introducing molecules which alter the packing of the lipidic molecules, whereby molecules with a larger hydrophobic portion promote the formation of structures with increased negative curvature [16][17][18] and those which have an influence on the hydrophilic part promote the swelling and/or formation of structures with less negative curvature. [19][20][21] By engineering parts of the amphiphile in order to specifically alter the phase behaviour, selective phase transitions can be achieved. 22,23 Changing the external environment of lipidic matrices, such as an increase in pressure, can also result in selective phase transitions. 24,25 Exposure to a stimulus can result in the specific alteration of the global physicochemical properties of the matrix, such as viscosity.26
Most importantly, for application in active drug delivery, dynamic release of encapsulated drug from these nanostructured matrices has been demonstrated using temperature, 27 pH, 28 light activation of photothermal nanoparticles 29,30 and a magnetic field, 31,32 whereby the application of the external stimulus has induced a phase transition and a consequent alteration of the release rate that reflects the change in nanostructure. 33 Lipidic mesophases are thermodynamically stable and consequently can be dispersed into sub-micron particles with preserved internal nanostructure and colloidal stability, commonly imparted by block copolymers, proteins and particles. [34][35][36][37] Particles which have an internal nanostructure of V2 and H2 are called cubosomes and hexosomes, respectively. These nanostructured particles retain the interfacial properties of the parent bulk phase, with the added benefit of having a lower viscosity to enable easier administration in drug delivery applications. Additionally, the properties of the stabiliser can be used to influence the biocompatibility of the particles. 38,39 One limitation of these dispersed systems is that they exhibit burst release for small molecule additives. 40 In order to avoid this phenomenon, the on-demand manipulation of these systems from a slow-releasing to a fast-releasing nanostructure is proposed. The selective modification of the colloidal nanostructure has been achieved by selective material transfer between different lipidic particles, 41 the presence of different ionic species 42,43 and pH, 44,45 thus demonstrating the potential of these nanostructured colloids as stimuli-responsive materials for drug delivery. Towards site-selective drug release, enzyme-sensitive systems are of interest, where over-expressed disease-associated enzymes are utilized to trigger drug release. 46,47 Enzymes are macromolecular biological catalysts which accelerate specific chemical reactions. Recently, we have demonstrated the use of enzymes to change the mesophase structure. By mimicking the naturally occurring digestion process that occurs within digestive systems, [48][49][50] it has been shown that an emulsion of an indigestible lipid (phytantriol) and a digestible lipid (tributyrin) could be digested back to form a cubic phase upon addition of a lipase 51 (Fig. 1), whereby amphiphilic molecules re-establish themselves within the lipidic and aqueous domains of the lipidic mesophase according to their physicochemical properties. 52 The purpose of this study is to demonstrate that nanostructured dispersions can be specifically designed to respond to targeted enzymes in order to trigger the release of encapsulated drugs. Lipidic mesophases were designed to respond to invertase, an enzyme which hydrolyses sucrose at the α-1,2-glycosidic bond, resulting in an equimolar mixture of fructose and glucose, 53 as shown in Scheme 1. Sugar-based amphiphiles are known to modify the interfacial behaviour of lipidic membranes, as they have been utilised to target lectins associated with cancer states. [54][55][56][57] This study investigates the incorporation of sugar esters into phytantriol (PHYT) and monolinolein (MLO) matrices in order to form vesicles which could be selectively transformed into nanostructured particles (cubosomes) in response to the hydrolysis of the sucrose headgroup by invertase.
Preparation of bulk mesophases and nanostructured dispersions Briefly, lipid mixtures were produced by dissolving the required quantities of lipid (PHYT or MLO) and sugar ester in a 2:1 mixture of chloroform and methanol. Solvent was then evaporated under vacuum in a round-bottom flask for 2 h. For bulk phases, an aliquot of the dry lipid mixture was weighed into glass tubes and an excess of 64 mM acetate buffer, pH 5.0, was then added to ensure the formation of the mesophase under excess water conditions. This pH was chosen as it is the optimal pH at which invertase catalyses the hydrolysis of sucrose. The tubes were then stored overnight at 37 °C to equilibrate. Light microscopy and equilibrium SAXS measurements were performed on bulk mesophases in order to rapidly identify formulations of interest. Nanostructured dispersions were produced by making a mixed lipid and sugar ester film in a round-bottom flask and then hydrating it with 0.5% Pluronic F127 in pH 5.0 acetate buffer to prepare a 7.5% lipid dispersion. The coarse dispersion was then transferred into a vial and further processed with a probe sonicator (Ultraschallprozessor, Dr. Hielscher GmbH) with pulsed sonication for 5 min at 20% amplitude. Polarized optical microscopy (POM) As a rapid screening technique, bulk mesophase samples were observed under cross-polarised light with a Zeiss Axioskop 2 at 37 °C. Under these conditions, cubic mesophases are non-birefringent, whereas those possessing a lamellar structure show a characteristic birefringent Maltese cross pattern. Nuclear magnetic resonance spectroscopy (NMR) In order to evaluate the mixture composition of sucrose laurate and stearate and to determine where the carboxylic acid is attached to the sucrose unit, 1D 1H and 13C NMR experiments, as well as DEPT and 2D 1H-1H (COSY) and 1H-13C (HETCOR) NMR experiments, were carried out on a Bruker Avance spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany), operating at 400 MHz (1H) and 100 MHz (13C), using D2O or DMSO-d6 as solvents and as internal standards. Example NMR spectra are presented in the ESI (Fig. SI2†). Small angle X-ray scattering (SAXS) Equilibrium SAXS patterns were acquired using a Rigaku microfocused X-ray source of wavelength λ = 1.54 Å operating at 45 kV and 88 mA. Diffracted X-rays were collected on a gas-filled 2D detector. The scattering vector q = (4π/λ)sin θ, with 2θ the scattering angle, was calibrated using silver behenate, with the q-range from 0.01 Å⁻¹ to 0.45 Å⁻¹. Data were collected and azimuthally averaged using the SAXSgui software to yield 1D intensity versus scattering vector, q. Viscous samples were loaded into a Linkam hot-stage between two thin mica sheets and sealed by an O-ring, with a sample thickness of ca. 1 mm. Dispersed samples were loaded into 1.5 mm quartz capillaries and sealed with epoxy glue. Measurements were performed at 37 °C. Time-resolved SAXS (TRSAXS) data for the enzymatic studies were collected on the SAXS/WAXS beamline at the Australian Synchrotron. In vitro digestion experiments were conducted in a thermostated glass digestion vessel at 37 °C.58 7 mL of dispersion was equilibrated in the digestion vessel before the remote addition of 0.5 mL of an invertase solution (10 mg mL⁻¹) to initiate digestion. The digestion medium was pumped (10 mL min⁻¹) through a 1.5 mm quartz capillary flow cell using a peristaltic pump, and the quartz capillary was placed in the X-ray beam.59
The scattering profiles were acquired for 5 s every 10 s at an energy of 10 keV using a Pilatus 1M detector (active area 169 × 179 mm², with a pixel size of 172 μm) with a sample-to-detector distance of 1015 mm, providing a q range of 0.01 Å⁻¹ < q < 0.7 Å⁻¹. The scattering images were integrated into the one-dimensional scattering function I(q) using the in-house-developed software package ScatterBrain. The q range calibration was made using silver behenate as the standard. The cubic and hexagonal phase space groups and lattice parameters were determined from the relative positions of the Bragg peaks in the scattering curves, which correspond to the reflections on planes defined by their (hkl) Miller indices. 60 Bulk and dispersed LLC phases in excess water are readily identified using SAXS, where each phase can be identified by its specific Bragg peak positions. For the double diamond cubic phase (V2 Pn3m) the Bragg reflections occur at relative positions in q of √2 : √3 : √4 : √6 : √8 : √9..., and for the primitive bicontinuous cubic phase (V2 Im3m) the Bragg peaks occur at q = √2 : √4 : √6 : √8 : √10... The H2 phase is identified by reflections at 1 : √3 : √4..., the bilayer lamellar (Lα) phase gives Bragg peaks in the ratio 1 : 2 : 3 : 4..., and the L2 phase is identified by a single characteristic broad peak. The mean lattice parameter, a, was deduced from the corresponding set of observed interplanar distances, d (d = 2π/q), using the appropriate scattering law for the phase structure. For cubic phases, a = d√(h² + k² + l²), while for the H2 phase, a = (2/√3) d√(h² + hk + k²). (Scheme 1: the invertase-catalysed hydrolysis of sucrose in the presence of water to form fructose and glucose.) For the L2 phase, which shows only one broad peak, d is termed the characteristic distance. High performance liquid chromatography (HPLC) The digestion of a solution of sucrose laurate (31.5 mM, a concentration that reflects the amount of sugar ester in the liquid-crystalline dispersion) was analysed using high performance liquid chromatography (HPLC, Agilent 1100, Switzerland). The sucrose laurate solution was digested at 37 °C, with samples (4 mL) removed at defined time points. Upon cooling, any remaining sugar ester was extracted with chloroform. The aqueous phase was then filtered before separation of the injection volume (25 μL) on an analytical Dionex CarboPac PA1 column (2 mm × 250 mm) with a CarboPac PA1 guard column (2 mm × 50 mm), maintained at 30 °C. The elution program was 0.3 mL min⁻¹ of 18 mM NaOH for 25 min, then 0.5 mL min⁻¹ of 200 mM NaOH for 10 min (to eliminate possible retained anions), followed by re-equilibration with 0.3 mL min⁻¹ of 18 mM NaOH. Retention times for glucose, fructose and sucrose were 13.4, 16.9 and 20.1 min, respectively. For pulsed amperometric detection, the gold electrode was freshly polished and used with waveform A as per Dionex technical note 21. Before analysis, samples were spiked with 2-deoxyglucose (60 mM) as an internal standard. Calibration curves were obtained with glucose, fructose and sucrose standards relative to the internal standard signal. All data were collected and processed with Chromeleon v.6.8 (Dionex software). Example chromatograms are presented in the ESI (Fig. SI3†). Triggered drug release measured by the pressure ultrafiltration method The mixture of PHYT and 30% SL was dispersed in fluorescein solution (1 mg mL⁻¹) in pH 5.0 acetate buffer (64 mM).
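The lattice-parameter arithmetic described above is simple enough to sketch in code. The following is a minimal illustration, not the authors' analysis pipeline; the example peak position and indexing are illustrative only.

```python
# A minimal sketch of the indexing arithmetic: given a Bragg peak position q
# (in 1/Angstrom) and its Miller indices, recover the lattice parameter a for
# a cubic or (2D) hexagonal phase. Example values are illustrative only.
import math

def a_cubic(q: float, h: int, k: int, l: int) -> float:
    d = 2 * math.pi / q          # interplanar spacing, d = 2*pi/q
    return d * math.sqrt(h**2 + k**2 + l**2)

def a_hexagonal(q: float, h: int, k: int) -> float:
    d = 2 * math.pi / q
    return (2 / math.sqrt(3)) * d * math.sqrt(h**2 + h * k + k**2)

# e.g. a hypothetical (110) reflection observed at q = 0.131 1/A:
print(f"a = {a_cubic(0.131, 1, 1, 0):.1f} A")
```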
The dispersion was then separated from the free model drug solution by applying 2.5 mL of the dispersion to PD10 desalting columns (Silverwater, NSW, Australia). The separated vesicle dispersion (1 mL) was subsequently added to 20 mL of release medium, representing t = 0. The release medium for the enzyme-triggered release contained 6 mg of invertase, consistent with the lipid-to-enzyme ratio utilised in the dynamic SAXS studies. Samples (2 mL) were removed at 1, 5, 10, 15, 30, 45 and 60 min for the measurement of released fluorescein concentration by pressure ultrafiltration. The temperature of the experiments was maintained at 37 °C using a water bath. Particle size distribution was determined before and after the release study using a Zetasizer Nano ZS (Malvern). Drug release and encapsulation efficiency were quantified by the pressure ultrafiltration method 40 using an Amicon 8010 pressure ultrafiltration cell and YM10 regenerated cellulose membranes (both from Millipore Corp., Bedford, MA) with a 30 kDa cut-off, and high-purity compressed air as the gas source. Drug concentration in the filtrate was measured by fluorescence (Infinite® 200 PRO series microplate reader, Tecan, Männedorf, Switzerland) at excitation and emission wavelengths of 460 nm and 515 nm, respectively. To determine the volume of solution required to saturate the membrane, a solution containing 0.1 mg mL⁻¹ fluorescein in Milli-Q water (Millipore) was passed through the membrane, fractions of four drops at a time were collected, and the volume of each fraction was determined by weight, assuming the density of water. The concentration of fluorescein in each fraction of the filtrate was determined by fluorescence, and the volume required for the concentration in the ultrafiltrate to reach 95% of the concentration in the cell was determined. This void volume was discarded, and the subsequent 150 μL aliquots were collected and analysed. Molecular simulations Structures were drawn using HyperChem 8.0.3 software, and the semi-empirical quantum mechanics method Austin Model 1 (AM1), together with the Polak-Ribiere conjugate gradient algorithm with a root-mean-square (RMS) gradient of 0.05 kcal Å⁻¹ mol⁻¹, was used for the geometrical optimization of the molecules, as well as the spin-pairing restricted Hartree-Fock (RHF) operators for all neutral species. The self-consistent field (SCF) convergence limit was set at 10⁻⁵, and the accelerated convergence procedure was used. All geometrical values and some quantitative structure-activity relationship (QSAR) parameters were obtained from the quantum mechanics simulations. Influence of sucrose esters on the phase behaviour of MLO and PHYT Sucrose esters are known to swell the mesophase and promote the formation of vesicles at higher concentrations. 13 The effect of the two sugar esters on the phase behaviour of both PHYT and MLO in excess water was determined in order to find the amount of sugar ester required to switch the nanostructure from the bicontinuous cubic (V2) to the lamellar (Lα) phase, so that enzymatic attack on the hydrophilic headgroup of the sugar ester would reverse the impact on packing, resulting in a vesicle-to-cubosome transition. The molecular structures and partial phase diagrams of these systems are shown in Fig. 2.
The addition of the sucrose esters resulted in the swelling of the MLO V2 Pn3m nanostructure and then an order-to-order transition to the V2 Im3m phase, as previously reported, 13 an occurrence which was not observed in the equilibrium phase behaviour of phytantriol. The compositions selected for further study were one step higher than the critical concentrations of sugar ester that provided the lamellar phase. These formulations were PHYT + 30% sucrose laurate (PHYT + 30% SL), PHYT + 40% sucrose stearate (PHYT + 40% SS), MLO + 30% sucrose laurate (MLO + 30% SL) and MLO + 30% sucrose stearate (MLO + 30% SS). Digestion of sugar ester-doped vesicles followed using time-resolved SAXS The digestion of the different vesicle formulations by invertase was initiated by the remote addition of the enzyme solution. The resulting TRSAXS profiles of the structures present during digestion of the PHYT-based vesicles are shown in Fig. 3, and those of the MLO-based vesicles in Fig. SI1.† An increase in scattering at low q is attributed to the addition of invertase upon initiation of digestion. PHYT + 30% SL was the only system that demonstrated an order-to-order transition from vesicles to V2 Im3m, beginning at 580 s and not returning to V2 Pn3m as indicated by the PHYT + SL + water equilibrium phase behaviour. As the sucrose headgroup of the sugar ester is hydrolysed, it is anticipated that the liberated monosaccharide diffuses away from the interface into the aqueous domain of the PHYT + 30% SL system and eventually into the surrounding solution, while the other product of digestion (monosaccharide laurate) remains at the interface in a position to modify the phase behaviour. Although the PHYT + 40% SS system appeared to form vesicles in the equilibrium study, this was not the case when utilising the synchrotron X-ray source. The system exhibited the V2 Im3m phase throughout the duration of the experiment, with a slight 3% increase in V2 Im3m peak intensity. As anticipated, the PHYT dispersion that did not contain any sugar ester displayed the V2 Pn3m phase and did not change over time upon addition of the invertase solution. In contrast, the MLO-based systems were not responsive to invertase digestion, as no change in nanostructure or in peak intensity upon addition of invertase was observed (Fig. SI1†). Multiply responsive dispersions: digestion with both invertase and lipase By incorporating multiple elements that are responsive to different enzymes into the lipidic bilayer, a dual-responsive system can be created. In this system, the indigestible lipid is PHYT, and the two digestible elements are MLO and SL, which are digested by lipase and invertase, respectively; the three amphiphiles were added in a 1:1:1 weight ratio. Upon the addition of invertase (Fig. 4A), no phase transition was observed; however, the intensity of the peak at q = 0.131 Å⁻¹ decreased by 6% over 2000 s. The access of invertase to the sucrose laurate headgroups could have been inhibited by the arrangement of the amphiphiles at the bilayer. Upon the subsequent addition of lipase (Fig. 4B) a phase transition was observed, from vesicles to V2 Im3m at ~700 s and to the inverse micellar phase at ~1600 s. A significant increase in scattering at low q was also observed over time, which could indicate the formation of micellar structures. These phase changes could be the result of the combined action of the lipase digesting the MLO and so exposing the SL to be hydrolysed by invertase.
Digestion kinetics of sucrose laurate in solution Hydrolysis of the sucrose ester was complicated by the location of the substitution of the laurate tail on the sucrose moiety. Analysis of the alkyl substitutions on the sucrose headgroup by NMR found that the laurate tail is attached to different hydroxyl groups of the sucrose molecule (Table 1). In addition, the surfactant is only 80% mono-substituted, with the remaining 20% di- and tri-substituted. From the HPLC analysis, the amount of fructose resulting from the digestion was only approximately 25% of the 31.5 mM SL solution, which correlates with the proportion of laurate chains mono-substituted onto the glucose moiety of the sucrose headgroup. This suggests that the alkyl tail prevents invertase from hydrolysing the sucrose laurate isomers in which the laurate chain is substituted onto the fructose moiety, most likely by preventing its approach to the active site of the enzyme. In addition to the substitutions of the laurate tail, the kinetics of invertase-catalysed hydrolysis was hypothesised to be complicated by the presence of the alkyl chain. As enzymes are themselves amphiphilic in nature, the surfactant properties of the sucrose esters may affect the hydrolysis of the sucrose headgroup by invertase. The kinetics of hydrolysis of sucrose stearate is not displayed, as invertase was not able to catalyse the hydrolysis of sucrose stearate in solution due to its limited solubility, and potentially as a result of denaturation of the enzyme by the surfactant. 61 The digestion of a 31.5 mM solution of SL was followed by HPLC (Fig. 5). Sucrose (1.04 mM) was present at t = 0, which suggests that some sucrose, hydrolysed from the parent ester, was present in the solution prior to the addition of the enzyme solution. Consequently, the glucose and a fraction of the fructose (1.04 mM) produced during digestion are attributed to the hydrolysis of this free sucrose and not to the digestion of the ester. Thus, only the sucrose laurate esters in which the alkyl tail is attached to the glucose are hydrolysed by invertase. By fitting the two different hydrolysis reactions catalysed by invertase in this solution, as indicated in Fig. 5, the rates of hydrolysis of sucrose and SL were determined (Table 2). The rate of hydrolysis of the sucrose ester was found to be slower than that of sucrose itself, potentially due to the need for the enzyme to overcome higher steric barriers to reach its substrate. Invertase-triggered release study Invertase-triggered release was demonstrated from a PHYT + 30% SL vesicle formulation, utilising fluorescein as a model drug (Fig. 6). Vesicles that were not exposed to the enzyme displayed slow release over the course of the study. Vesicles exposed to invertase demonstrated a significant boost in the amount of fluorescein released, whereby 90% release was achieved after 2 min. Moreover, particle sizing recorded before and after the invertase-catalysed study showed a decrease in size: before hydrolysis was initiated, the hydrodynamic diameter of the particles in the lipid dispersion was 123.5 ± 4 nm with PDI 0.196 ± 0.04, and after hydrolysis it was 107.4 ± 2 nm with PDI 0.206 ± 0.03. This is not unexpected, as the overall molecular volume of the headgroups of the particle would decrease, thus reducing the hydrodynamic diameter of the particle.
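The first-order fits referred to above (and shown in Fig. 5) amount to fitting an exponential approach to a plateau. A minimal sketch with placeholder data follows, treating the ester hydrolysis alone; the actual fitted rate constants are those reported in Table 2, and the arrays below are not the paper's data.

```python
# A sketch of a first-order fit like those in Fig. 5, assuming arrays `t` (min)
# and `fructose` (mM) from an HPLC time course. The plateau c0 corresponds to
# SL_0 and k to k_SL; the numbers below are placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """Product concentration for S -> P with first-order rate constant k."""
    return c0 * (1.0 - np.exp(-k * t))

t = np.array([0, 5, 10, 20, 40, 60], dtype=float)   # placeholder times (min)
fructose = np.array([0, 1.1, 2.0, 3.3, 4.6, 5.2])   # placeholder concentrations (mM)

(c0, k), _ = curve_fit(first_order, t, fructose, p0=(5.0, 0.05))
print(f"SL_0 ~ {c0:.2f} mM, k_SL ~ {k:.3f} 1/min")
```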
Both the triggered release and the reduction in particle size are attributed to the hydrolysis of the sucrose headgroup initiating a transition from vesicles to V2 Im3m. Discussion Sugar surfactants affect the phase behaviour of lipidic bicontinuous cubic phases, where the hydrophilicity of the sugar surfactant promotes hydration at the interface, swelling the cubic phase with increasing concentration and eventually forcing the transition to the lamellar structure. 13 Pure sugar surfactants with saturated side-chains form normal, as opposed to inverse, mesophases in aqueous environments, 62 and as such promote the formation of lamellar structures when incorporated into MLO and PHYT systems. 19 Additionally, a difference in phase behaviour was observed between the mixed amphiphile systems formed by the two sugar esters with different tail lengths; the phase transition from the bicontinuous cubic phase to vesicles occurs at a lower added concentration of sucrose laurate than of sucrose stearate. In comparison, the hydrolysis of the sucrose laurate to glucose laurate produced a more substantial transformation in the molecular packing. Upon exposure to invertase, the area of the sucrose headgroup is diminished, which results in an increase in its CPP to 0.469 and consequently an increase in hydrophobicity. The resulting molecular shape resembles a truncated cone, which is more akin to the shape of phytantriol and so is more likely to form inverse cubic phases. The small amount of conversion of the additive to glucose laurate, up to ~6% of the total amount of lipid in the system, was sufficient to trigger the transition from vesicles to V2 Im3m. By modifying the molecular shape of the additive to match the shape of the parent lipid, the desired nanostructure was formed. The rationalisation of this behaviour lies in the self-assembly of lipidic amphiphiles in a bilayer, which can qualitatively be described by the critical packing parameter (CPP), defined as CPP = V/(A0 × Lc), where V is the surfactant tail volume, Lc is the tail length, and A0 is the equilibrium area per molecule at the aggregate surface. 63 Quantum molecular simulations revealed the different features of single sucrose laurate molecules as compared to the hydrolysed product, glucose laurate (Table 3). Fig. 5: The digestion of sucrose laurate solutions. HPLC analysis of samples taken from the digestion of 16.5 mg mL⁻¹ sucrose laurate over time, where black squares show the evolution of glucose, red circles correspond to fructose and blue triangles to sucrose (n = 3, ±s). These data were fitted with the equations indicated adjacent to each curve, where S0 is the initial sucrose concentration, SL0 is the initial sucrose laurate concentration (glucose-attached), and kSL and kS are the first-order kinetic constants for the hydrolysis processes. Table 2: Rate constants of the invertase-catalysed hydrolysis of sucrose (S0) and sucrose laurate (SL0), as determined from the data in Fig. 5 (columns: hydrolysis reaction, rate of hydrolysis); F = fructose, G = glucose and GL = glucose laurate. Fig. 6: Release of the model drug, fluorescein, over time from the untriggered PHYT + 30% sucrose laurate system (red) and the invertase-triggered release (black) (n = 3, ±SEM). Release via invertase-catalysed hydrolysis of the sucrose headgroup results in 93% release within 10 min of adding the enzyme. Release studies were performed at pH 5.0 and 37 °C. Lines are only intended as a guide for the eye.
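The CPP expression above is a one-line calculation. The sketch below uses illustrative values only, chosen to land near the reported CPP of 0.469 for glucose laurate; the paper's simulated volumes and areas are in Table 3, which is not reproduced here.

```python
# A one-line sketch of the packing-parameter arithmetic: CPP = V / (A0 * Lc).
# The numbers are hypothetical, picked to illustrate a value near 0.469;
# they are not the simulated quantities from Table 3.
def cpp(v_tail_A3: float, a0_A2: float, lc_A: float) -> float:
    """Critical packing parameter from tail volume (A^3), headgroup area (A^2)
    and tail length (A)."""
    return v_tail_A3 / (a0_A2 * lc_A)

print(round(cpp(350.0, 45.0, 16.6), 3))  # -> 0.469 with these illustrative inputs
```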
Additional information obtained from simulations of the molecules encountered in this study is included in the ESI (Table SI1 and Fig. SI4).† There were two major factors determining the ability of the enzyme to access the sucrose headgroup of the sugar ester: the positioning of the sucrose ester at the interface and the accessibility of the headgroup to the enzyme. The bulkiness of the headgroup,64 the length of the alkyl tail of the surfactant,65 as well as its substitution and molecular chirality,66,67 affect its positioning at the interface and, as a consequence, can assist or hinder the ability of the enzyme to access and hydrolyse the headgroup. The mechanism for the hydrolysis of sucrose by invertase involves Asp-23 as a nucleophile and Glu-204 as an acid/base catalyst.68 If these active-site residues cannot access the molecule, hydrolysis cannot occur. It should be noted that the effect of substrate and product inhibition of the enzyme is likely to be minimal, as the amount of both in the system is marginal. The longer tail of the stearate does not match the bilayer packing of the bulk lipids, as it is >5 Å longer than the Lc of both PHYT and MLO, and can result in it residing more in the lipidic domain than at the interface. The shorter tail of the laurate allows it to insert optimally at the interface of the PHYT bilayer, where it can most efficiently influence the packing and so be available for digestion. Thus, only the sucrose laurate presenting a free (unsubstituted) fructose moiety could be accessed by the invertase at the interface. The invertase-catalysed hydrolysis of the bilayer results in the triggered release of the model drug. Drug release does not appear to exactly follow the timing of the phase change, which is attributed to the dynamic rearrangement of amphiphiles from the outer membrane to the inner membrane in order to find equilibrium.69 There are two main premises as to how a drug is released from liposomes upon exposure to a stimulus. Firstly, the redistribution of the membrane lipids across the bilayer can result in the cavitation of the liposomal bilayer and therefore complete burst release of the payload.47 This occurs mainly in systems that contain lipids which do not form inverse phases. The second conjecture is that the burst release of the aqueous solubilised payload from within the vesicles can also be due to the transient formation of inverse phases,70 which is the case in this system. In both cases, the release of hydrophilic molecules is uncontrolled. Thus, hydrophobic molecules, which must traverse from within the lipidic bilayer to the surface of the cubosome before being released, may be more suited to this type of on-demand release. In addition, molecular modelling may be used to visualise where and how the molecules sit at the interface and how they redistribute upon the application of a stimulus.

Conclusions

The use of a carbohydrate-targeting enzyme to induce transformation of an on-demand nanostructured lipid-based system was demonstrated for the first time in this study. The incorporation of the digestible moiety, sucrose laurate, into an MLO or PHYT lipid bilayer resulted in the formation of vesicles. Invertase catalysed the hydrolysis of sucrose laurate incorporated into PHYT vesicles, which were dynamically converted to cubosomes with a V2 Im3m structure. Further, the use of a mixture of PHYT and MLO enabled the preparation of a dual enzyme-responsive vesicle system, responsive to both invertase and lipase.
Vesicles formed by PHYT and sucrose laurate were triggered to release a model drug upon exposure to invertase. This research paves the way towards the informed design of structured lipidic systems that are responsive to specific enzymes for the controlled delivery of pharmaceutically active molecules or functional foods.
Two Metallothionein Genes in Oxya chinensis: Molecular Characteristics, Expression Patterns and Roles in Heavy Metal Stress

Metallothioneins (MTs) are small, cysteine-rich, heavy metal-binding proteins involved in metal homeostasis and detoxification in living organisms. In the present study, we cloned two MT genes (OcMT1 and OcMT2) from Oxya chinensis, analyzed the expression patterns of the OcMT transcripts in different tissues and at varying developmental stages using real-time quantitative PCR (RT-qPCR), and evaluated the functions of these two MTs using RNAi and recombinant proteins in an E. coli expression system. The full-length cDNAs of OcMT1 and OcMT2 encoded 40 and 64 amino acid residues, respectively. We found Cys-Cys, Cys-X-Cys and Cys-X-Y-Z-Cys motifs in OcMT1 and OcMT2. These motifs might serve as primary chelating sites, as in other organisms. These characteristics suggest that OcMT1 and OcMT2 may be involved in heavy metal detoxification by capturing the metals. The two OcMTs were expressed at all developmental stages, and the highest levels were found in the eggs. Both transcripts were expressed in all eleven tissues examined, with the highest levels observed in the brain and optic lobes, followed by the fat body. The expression of OcMT2 was also relatively high in the ovaries. The functions of OcMT1 and OcMT2 were explored using RNA interference (RNAi) and different concentrations and treatment times for the three heavy metals. Our results indicated that mortality increased significantly, by 8.5% to 16.7%, and that this increase was both time- and dose-dependent. To evaluate the abilities of these two MT proteins to confer heavy metal tolerance to E. coli, the bacterial cells were transformed with pET-28a plasmids containing the OcMT genes. The optical densities of both the MT-expressing and control cells decreased with increasing concentrations of CdCl2. Nevertheless, the survival rates of the MT-overexpressing cells were higher than those of the controls. Our results suggest that these two genes play important roles in heavy metal detoxification in O. chinensis.

Introduction

Metallothioneins (MTs) were first discovered as cadmium-binding proteins, isolated from horse kidneys in 1957 [1]. MTs are low-molecular-mass (<10 kDa), cysteine-rich proteins (15-30% of their amino acid content) that lack aromatic residues, resulting in their optimal capacities for metal ion coordination. These high cysteine levels are necessary for the coordination of metal ions through the thiolate cluster, as facilitated by the Cys-X-Cys and Cys-X-Y-Cys motifs, in which X can be any amino acid other than cysteine [2]. These types of metal-binding proteins have been found widely in all organisms, including bacteria, plants, invertebrates and vertebrates [3-5]. More complete information is available in recently published reviews [6-9]. Studies involving MTs have been performed in various fields, including toxicology, physiology and molecular and developmental biology [10]. MTs play an important role in zinc and copper homeostasis as well as in the detoxification of non-essential trace elements, such as cadmium (Cd) and mercury (Hg), because of their characteristically high cysteine levels [11]. MTs also aid in protecting cells from oxidative stress via the intracellular scavenging of free radicals [9,12].
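The cysteine motif grammar described above (C-C, C-X-C, C-X-Y-C and C-X-Y-Z-C, with X, Y, Z being any non-cysteine residues) lends itself to simple pattern screening of a deduced protein sequence; a hypothetical sketch over an illustrative peptide (not the actual OcMT sequence):

```python
# Hypothetical sketch: scan a peptide for MT cysteine motifs.
# Lookahead patterns allow overlapping matches (e.g., CCC yields two C-C hits).
import re

MOTIFS = {
    "Cys-Cys":       r"CC",
    "Cys-X-Cys":     r"C[^C]C",
    "Cys-X-Y-Cys":   r"C[^C]{2}C",
    "Cys-X-Y-Z-Cys": r"C[^C]{3}C",
}

def scan_motifs(seq: str):
    # Return the start positions of each motif type in the sequence
    return {name: [m.start() for m in re.finditer(f"(?={pat})", seq)]
            for name, pat in MOTIFS.items()}

print(scan_motifs("MSCCGKDCKCAAGECKTACSCANCKCG"))  # illustrative peptide
```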
Because of their high affinities for heavy metals, the roles of MTs in the detoxification of heavy metals and in maintaining essential metal ion homeostasis within cells have been widely investigated [7,13]. Drosophila metallothioneins play important roles in copper homeostasis as well as in the detoxification of cadmium [14]. Paul (2000) found that 99.0% of cadmium is present in the gut epithelium in the form of metallothionein-bound cadmium after exposing O. cincta to different cadmium concentrations [15]. Several pieces of evidence have indicated that MTs can also act as scavengers of free hydroxyl and superoxide radicals. MT synthesis can be induced by oxidative stress and hormonal stimuli as well as by heavy metals [16-18]. MTs are mainly considered to be involved in protection against oxidative stress and in neuroprotective mechanisms [19]. A number of studies have reported the detailed molecular structure of MTs, determined by molecular sequencing and nuclear magnetic resonance spectroscopy, and have investigated the roles of MTs in the detoxification of heavy metals. However, these studies have focused primarily on mammalian and aquatic organisms and plants [9]. Few studies have examined the interactions between MTs and metal ions in dipteran and collembolan insects [20,21]. Little is currently known regarding the molecular characteristics and functions of MT genes in orthopteran insects, especially the grasshopper Oxya chinensis. Oxya chinensis, an agricultural pest, feeds on the leaves of gramineous plants, particularly rice, and inhabits rice-growing areas of China. Owing to grasshopper behavior in farmland ecosystems, heavy metals (such as cadmium) in the agricultural environment transfer into the bodies of the grasshoppers through the food chain. Previous studies performed by our laboratory have indicated that heavy metals can accumulate in O. chinensis through the food chain [22,23]. Our previous research also found that MT levels increase in O. chinensis when the grasshoppers feed on wheat leaves containing heavy metals (data unpublished). It has been difficult to clone the MT sequences based on only several conserved cysteines, and thus the molecular characteristics of the MTs and their roles in the detoxification of heavy metals have not been studied further. However, two MT sequences have recently been described in the O. chinensis transcriptome database, allowing additional analyses to be performed. The present study aimed to 1) clone and identify two full-length cDNAs of MT genes from O. chinensis, 2) analyze the expression patterns of these two MT genes in different tissues and at different developmental stages, 3) investigate the functions of these two MT genes by RNAi, and 4) evaluate the Cd tolerance conferred by the two MT proteins using recombinant MTs in an E. coli expression system. The present study will help to elucidate the characteristics and functions of MTs in O. chinensis and demonstrate their potential value for heavy metal pollution prevention.

Insects

Oxya chinensis, an important agricultural pest, inhabits a wide range of rice-growing areas spanning most of China. The O. chinensis used in this study were collected from paddy fields in the Jinyuan District, Taiyuan, Shanxi province (north latitude: 34.28, east longitude: 112.45), where there is no land protection of any type. Local farmers must use pesticides to kill these grasshoppers.
We explicitly confirmed that no specific permissions were required for these locations/activities and that the field studies did not involve endangered or protected species. The O. chinensis eggs were incubated in a climate chamber (Yiheng Co., Ltd., Shanghai, China) at 28 ± 2 °C with a 14:10-h (light:dark) photoperiod at 60-75% humidity. After hatching, the nymphs were raised in nylon net cages, and all grasshoppers were reared on fresh wheat leaves. Healthy and uniform sets of insects were selected for our experiments.

Identification and sequencing of two OcMT cDNA fragments

Two cDNA sequences were obtained from the O. chinensis transcriptome database from samples that included 1st-5th instar nymphs, adults, and cadmium-treated insects. Two full-length cDNA sequences were identified using BLASTX and were designated OcMT1 and OcMT2. To confirm the predicted coding sequences of these two genes, two specific primers were used to amplify the cDNAs by reverse transcription PCR (RT-PCR) using cDNA templates prepared from the whole insect body. The RT-PCR products were run on a 1.5% agarose gel, purified using a Gel Extraction Kit (Omega, Doraville, CA, USA), subcloned into the pEASY-T3 Cloning Vector (TransGen Biotech Co., Ltd., Beijing, China), and then sequenced by GBI Biotechnology Co., Ltd. (Beijing, China). The physical and chemical properties of the OcMTs were analyzed using the ExPASy online tools (http://us.expasy.org/tools). The similarities and characteristics of the two OcMTs were compared with those of other known insect species on the basis of the deduced amino acid sequences. The amino acid features were analyzed using the online BLAST program provided by NCBI (http://blast.ncbi.nlm.nih.gov/Blast.cgi).

Total RNA extraction and cDNA synthesis

Total RNA was isolated from the liquid nitrogen-preserved samples using RNAiso Plus (TaKaRa, Dalian, China) according to the manufacturer's protocol. The RNA purity was estimated using a NanoDrop 2000 UV-Vis Spectrophotometer (Thermo, Waltham, MA, USA) from the A260/280 absorbance ratio, and its integrity was assessed by 1.5% agarose gel electrophoresis. One microgram of RNA was used to synthesize the first-strand cDNA using M-MLV Reverse Transcriptase (Promega, Madison, WI, USA) and an oligo-(dT)18 primer.

Expression patterns of OcMT1 and OcMT2 at the developmental stages and in the tissues

To determine the expression patterns of the OcMT1 and OcMT2 genes at the seven developmental stages, including the egg, first-, second-, third-, fourth-, and fifth-instar nymphs and the adults, insects at day 3 of each stage were collected for total RNA extraction. To detect the tissue-dependent expression of OcMT1 and OcMT2, eleven selected tissues were dissected from the adults (pooled from ten adults) under a binocular microscope: brain, optic lobe, muscle, foregut, midgut, hindgut, gastric caeca, Malpighian tubule, fat body, testis and ovary. RT-qPCR was conducted in a 20-μL reaction containing 2 μL of 20-fold diluted cDNA, 0.8 μL of each primer, 6.4 μL distilled water and 10 μL SYBR Green Real-time Master Mix (TOYOBO, Japan) using the Applied Biosystems 7300 Real-time PCR System (Applied Biosystems, USA). β-actin was used as the reference gene. The optimized RT-qPCR program used for both β-actin and the OcMTs consisted of an initial step at 95 °C for 15 s followed by 40 cycles of 95 °C for 15 s and 60 °C for 34 s. A melting curve was evaluated for each RT-qPCR experiment to confirm the amplification efficiency.
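Relative expression from runs like these is quantified in the next paragraph with the 2^(−ΔΔCt) method [24]; a minimal sketch of that calculation, with hypothetical Ct values and β-actin as the reference gene:

```python
# Minimal sketch of 2^-DDCt relative quantification; Ct values are hypothetical.
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-DDCt: DCt = Ct(target) - Ct(reference);
    DDCt = DCt(sample) - DCt(calibrator)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene in one tissue vs. a calibrator tissue,
# each normalized to the beta-actin reference
print(fold_change(ct_target=22.1, ct_ref=18.0,
                  ct_target_cal=25.3, ct_ref_cal=18.2))  # fold change vs. calibrator
```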
All experiments were performed in triplicate, each with two technical replicates. Amplification specificity was verified using the dissociation curve. The fold changes used to compare the relative gene expression levels with those of the controls in the different tissues and at the different developmental stages were determined using the 2^(−ΔΔCt) method [24]. The sequences of the primers used for the RT-qPCR analysis are shown in Table 1.

Functional analysis of OcMT1 and OcMT2 by RNAi

To evaluate their vital biological functions, an RNA interference analysis of both the OcMT1 and OcMT2 genes was performed by injecting sequence-specific double-stranded RNA (dsRNA) into O. chinensis. A specialized PCR was performed using cDNA from the whole bodies of the adults to prepare the templates for the OcMT1 and OcMT2 dsRNA syntheses. The primers used for the synthesis of the dsRNA and the transcript analysis are shown in Table 1. The PCR products of OcMT1 and OcMT2 were subcloned and sequenced to confirm their specific identities. The OcMT1, OcMT2 and GFP dsRNAs were prepared and synthesized using the T7 RiboMAX Express RNAi System (Promega, Madison, WI, USA) following the manufacturer's instructions. The prepared dsRNA was dissolved in nuclease-free water, and a product contained within a single band was verified on a 1.5% agarose gel. The concentration of dsRNA was adjusted to 1.5 mg mL−1. Aliquots of 4 μL of the OcMT1 and OcMT2 dsRNA (containing 6 μg dsRNA) were injected into the abdomens, between the second and third abdominal segments, of the adult insects using a manual microinjector (Ningbo, China). The control groups were injected with equivalent volumes of dsGFP alone. All experiments were performed in triplicate. The whole bodies of three adults from each replicate were pooled for total RNA extraction at 12, 24 and 48 h after the injections of dsOcMT1, dsOcMT2 and dsGFP, respectively. The efficiency of the RNA silencing of the two MT genes was evaluated by measuring their mRNA transcription levels using RT-qPCR as described in Section 2.4. To evaluate the roles of OcMT1 and OcMT2 in the detoxification of heavy metals, five concentration gradients of CdCl2 were used.

Recombinant expression of OcMT1 and OcMT2 in Escherichia coli

To further determine the roles of OcMT1 and OcMT2 in heavy metal detoxification, we constructed recombinant plasmids producing the OcMT1 and OcMT2 proteins in an E. coli expression system. The coding sequences of OcMT1 and OcMT2 were obtained by PCR amplification with primers (Table 1) containing BamHI and HindIII sites, and the products were then digested with these two restriction endonucleases. The resulting digests were ligated into the BamHI and HindIII sites of the expression vector pET-28a, in which OcMT was expressed under the T7 bacteriophage promoter. The recombinant plasmids, pET-28a-OcMT1 and pET-28a-OcMT2, were transformed into E. coli DH5α-competent cells and were then sequenced (Invitrogen China Limited, Shanghai, China). The transformation mixture was plated onto LB agar containing 100 μg mL−1 kanamycin. One positive clone for each cDNA was transformed into competent E. coli BL21 (DE3) cells for protein expression. The OcMT fusion protein was expressed in E. coli and induced with 1 mM isopropyl β-D-thiogalactoside (IPTG) for 6 h at 33 °C. A parent vector lacking an OcMT gene insert was used as a negative control. The cells were harvested by centrifugation at 12,000 g for 10 min.
The cells were then lysed by mild sonication at 4 °C and centrifuged at 12,000 g for 15 min, and the fusion proteins were isolated from the supernatants. The fractions in the crude BL21 (DE3) cell lysates harboring pET-28a-OcMT1 and pET-28a-OcMT2 were detected by 15% SDS-PAGE.

Evaluation of heavy metal tolerance using recombinant OcMTs

To further investigate the metal tolerance of the transformed E. coli BL21 (DE3) cells, 5 μL of each culture at OD600 = 0.6 was inoculated into new tubes containing 5 mL of liquid nutrient LB medium. The tubes were shaken at 37 °C until the OD600 measurements were between 0.50 and 0.55. The cells containing pET-28a-OcMT were divided into two groups: one group was induced with 1 mM IPTG, and the other was not. A parent vector (pET-28a) without inserts was used as a negative control.

Data analysis

The MT mRNA levels in the different tissues and at the various developmental stages, and the mortalities of the control and exposed groups, were analyzed using a one-way ANOVA followed by Tukey's HSD test. Differences were considered statistically significant at P < 0.05. All data are shown as the mean ± standard deviation, and the statistical analyses were performed using SPSS version 11.5 (SPSS Inc., Chicago, IL, USA).

Analysis of cDNAs and deduced amino acid sequences of OcMTs

Two full-length cDNA sequences were obtained from the O. chinensis transcriptome database, putatively encoding two different OcMTs, which were named OcMT1 and OcMT2 (GenBank accession numbers KJ153014 and KJ153015) (Fig. 1). The full-length cDNA sequence of OcMT1 was 552 base pairs (bp) long, with an open reading frame of 123 bp that encoded a 40-amino acid peptide with a predicted molecular weight (MW) of approximately 3.74 kDa. OcMT1 had a theoretical isoelectric point (pI) of 8.11 and one Cys-Cys and three Cys-X-Cys motifs (CCXXXXXCXXXXCKCXXXCTCTNCAC). The full-length cDNA sequence of OcMT2 was 363 bp, with an open reading frame (ORF) of 195 bp that encoded a 64-amino acid peptide. The predicted peptide molecular mass and isoelectric point (pI) were 6.92 kDa and 4.88, respectively, according to the ExPASy Proteomics website. OcMT2 contained two Cys-Cys, three Cys-X-Cys and three Cys-X-Y-Z-Cys motifs. One Cys-Cys and three Cys-X-Y-Z-Cys motifs (CCDVCXXXCKEEEKCXXXCKCXXXCK) were at the N terminus, and one Cys-Cys and three Cys-X-Cys motifs (CCQSGKEETKGSPCECKQGDDAPCVCPENSCKCE) were at the C terminus, a structure typical of MT proteins. The cysteine contents of the deduced OcMT1 and OcMT2 protein sequences were 22.5% and 25%, respectively (Fig. 2). The two OcMT sequences contained 9 and 16 cysteines, respectively, and all cysteine residues were in the characteristic Cys-Cys, Cys-X-Cys, Cys-X-Y-Cys or Cys-X-Y-Z-Cys configuration, similar to that observed in other MTs. The polyadenylation signal (AATAAA) was located upstream of the poly(A) tract. The deduced amino acid sequences of the MTs from O. chinensis were compared with other insect MTs using the GeneDoc FASTA sequence comparison program. As shown in Fig. 2, the two OcMTs shared low overall amino acid sequence similarities with other insect MTs but high identities at the conserved positions. Importantly, they were found to code for conserved cysteine residues and functional motifs, such as Cys-Cys, Cys-X-Cys and Cys-X-Y-Cys, that are found in other species.

Tissue expression patterns of OcMT1 and OcMT2

The RT-qPCR analysis indicated that OcMT1 mRNA was widely expressed in all tissues examined.
Its expression levels were highest in the optic lobes, which exhibited 2.5- and 3-fold higher levels compared with the fat bodies and brain, respectively. The OcMT1 transcript levels found in the optic lobe were 7- to 30-fold higher compared with all other tissues examined (Fig. 3). The highest OcMT2 expression levels were detected in the brain, and high levels were also found in the optic lobes; however, low levels were observed in the muscles, foregut, midgut, hindgut, gastric caeca, Malpighian tubules, fat bodies, testes and ovaries. The OcMT2 expression levels in the brain were approximately 5- to 350-fold higher compared with the other tissues.

Stage-dependent expression patterns of OcMT1 and OcMT2

The relative mRNA expression profiles of the OcMT1 and OcMT2 genes indicated that their expression levels varied significantly throughout the seven life stages (Fig. 4). The highest expression levels of OcMT1 were detected at the egg stage (3.5- to 8-fold higher than in the other stages), and the lowest levels were observed at the 1st-instar nymph stage. OcMT2 displayed the lowest expression levels at the 4th-instar nymph stage and the highest levels at the egg stage (1.5- to 7-fold higher than in the remaining stages).

Functional analysis of OcMT1 and OcMT2 by RNAi

After the injections of the dsRNAs, the OcMT1 and OcMT2 transcript levels in the whole bodies of the adults decreased by approximately 63.1% to 70.9% by 24 h and 48 h post-injection, but no significant differences were observed at 12 h (Fig. 5). As shown in Fig. 6, when OcMT1 was silenced at 48 h, the mortalities increased from 64% to 72.5% for CdCl2, from 72.3% to 83.7% for CuCl2, and from 69% to 79.5% for ZnSO4. Similarly, the mortalities of the grasshoppers increased from 80.5% to 97.2% for CdCl2, from 76.5% to 91.5% for CuCl2, and from 70.9% to 84.7% for ZnSO4 after OcMT2 was silenced. The mortalities of each group displayed dose-dependent increases of 8.5% to 11.4% and 13.8% to 16.7% after the silencing of OcMT1 and OcMT2, respectively.

Roles of OcMTs in heavy metal tolerance as determined using recombinant proteins

OcMT1 and OcMT2 were expressed successfully, as shown in Fig. 7, lanes 2 and 5. The theoretical molecular weights of OcMT1 and OcMT2 are 3.74 and 6.92 kDa, respectively. Our results indicated that the OD values of the BL21 (DE3) cells in the pET-28a-MT1-IPTG group were 1.37- to 2.82-fold higher than those of pET-28a-MT1 and pET-28a, and the OD values of the BL21 (DE3) cells in the pET-28a-MT2-IPTG group were 1.15- to 3.92-fold higher than those of pET-28a-MT2 and pET-28a (Fig. 8). The OD values of the BL21 (DE3) pET-28a-MT1/2 strains were higher than those of the pET-28a groups, which may have been due to leaky expression caused by the presence of beta-galactosidase in the liquid nutrient LB medium.

Discussion

There have been several reports of MTs in various species, including insects and other animals. The number of MTs varies among species. For example, Drosophila melanogaster has five MTs [21], but only a single MT has been identified in Orchesella cincta [15]. In the present study, two full-length MT cDNA sequences were obtained from the O. chinensis transcriptome database. These two MTs possessed different coding sequences, peptides, and cysteine contents.
In particular, the amino acid sequence of the OcMT1 protein was similar in length to those of the metallothioneins (MTA, MTB, MTC and MTD) in Drosophila, which vary from 40 aa to 44 aa [25] and are much shorter than the MTs of most other species, which range in size from 58 aa to 61 aa. Importantly, both the OcMT1 and OcMT2 transcripts code for most of the conserved cysteine residues and functional motifs (such as C-C, C-X-C and C-X-Y-C) that are typical of metallothionein structures. The conserved structural patterns are CCX(5)CX(4)CXCGASCXCTNCXCX(10) in OcMT1 and, in OcMT2, CCXXCKDTCKX(4)CGKQCKCPETCK at the N terminus and CCX(11)CECX(7)CVCX(4)CKC at the C terminus. The non-cysteine-rich spacer region between the two termini has been proposed to play important roles in the stabilization and subcellular localization of MTs [26]. A total of 16 cysteine residues were found along the entire OcMT2 sequence, and cysteine and lysine (Lys, K) were adjacent at four positions. Cys residues adjacent to Lys have been suggested to play roles in the structures and stabilities of the metal-binding sites of the protein [27]. These important structural characteristics suggest that OcMT1 and OcMT2 may be involved in heavy metal detoxification by capturing the metal within the tissues and that these residues may serve as primary chelating sites [28,29]. A previous study reported high MT protein levels in the nervous systems of grasshoppers (data not published). In this study, the OcMT mRNA levels were very highly expressed in the brain and optic lobe. This may suggest that the MTs in insects are the most responsive to harmful environmental stresses and are associated with neuroprotective mechanisms. There is no information available regarding the neuronal distribution of MTs in insects. Studies of MT expression in the nervous system have focused on humans, model animals and aquatic organisms. In mammals, three major MT isoforms are expressed widely throughout the adult central nervous system [30,31]. MTs have been consistently found to be upregulated in mammalian brains in which neuroinflammation and oxidative stress are present, for example, in cases of acute or chronic brain injury [32,33]. These studies concluded that MTs have important functions in the central nervous system and brain because MT-1 and MT-2 protect the central nervous system from damage induced by chemical and physical injuries [34,35]. We found that the two OcMT genes were widely expressed in the digestive tissues (FG, GC, MG and HG). These results are consistent with previous findings. In other species, such as D. melanogaster and Callinectes sapidus, MTs are expressed principally in the larval midgut [14,36].

Figure 1. Nucleotide and deduced amino acid sequences of OcMT1 and OcMT2 cDNAs from Oxya chinensis. The deduced amino acid sequence is shown below the nucleotide sequence. Blue letters indicate the start codon (ATG), and an asterisk (*) indicates the stop codon. The putative polyadenylation signal sequence (AATAAA) is underlined. The numbers on the right refer to the amino acid residues. The cysteines (C) are highlighted in red. The deduced amino acid sequences of OcMT1 and OcMT2 are shown, with the cysteine residues arranged as C-C, C-X-C and C-X-X-C motifs, in which X can be any amino acid other than cysteine. doi:10.1371/journal.pone.0112759.g001
Durliat et al. (1995) and Hensbergen et al. (2000) reported that the gut is the main organ for MT expression in both D. melanogaster and Orchesella cincta [37,38], because the gut plays key roles in food absorption, water uptake and waste expulsion. Similarly, metals and other exogenous chemicals can enter the body through the digestive tract during the ingestion of food and water [39]. Surprisingly, MT expression levels were relatively lower in the gut than in the nervous system (brain and optic lobe) in O. chinensis. Differing patterns of MT expression may occur according to the particular insect species, stage, habitat conditions and dietary habits. Although no studies have focused on MT expression in the insect nervous system, it is possible that these proteins are highly expressed in the nervous systems of other insects. In this study, the MTs were widely expressed throughout the digestive system and highly expressed in the nervous system. Thus, the digestive system was an important region involved in heavy metal detoxification. The expression levels of OcMT1 and OcMT2 in the fat bodies were higher than in the other tissues, with the exception of the brain and optic lobe. These findings suggest that OcMT1 and OcMT2 can detoxify exogenous chemicals. The higher expression levels of OcMT2 in the ovaries suggest that this MT may be related to the protection of O. chinensis reproduction from metal toxicity or oxidative stress [13]. Similar results have been reported in several studies of crabs and rats [40,41]. Therefore, we propose that these two MTs may play important roles in the detoxification of exogenous chemicals. Generally, MTs appear to act as multifunctional stress proteins in higher eukaryotes [42]. MTs are stress proteins that bind metals and regulate the homeostasis of essential trace metals, counteracting the toxic effects of heavy metals such as Cd, Hg and Ag in insects [43]. The expression patterns of the two OcMT genes were evaluated at all developmental stages. The highest expression levels were found in the eggs, which cannot be considered the first target of heavy metals. These results suggest that high MT mRNA expression may be associated with the oxidative stress response. MTs may also be sensitive to perturbations of the homeostasis of essential metals, such as Cu and Zn, during embryonic development [44]. MTs may also act in the regulation of redox buffering, because redox gradients are important during embryonic development [45]. In aquatic organisms, the early embryo-larval stages appear to be highly sensitive to micropollutants, particularly metals [46]. MTs likely remove O2•− and •OH simultaneously, especially considering that MTs react with hydroxyl radicals (•OH) approximately 10,000 times faster than superoxide dismutase (SOD) in aquatic invertebrates [47,48]. In mammals, cells containing increased MT levels are protected against heavy metal toxicity and oxidative stress, whereas the decreased expression of MTs in cell lines or in MT-null mice has been shown to lead to heightened sensitivity to metal balance disorders and oxidative stress [47,49]. MTs have high affinities for metals and may play special roles in the regulation of cellular metal distribution [50]; thus, they play key roles in the oxidative stress response and metal homeostasis. Further research is needed to fully elucidate the underlying mechanisms.
MTs are known to play physiological roles in essential metal chelation, metal homeostasis, heavy metal detoxification [7,51], the alleviation of several types of abiotic stress [52,53], developmental regulation [54], and the scavenging of reactive oxygen species (ROS) [55]. Studies have demonstrated that MT concentrations increase in O. chinensis when the grasshoppers ingest wheat leaves containing heavy metals. However, the crucial roles of MTs in insects have not been properly elucidated due to the difficulties of purifying the native proteins and cloning the MT sequences [56]. In this study, we evaluated the roles of the OcMTs using RNAi and recombinant proteins in an E. coli expression system. RNAi is a meaningful tool in functional genetic analyses. We used this technique to achieve a high degree of silencing of the OcMT genes by injecting the dsRNAs into the adult hemocoels. Grasshopper mortality increased after the silencing of OcMT1 and OcMT2. These findings suggest that both genes play important roles in the detoxification of the three metals by chelation, which occurs through their Cys-X-Cys and Cys-X-X-Cys motifs. MTs have been considered by most authors working in the MT field to be involved primarily in the detoxification of non-essential and excess essential metals, and these functions have been observed in species ranging from fungi to mammals [57]. For example, Lumbricus rubellus, Jatropha curcas, and Perinereis nuntia all express distinct MT isoforms that have analogous structure-function relationships for metal binding [58-60]. Furthermore, MTs can act as scavengers of the free radicals produced by heavy metal stresses [61,62] and have been reported to be capable of scavenging free oxygen radicals in transgenic mice and in plants [63,64]. The expression patterns of the recombinant OcMTs in this study suggested that the cells transformed with the recombinant plasmids had higher Cd tolerances, which may have been due to the chelation of Cd and/or the scavenging by the OcMTs of the free radicals produced by Cd. An increased tolerance to Cd, Zn and Cu has been confirmed in transgenic yeast expressing ThMT3 from Tamarix hispida [65]. Enhanced tolerance to cadmium in a recombinant strain expressing an MT has also been demonstrated in Musca domestica [66]. A similar study has been performed with the biofuel plant Jatropha curcas [59]. In summary, we have described two MT genes of O. chinensis and analyzed their molecular characteristics and expression patterns. The functions of these two MT genes were investigated using RNAi, and the changes in Cd tolerance conferred by overexpression of these two MT proteins were analyzed using a recombinant expression system. Our results provide novel insights into the tissue localization of MTs in grasshoppers, which were predominantly expressed in the brain and optic lobe and at the egg stage. Further studies of the regulatory roles of the OcMTs in the nervous system (brain and optic lobes) are currently underway. However, additional studies are needed to better understand these results. The roles of the OcMTs in cellular metal detoxification will be investigated in our next study.

Figure 8. Bacterial growth curves of E. coli cells transformed with pET-28a, pET-28a-OcMT and pET-28a-OcMT-IPTG. pET-28a is an "empty" vector; the pET-28a-OcMT group was transformed with the OcMT1 or OcMT2 gene without IPTG; the pET-28a-OcMT-IPTG group was transformed with the OcMT1 or OcMT2 gene with IPTG.
Five microliters of CdCl2 was added to the medium when the bacteria had grown to OD600 = 0.6. All bacteria were grown for 11 h. The CdCl2 concentration gradient was 0, 0.82, 1.74 and 3.27 mM. doi:10.1371/journal.pone.0112759.g008
An Empirical Study on Refactoring-Inducing Pull Requests

Background: Pull-based development has shaped the practice of Modern Code Review (MCR), in which reviewers can contribute code improvements, such as refactorings, through comments and commits in Pull Requests (PRs). Past MCR studies treat all PRs uniformly, regardless of whether they induce refactoring or not. We define a PR as refactoring-inducing when refactoring edits are performed after the initial commit(s), either as a result of discussion among reviewers or as spontaneous actions carried out by the PR developer. Aims: This mixed (quantitative and qualitative) study explores code reviewing-related aspects with the aim of characterizing refactoring-inducing PRs. Method: We hypothesize that refactoring-inducing PRs have distinct characteristics from non-refactoring-inducing ones and thus deserve special attention and treatment from researchers, practitioners, and tool builders. To investigate our hypothesis, we mined a sample of 1,845 of Apache's merged PRs from GitHub, mined refactoring edits in these PRs, and ran a comparative study between refactoring-inducing and non-refactoring-inducing PRs. We also manually examined 2,096 review comments and 1,891 detected refactorings from 228 refactoring-inducing PRs. Results: We found that 30.2% of the PRs in our sample were refactoring-inducing and that they differ significantly from non-refactoring-inducing ones in terms of number of commits, code churn, number of file changes, number of review comments, length of discussion, and time to merge. However, we found no statistical evidence that the number of reviewers is related to refactoring-inducement. Our qualitative analysis revealed that at least one refactoring edit was induced by review in 133 (58.3%) of the refactoring-inducing PRs examined. Conclusions: Our findings suggest directions for researchers, practitioners, and tool builders to improve practices around pull-based code review.

INTRODUCTION

In Modern Code Review (MCR), developers review code changes in a lightweight, tool-assisted, and asynchronous manner [18]. In this context, regular change-based reviewing, in which code improvements are embraced, became an essential practice in the MCR scenario [18,66]. Code changes may comprise new features, bug fixes, or other maintenance tasks, providing potential opportunities for refactorings [60], which in turn form a significant part of the changes [19,75]. Empirical evidence suggests a distinction between refactoring-dominant changes and other types. For instance, reviewing bug fixes is more time-consuming than reviewing refactorings, since the latter preserve code behavior [69]. Given that the nature of changes significantly affects code review effectiveness [63], as it directly influences how reviewers perceive the changes, the provision of suitable resources for assisting code review is essential. Characterization studies of MCR have been conducted to investigate technical aspects of reviewing [20,24,41,66-68,71], factors leading to useful code review [25], circumstances that contribute to code review quality [45], and general code review patterns in pull-based development [49]. Those studies are relevant because MCR is critical in repository-based software development, especially in Agile software development, driven by change and collaboration [1]. In practice, Git Pull Requests (PRs) are relevant to MCR as they promote well-defined and collaborative reviewing.
Through PRs, the code is subject to a review process in which reviewers may suggest improvements before merging the code into the main branch of a repository [29]. Such improvements may take the form of refactorings, resulting from discussions between the PR author and reviewers on code quality issues, including spontaneous actions of the PR author aiming to refine the originally submitted solution. We hypothesize that PRs that induce refactoring edits have different characteristics from those that do not, as refactoring may involve design and API changes that require more extensive effort, discussion, and knowledge of the project. It is worth clarifying that this study sheds light on refactorings induced by code review (Section 4), aiming to provide an initial understanding of how review discussions induce such edits.

Motivation: By distinguishing refactoring-inducing from non-refactoring-inducing PRs, we can potentially advance the understanding of code reviewing at the PR level and assist researchers, practitioners, and tool builders in this context. No prior MCR studies made a distinction between refactoring-inducing and non-refactoring-inducing PRs when analyzing their research questions, which might have affected their findings or discussions. For instance, by also regarding refactoring-inducing PRs, Gousios et al. [37] and Kononenko et al. [46] could have found different factors influencing the time to merge a PR; Li et al. [49] could have included refactoring concerns in the multilevel taxonomy for review comments in the pull-based development model; Pascarella et al. [62] could have identified further information needed to perform a proper code review in the presence of refactorings; Paixão et al. [17] could have complemented the study on the reasons for refactorings during code review when analyzing projects in Gerrit; whereas Pantiuchina et al. [61] could have reached different conclusions on the motivations for refactorings in PRs, since they analyzed PRs in which refactorings were detected even in the initial commit (i.e., these refactorings were not induced by reviewer discussions). In practice, being unaware of the characteristics of refactoring-inducing PRs, practitioners might miss opportunities to better manage their resources, and tool builders might miss opportunities to assist developers in PRs. Moreover, a refactoring-aware notification system could help in allocating reviewers with more knowledge of the design of the refactored code when a PR becomes refactoring-inducing, as design changes caused by refactoring need to be more extensively discussed and agreed upon.

Definition 1.1. A PR is refactoring-inducing if refactoring edits are performed in subsequent commits after the initial PR commit(s), as a result of the reviewing process or of spontaneous improvements by the PR contributor. Let R = {r1, r2, ..., rw} be a set of repositories in GitHub. Each repository rq, 1 ≤ q ≤ w, has a set of pull requests P(rq) = {p1, p2, ..., pm} over time. Each pull request pj, 1 ≤ j ≤ m, has a set of commits C(pj) = {c1, c2, ..., cn}, in which I(pj) is the set of initial commits included in the PR when it is created, I(pj) ⊆ C(pj). A refactoring-inducing pull request is one in which ∃ ck | F(ck) ≠ ∅, where F(ck) denotes the set of refactorings performed in commit ck and |I(pj)| < k ≤ n.
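A minimal sketch of Definition 1.1 as an executable predicate (the data model and names are illustrative, assuming the refactorings per commit come from a detector such as RefactoringMiner):

```python
# Illustrative sketch of Definition 1.1: a PR is refactoring-inducing iff
# some commit after the initial ones carries at least one refactoring edit.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Commit:
    sha: str
    refactorings: List[str] = field(default_factory=list)  # detected edits

@dataclass
class PullRequest:
    commits: List[Commit]   # chronologically ordered
    n_initial: int          # |I(p_j)|: commits present when the PR was opened

def is_refactoring_inducing(pr: PullRequest) -> bool:
    # True iff F(c_k) is non-empty for some k > |I(p_j)|
    return any(c.refactorings for c in pr.commits[pr.n_initial:])

# Example mirroring Figure 1: c1-c3 initial, refactorings in c5, c7, c8
pr = PullRequest(commits=[Commit(f"c{i}") for i in range(1, 10)], n_initial=3)
pr.commits[4].refactorings = ["Extract Method"]                       # c5
pr.commits[6].refactorings = ["Rename Class", "Rename Class",
                              "Change Variable Type"] * 1             # c7
pr.commits[7].refactorings = ["Move Method"]                          # c8
assert is_refactoring_inducing(pr)
```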
To clarify our definition, Figure 1 depicts a refactoring-inducing PR consisting of three initial commits ( 1 − 3 ) and six subsequent commits ( 4 − 9 ), three of which include refactoring edits ( 5 , 7 , 8 ), e.g., commit 7 has two Rename Class and three Change Variable Type refactoring instances. Our study explores differences/similarities between PRs based on the refactorings performed in PR commits subsequent to the initial ones ( 4 − 9 ). We propose an investigation at the PR level because we understand it as a complete scenario for exploring code reviewing practices in a well-defined scope of development, which allows us to go beyond an investigation at the commit level. For instance, we can obtain a global comprehension of contributions to the original code, in terms of both commits and reviewing-related aspects (e.g., reviewers' comments). Our conception is mainly inspired by empirical evidence showing that pull-based development is associated with larger numbers of contributions [81], and that PR discussions lead to additional refactorings [61]. To guide our investigation, we designed the following research questions: • RQ 1 : How common are refactoring-inducing PRs? • RQ 2 : How do refactoring-inducing PRs compare to non-refactoringinducing ones? • RQ 3 : Are refactoring edits induced by code reviews? We mined merged PRs from Apache's Java repositories in GitHub, and we used state-of-the-art tools and techniques, such as Refactor-ingMiner [11] and Association Rule Learning (ARL) [23] to answer the first two questions. RefactoringMiner is currently considered the state-of-the-art refactoring detection tool (precision of 97.96% and recall of 87.2% [78], whereas ARL can discover non-obvious relationships between variables in large datasets [12]. We used Refac-toringMiner to detect refactorings in a sample of 1,845 merged PRs. Then, we performed ARL on two groups (refactoring-inducing and non-refactoring-inducing PRs), and formulated eight (8) hypotheses on differences between refactoring-inducing and non-refactoringinducing PRs by manually exploring 562 association rules discovered by ARL. We found that refactoring-inducing PRs significantly differ from non-refactoring-inducing ones in terms of number of subsequent commits, code churn, number of file changes, number of review comments, length of discussion, and time to merge; however, we found no statistical evidence that the number of reviewers is related to refactoring-inducement. In order to address the third research question, we carried out a manual investigation of 2,096 review comments cross-referenced to 1,891 detected refactorings from 228 refactoring-inducing PRs -a stratified sample from our original sample (by considering a confidence level of 95% and a margin of error of 5%). We found 133 refactoring-inducing PRs (58.3%) in which at least one refactoring edit was induced by review comments. Contributions: (1) To the best of our knowledge, this is the first study investigating aspects related to refactoring and code review in the context of refactoring-inducing PRs (Def. 1.1). (2) We investigate PRs merged by merge pull request and squash and merge options. We tried to avoid either PRs merged by rebase and merge or merged PRs that suffered rebasing, intending to minimize threats to validity (Section 4.1). To deal with squashed commits, we implemented a script that recovers them (git squash converts all commits in a PR into a single commit). 
(3) We performed a manual analysis of refactoring-inducement by exploring more than 2,000 review comments. (4) We made available a complete reproduction kit [10], including the mined dataset and implemented scripts, to enable replications and future research.

BACKGROUND

2.1 Refactoring and Modern Code Review

As software evolves to meet new requirements, its code becomes more complex. Throughout this process, design and quality deserve attention [44]. To that end, code restructurings, coined refactorings by Opdyke and Johnson [57], are performed to improve the design quality of object-oriented software while preserving its external behavior, and they should be performed in a structured manner [33,56]. Developers can recover those restructurings through refactoring detection tools, which automatically identify the refactoring types applied to the code, assisting tasks such as studies on code evolution [60] and MCR [14,35]. MCR consists of a lightweight code review (in opposition to the formal code inspections specified by Fagan [32]): tool-assisted, asynchronous, and driven by reviewing code changes, submitted by a developer (author) and manually examined by one or more other developers (reviewers) [18].

2.2 Git-Based Development and Pull Requests

Git-based collaborative development as implemented in GitHub [8] has seen fast growth in the number of developers (more than 56 million) [4]. Each Git repository maintains a full history of changes [29], structured as a linked list of commits, in turn organized into multiple lines of development (branches). A PR is a commonly used way of submitting contributions to collaboration-based projects [9]. After forking a Git branch, a developer can implement changes and open a PR to submit them for reviewing in line with the MCR process. Next, reviewers can submit comments based on a diff output that highlights the changes, whereas the author and other contributors can answer the reviewers' comments. After the reviewing, there are three options for merging:

• Merge pull request merges the PR commits into a merge commit and adds them to the main branch, chronologically ordered, as depicted in Figure 2. Note that the arrows indicate a commit's parent, and the before and after markers indicate the commits searchable in the PR, respectively, before and after merging;
• Squash and merge squashes the PR commits into a single commit and merges it into the main branch (Figure 3); and
• Rebase and merge re-writes all commits from one branch onto another, updating their SHAs, in a manner such that unwanted history can be discarded, as illustrated in Figure 4. In this case, commits 0be3d3f and 66f02d3 received review comments, but they are not accessible via the PR. Hence, it is mandatory to recover the original commits when investigating reviewing-related aspects. Nonetheless, such a recovery is not trivial [42].

2.3 Association Rule Learning

Support indicates the number of transactions that support an AR, expressing its statistical significance. Interestingness measures can determine the strength of an AR. Confidence means how likely {X} and {Y} are to occur together. Lift reveals how X and Y are related to one another (a lift of 1 denotes no association, < 1 indicates a negative co-occurrence of the antecedent and consequent, and > 1 expresses that the two occurrences are dependent on one another and the ARs are useful) [36]. Conviction is a measure of implication, ranging in the interval [0, ∞].
A conviction of 1 denotes that the antecedent and consequent are unrelated, while ∞ expresses a logical implication, where confidence is 1 [26]. ARL usually follows this workflow: feature selection, feature engineering (applying an encoding technique, such as one-hot encoding, which uses a group of bits to represent mutually exclusive features [80]), algorithm choice and execution, and result interpretation (assisted by interestingness measures) [79].

MOTIVATING EXAMPLE

This study has evolved from the results of preliminary investigations on refactorings and code reviews, carried out to get a better understanding of the topic and to plan the research design. As a motivating example, we describe a case history in which we explored refactoring-inducement and code review aspects. We randomly selected 24 PRs from Apache's drill repository. Then, we ran RefactoringMiner and obtained 11 (45.8%) refactoring-inducing PRs. We compared refactoring-inducing and non-refactoring-inducing PRs concerning code churn (number of changed lines) and discussion length (i.e., review and non-review comments). As a result, we identified that the refactoring-inducing PRs presented a higher code churn and discussion length than the non-refactoring-inducing PRs. Note that we took into account one measure from each context under investigation: changes (code churn) and code review (length of discussion), besides the number of refactoring edits. We manually analysed the refactoring-inducing PRs by contrasting the descriptions of the refactorings detected by RefactoringMiner against review comments. Our strategy of analysis consisted of reading comments and searching for keywords (e.g., "refac", "mov", "extract", and "renam"), as sketched after this section's research questions. We observed refactorings directly induced by review comments in four refactoring-inducing PRs. To exemplify, in PR #1762¹, the review comment "Lot of code here and in DefaultMemoryAllocationUtilities are duplicate. May be create a separate MemoryAllocationUtilities to keep the common code..." motivated one Extract Superclass and four Pull Up Method refactorings. In a nutshell, those results provided insights on the pertinence of (i) exploring technical aspects of changes, code review, and refactorings at the PR level, since we perceived differences between refactoring-inducing and non-refactoring-inducing PRs in terms of code churn and length of discussion; (ii) considering refactorings as part of contributions to code improvement during code review; and (iii) investigating, quantitatively and qualitatively, technical aspects in light of the refactoring-inducing PR definition.

STUDY DESIGN

The main goal of this study is to investigate code reviewing-related data to characterize refactoring-inducing PRs in Apache's repositories hosted in GitHub, from the reviewers' perspective. Thus, we formulated these research questions:

• RQ1: How common are refactoring-inducing PRs? We first explored the presence of PRs that meet our refactoring-inducing PR definition (Def. 1.1).
• RQ2: How do refactoring-inducing PRs compare to non-refactoring-inducing ones? We quantitatively investigated code reviewing-related aspects aiming to find similarities/differences in PRs based on the refactorings performed.
• RQ3: Is refactoring induced by code reviews? We qualitatively scrutinized a stratified sample of refactoring-inducing PRs to validate the occurrence of refactoring edits induced by code reviewing, by manually examining review comments and discussions.
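A small sketch of the keyword-based screening mentioned in the motivating example, assuming review comments are available as plain strings (the field names and sample comments are illustrative):

```python
# Illustrative sketch: flag review comments that mention refactoring keywords.
# This is a coarse screen; matches are then examined manually, as in the study.
import re

KEYWORDS = ("refac", "mov", "extract", "renam")
PATTERN = re.compile("|".join(KEYWORDS), flags=re.IGNORECASE)

def flag_refactoring_related(comments):
    # Return the comments that contain at least one refactoring keyword
    return [c for c in comments if PATTERN.search(c)]

comments = [
    "Lot of code here and in DefaultMemoryAllocationUtilities are duplicate.",
    "May be create a separate MemoryAllocationUtilities to keep the common code.",
    "LGTM, just rename this variable for clarity.",
]
print(flag_refactoring_related(comments))  # flags the rename suggestion
```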
Accordingly, supported by guidelines [70], we designed an empirical study that comprises five steps, as shown in Figure 5 and described in the next subsections. We also made publicly available a reproduction kit containing the mined datasets and the scripts developed for replicating the results for our research questions [10].

¹ Apache drill PR #1762, available at https://git.io/JczHh.

4.1 Mining Merged Pull Requests

We mined merged PRs from Apache's repositories at GitHub. We focused on merged PRs because they reveal actions that were in fact finalized; therefore, we can get a more in-depth understanding of refactoring-inducement. We chose GitHub due to its popularity [4] and to the mining resources available through extensive APIs: the GitHub REST API v3 [7] and the GitHub GraphQL API v4 [6]. The Apache Software Foundation (ASF) manages more than 350 open-source projects, with more than 8,000 contributors from all over the world; all of its projects migrated to GitHub in February 2019 [2]. Given Apache's popularity and the relevance of its contributions in the open-source software development context, we selected it for mining PRs [5]. The refactoring mining tool we selected (Section 4.2) only supports projects developed in Java, so we considered Java projects (almost 57% of Apache's code is developed in Java). In August 2019, we searched Apache's non-archived Java repositories in GitHub (to take into account only actively maintained repositories), resulting in 65,006 merged PRs, detected in 467 out of 956 repositories; we then implemented a script to mine their merged PRs. We obtained two datasets: the pull requests dataset consists of 48,338 merged PRs (merge PR option) from 453 distinct repositories, while the commits dataset contains 53,915 recovered commits from 16,668 merged PRs (squash and merge or rebase and merge options) from 255 repositories. We then recovered the commit history of squashed and merged PRs before any exploration of their original commits, assisted by the HeadRefForcePushedEvent object accessible via the GitHub GraphQL API [6]. To clarify, consider Apache's PR 1807 (Figure 6), which originally had 12 commits (c1−c12) that were squashed into a single commit (the after commit, ca) following a force-push event. Consequently, only one commit (ca) may be gathered from the PR. Our recovery strategy follows two steps: (1) we recover the before and after commits of the force-push, cb and ca, through the HeadRefForcePushedEvent object; and (2) we rebuild the original commits' history by tracking the commits from cb, which has the same value as c12, until reaching the same SHA as ca's parent, using the compare operation available in the GitHub REST API v3 [7]. We executed Step 1 of the strategy to gather the after and before commits from 65,006 pull requests, obtaining 53,915 commits after running Step 2. We discarded PRs merged via the rebase and merge option since, in rebasing, some commits within the PR may be due to external changes (outside the scope of the code review sequence), conveying a threat to validity, as argued in [59]. Accordingly, we considered the number of HeadRefForcePushedEvent events and PR commits to identify PRs merged by squash and merge. Specifically, PRs merged by merge pull request and squash and merge present zero and one HeadRefForcePushedEvent events, respectively (squashed and merged PRs keep one PR commit). Moreover, we dropped all PRs containing at least one subsequent commit with two parents, because such commits may represent external changes rebased onto a branch, as depicted in Figure 7.
Note that, since commit ee88dea has two parents, it integrates external changes, which were not reviewed at PR reviewing time.

4.2 Refactoring Detection

RefactoringMiner detects refactorings in Java projects, presenting better results when compared to its competitors (precision of 99.6% and recall of 94%) [77,78]. We considered version 2.0, which supports over 40 different refactoring types, including low-level refactorings, such as variable renames and extractions, allowing us to work with a more comprehensive list of refactoring edits. For these reasons, we selected it for refactoring detection (Step 2). In essence, it identifies the refactorings performed in a commit in relation to its parent commit, displaying a description of the applied refactorings (type and associated targets, e.g., the methods and classes involved in an Extract and Move Method refactoring). In this step, we considered only merged PRs containing two or more commits (sample 1, Figure 5), intending to conform to our refactoring-inducing PR definition. After three weeks of running RefactoringMiner, we obtained a random sample of 225,127 detected refactorings in 8,761 merged PRs (13.5% of the total number of Apache's merged PRs) from 209 distinct repositories, embracing 68,209 commits. The source of randomness lies in the order in which the repositories were processed. At that point, we checked the commits' authored dates against the PRs' opening dates in order to identify the initial and subsequent commits of the sample's PRs. Therefore, the number of refactorings of a PR takes into account only subsequent commits.

4.3 Mining Code Review Data

Empirical studies have investigated code review efficiency and effectiveness to understand the practice, elaborate recommendations, and develop improvements. Together, these studies share a set of useful code review aspects for further investigation, such as change description [25,75], code churn [76], length of discussion [45,54,66,76], number of changed files [25,45], number of commits [54,67], number of people in the discussion [45], number of resubmissions [45,66], number of review comments [21,54,66], number of reviewers [66,72], size of change [20,45,66], and time to merge [37,41]. Therefore, the mining of raw code review data (Step 3) consisted of collecting the code reviewing-related attributes listed in Table 1, considering the 8,761 PRs from Step 2 (sample 2, Figure 5). The attributes number, title, labels, and repository's name are useful to uniquely identify a PR. We clarify that we do not count the distinct files changed (i.e., the set of changed files), but the number of times the files changed (i.e., the list of file changes) over subsequent PR commits. Hence, the numbers of added lines and deleted lines denote the number of lines modified across file changes. For mining, we imposed one precondition: only merged PRs comprising at least one review comment should be mined, aiming to explore refactoring-inducement and to collect review comments for further investigation. Thus, the mining generated two datasets, the code review dataset and the review comments dataset, refined according to this precondition.

4.4 Association Rule Learning

Aiming to explore what differentiates refactoring-inducing PRs from non-refactoring-inducing ones, we executed ARL (Step 4). Such a strategy assists exploratory analysis by identifying natural structures derived from the relationships between the characteristics of the data [28].
Accordingly, by applying ARL to refactoring-inducing and non-refactoring-inducing PRs, we can identify ARs that support the formulation of more accurate hypotheses concerning differences and similarities between those two groups. One may argue that clustering is a better alternative than ARL to find groups of PRs with distinct characteristics. Nonetheless, we experimentally performed clustering on our sample of PRs, after conducting a rigorous selection of the clustering algorithm and input parameters, but we found a high noise ratio (76.3%).

Selection of features. We selected all features that can be represented as a number regarding changes, code review, and refactorings from the code review dataset (Step 3). We considered a three-context perspective (changes, code review, and refactorings) because together they might potentially support the identification of differences between refactoring-inducing and non-refactoring-inducing PRs. These are the selected features: number of subsequent commits, number of file changes, number of added lines, number of deleted lines, number of reviewers, number of review comments, length of discussion, time to merge, and number of detected refactorings. Note that the length of discussion and the time to merge are derived as review comments + non-review comments and as merge date − creation date (in number of days), respectively. One may argue that other features could also be considered; however, the PR title, for instance, is written in natural language, so it is subject to ambiguity and does not translate directly into a numeric feature.

Feature engineering. We applied one-hot encoding based on the quartiles of the features, resulting in the binning presented in Table 2. We chose this technique due to its simplicity and its linear time and space complexities [80]. We did not discard the outliers because, in the context of this study, they do not represent experimental errors; thus, they can potentially indicate circumstances for further examination. Consequently, the very high category (fourth quartile) includes the outliers.

Selection and execution of an algorithm. We selected the FP-Growth algorithm due to its performance [39]. Then, we developed a script for the ARL using the FP-growth implementation available in the mlxtend.frequent_patterns module [64]. We set the minimum support threshold to 0.1 to avoid discarding likely ARs for further analysis [30]. Aiming to get meaningful ARs, we considered minimum thresholds of confidence ≥ 0.5, lift > 1, and conviction > 1. We performed a prior experiment concerning the values of minimum support and minimum confidence, taking the thresholds considered in [12] as a reference (support of 0.01, confidence of 0.5). We ran FP-growth considering support values ranging from 0.01 to 0.1 in steps of 0.01, with confidence 0.5 (Table 3). In all these settings, we found ARs that cover all input features. Since support is a statistical significance measure, we adopted the last setting (minimum support of 0.1, confidence of 0.5) for the FP-growth execution. A lift threshold > 1 reveals useful ARs [22], while a conviction threshold > 1 denotes ARs with logical implications [26].

Interpretation of results. We considered the feature levels (none, low, medium, high, and very high), instead of absolute values, as items for composing ARs, aiming to identify relative associations between the two groups under investigation, e.g., {high number of added lines} → {high number of reviewers}. The ARs work as a basis for the formulation of hypotheses regarding the characterization of our sample's PRs.
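As a concrete illustration of the binning and mining steps just described, here is a minimal sketch using pandas and mlxtend; the feature values are made up, and the thresholds follow the ones reported above.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Made-up feature values; in the study these come from the code review dataset.
df = pd.DataFrame({"added_lines": [5, 120, 800, 40],
                   "reviewers": [1, 2, 4, 3],
                   "refactorings": [0, 3, 25, 1]})

# Quartile-based binning followed by one-hot encoding (cf. Table 2).
levels = ["low", "medium", "high", "very high"]
binned = df.apply(lambda col: pd.qcut(col, q=4, labels=levels))
onehot = pd.get_dummies(binned).astype(bool)

# FP-growth with the thresholds reported above.
itemsets = fpgrowth(onehot, min_support=0.1, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.5)
rules = rules[(rules["lift"] > 1) & (rules["conviction"] > 1)]
print(rules[["antecedents", "consequents", "support", "confidence", "conviction"]])
```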
Based on these ARs, we carried out the following procedure: (1) manual examination of the ARs to recognize potential differences/similarities that support the formulation of hypotheses; (2) analysis of the pairwise ARs, the ARs containing the number of refactorings as an item, and the ARs whose conviction is infinite, to assist the rationale for the formulation of hypotheses; and (3) formulation of hypotheses to quantitatively investigate the differences between refactoring-inducing and non-refactoring-inducing PRs.

Data Analysis

4.5.1 Quantitative data analysis. We analyzed the output of Step 3 by exploring the detected refactorings by PR to answer RQ1. The number of refactorings was computed by considering the edits detected in the PR's subsequent commit(s). As a complement, we computed a 95% confidence interval for the proportion of refactoring-inducing PRs among Apache's merged PRs by performing bootstrap resampling [31]. We applied statistical hypothesis testing to answer RQ2. That analysis encompassed the testing of eight hypotheses formulated from the analysis of the ARL output (Step 4), driven by a comparison between refactoring-inducing and non-refactoring-inducing PRs. We executed each hypothesis test according to the following workflow, guided by [27]:

(1) Definition of the null and alternative hypotheses.

(2) Performing of the statistical test: (a) checking for data normality using the Shapiro-Wilk test; (b) checking for homogeneity of variances via Levene's test; (c) computation of a confidence interval for the difference in mean or median, according to the output of steps a and b; (d) performing either the parametric independent t-test with Cohen's d, or the non-parametric Mann-Whitney U test with the Common-Language Effect Size (CLES), in line with the output of steps a and b. We considered a significance level of 5% and a substantive significance (effect size) denoting the magnitude of the differences between refactoring-inducing and non-refactoring-inducing PRs at the population level. We first checked the assumptions for parametric statistical tests (steps a and b), since the independence assumption is already met (a PR either is refactoring-inducing or is not). For exploring the difference between the two groups, we computed a 95% confidence interval by bootstrap resampling of the mean or median, according to the output of steps a and b (step c). Then, we conducted the proper statistical test and calculated the effect size (step d). CLES is the probability, at the population level, that a randomly selected observation from one sample is greater than a randomly selected observation from the other sample [53].

(3) Deciding whether the null hypothesis is supported or refuted.

4.5.2 Qualitative data analysis. In order to answer RQ3, three developers (to mitigate researcher bias) manually examined review discussions and validated the detected refactorings in a subset of the refactoring-inducing PRs of our sample. We adopted stratified random sampling to select refactoring-inducing PRs for an in-depth investigation of their review comments and discussion while cross-referencing their detected refactoring edits. Moreover, we validated these refactorings by checking for false positives. As a whole, the qualitative analysis lasted 30 days. We chose that sampling strategy because it provides a means to sample non-overlapping subgroups based on specific characteristics [52] (e.g., the number of refactorings), where each subgroup (stratum) can be sampled using another sampling method, a setting that fits our further investigation of categories of refactoring-inducing PRs containing a low, medium, high, and very high number of refactorings (Table 2).
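As an illustration of the quantitative workflow of Section 4.5.1, the following is a minimal sketch of the test-selection logic with SciPy; the two arrays are made-up stand-ins for one feature measured over refactoring-inducing and non-refactoring-inducing PRs, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ri = rng.exponential(300.0, 557)       # made-up feature values, refactoring-inducing PRs
non_ri = rng.exponential(120.0, 1288)  # made-up feature values, non-refactoring-inducing PRs

# (a) Shapiro-Wilk normality check, (b) Levene's homogeneity of variances
_, p_sw1 = stats.shapiro(ri)
_, p_sw2 = stats.shapiro(non_ri)
_, p_lev = stats.levene(ri, non_ri)

# (d) parametric t-test with Cohen's d, or Mann-Whitney U with CLES
if min(p_sw1, p_sw2) > 0.05 and p_lev > 0.05:
    _, p = stats.ttest_ind(ri, non_ri)
    pooled_sd = np.sqrt((ri.var(ddof=1) + non_ri.var(ddof=1)) / 2)
    effect = (ri.mean() - non_ri.mean()) / pooled_sd   # Cohen's d
else:
    u_stat, p = stats.mannwhitneyu(ri, non_ri, alternative="two-sided")
    effect = u_stat / (len(ri) * len(non_ri))          # CLES: P(X > Y)

print(f"p-value = {p:.3g}, effect size = {effect:.3f}")
```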
To define the sample size, we considered a confidence level of 95% and a margin of error of 5%, obtaining a sample of 228 PRs, i.e., 57 refactoring-inducing PRs randomly selected from each category. We split the sample into four categories based on the number of refactorings in order to check whether the effect of code review refactoring requests/inducement differs between PRs with massive refactoring efforts and PRs with small, focused refactoring efforts. In the analysis, we first conducted a calibration in which one of the analysts followed up ten analyses performed by the others. Next, the analysts separately examined 40.3%, 38.2%, and 21.5% of the data. In this subjective decision-making, we considered refactoring-inducement in settings where review comments either explicitly suggested refactoring edits (e.g., "How about renaming to ...?") or left an actionable recommendation that induced a refactoring (e.g., "avoid multiple booleans" induced a Merge Parameter instance).

How Common are Refactoring-Inducing Pull Requests?

We found 557 refactoring-inducing PRs (30.2% of our sample's PRs), totaling 12,547 detected refactoring edits. As shown in Figure 8a, the histogram of refactoring edits is positively skewed and presents outliers; thus, a low number of refactoring edits is quite frequent. The number of refactorings per PR is 11.8 on average (SD = 32.3), with a median of 3 (IQR = 6), according to Figure 8b. Using bootstrap resampling and a 95% confidence level, we obtained a confidence interval ranging from 28.1% to 32.3% for the proportion of refactoring-inducing PRs among Apache's merged PRs. These results reveal significant refactoring activity induced in PRs. This is a motivating result, and the presence of outliers can indicate scenarios that are scientifically relevant for further exploration.

How Do Refactoring-Inducing Pull Requests Compare to Non-Refactoring-Inducing Ones?

From ARL, we obtained 562 ARs (146 from refactoring-inducing PRs and 416 from non-refactoring-inducing PRs). Then, we manually inspected them, searching for pairwise ARs (AR1-AR7), ARs whose conviction is infinite (AR5, AR6), and the remaining ARs (AR2, AR3, AR4). Accordingly, we selected four ARs (AR1-AR4) obtained from refactoring-inducing PRs and three ARs (AR5-AR7) from non-refactoring-inducing PRs, all catalogued in Table 4 in decreasing order of conviction. Since we did not identify the same pairs of ARs in both groups, we needed to consider a distinct number of ARs (hence, itemsets) for comparison purposes when addressing all features. Afterwards, we carried out an analysis of those ARs. We formulated eight hypotheses on the differences/similarities between refactoring-inducing and non-refactoring-inducing PRs, discussed as follows. Table 5 shows the average, standard deviation (SD), median, and interquartile range (IQR) of the examined features for refactoring-inducing and non-refactoring-inducing PRs.

Refactoring-inducing PRs are more likely to have more added lines than non-refactoring-inducing PRs (AR2/AR3, AR5). Refactoring-inducing PRs are more likely to have more deleted lines than non-refactoring-inducing PRs (AR2/AR3, AR5).
This is an expected result in light of the findings of Hegedüs et al., since refactored code has significantly higher size-related metrics [40]. We speculate that reviewing larger code churn may potentially promote refactorings. This understanding is supported by Rigby et al., who observed that the magnitude of the code churn influences code reviewing [67,68], and by Beller et al., who discovered that the larger the churn, the more changes could follow [21].

Refactoring-inducing PRs are more likely to have more file changes than non-refactoring-inducing PRs (AR2/AR3, AR5). We conjecture that reviewing code spread across files may motivate refactorings, an argument supported by Beller et al., who observed that more file changes comprise more changes during code review [21]. By observing change-related aspects (churn and file changes), our findings confirm previous conclusions on the influence of the amount and magnitude of changes on code review [20,45,67,68]. When analyzing the changes and refactorings together, our findings reinforce prior conclusions that refactored code presents significantly higher size-related metrics (e.g., number of code lines and file changes) [40] and that larger changes promote refactorings [58].

Refactoring-inducing PRs are more likely to have more subsequent commits than non-refactoring-inducing PRs (AR2/AR3, AR5). Based on our previous findings on the magnitude of code churn and file changes, this result is expected and aligned with Beller et al. concerning the impact of larger code churn and widespread changes across files on consequent changes [21]. Accordingly, we speculate that reviewing refactoring-inducing PRs might require more subsequent changes, in turn denoted by more subsequent commits in comparison with non-refactoring-inducing PRs.

Beller et al. found that most changes during code review are driven by review comments [21], and Pantiuchina et al. discovered that almost 35% of refactoring edits are motivated by discussion among developers in OSS projects at GitHub [61]. Thus, we conjecture that, besides change-related aspects, GitHub's PR model constitutes a peculiar structure for code review in which review comments influence the occurrence of refactorings, therefore explaining our result. This argument originates from the fact that a pull-based collaboration workflow provides reviewing resources [9] (e.g., a proper code reviewing UI) for developers to improve/fix the code while having access to the history of commits and discussion. Our finding also provides insight for the examination of review comments to get an in-depth understanding of refactoring-inducement.

A more in-depth analysis could tell how profound these lengthier discussions are, although a higher number of comments might represent developers concerned with the code, willing then to extend their collaboration to the suggestion of refactorings. Previous findings may support those claims: Lee and Cole, when studying the Linux kernel development, acknowledged that the amount of discussion is a quality indicator [48]. Also, empirical evidence reports on the impact of the number of comments on changes [21,61].

Refactoring-inducing and non-refactoring-inducing PRs both present a median of two reviewers, the same result found by Rigby et al. [65] in the OSS scenario. There are outliers that, in turn, could be explained by other technical factors, such as the complexity of changes, as argued in [66]; however, our study does not address that scope.
H8. Refactoring-inducing PRs are more likely to take a longer time to merge than non-refactoring-inducing PRs (AR4, AR6). We recognize the influence of refactorings on the time to merge, concluding that the time for reviewing and the time for performing refactoring edits both impact the time to merge. In particular, this conclusion is aligned with Szoke et al., who observed a correlation between implementing refactorings and time [74], and with Gousios et al., who found that review comments and discussion affect the time to merge a PR [37].

Is Refactoring Induced by Code Reviews?

To study this research question, we sampled 228 refactoring-inducing PRs, 57 PRs from each of the Low, Medium, High, and Very High categories, encompassing one, two to three, four to seven, and eight to 321 refactoring edits, respectively. By examining 2,096 review comments and 1,207 discussion comments in the sampled PRs, we found 133 PRs (58.3%) in which at least one refactoring edit was induced by review comments. Such PRs comprise 815 subsequent commits and 1,891 detected refactorings, 545 of which were induced by review comments. Finally, we found that Rename (35.8%) (with readability being a common motivation cited by reviewers) and Change Type (30.3%) are the operations most induced by review in our stratified sample.

Finding 9: In a stratified sample of 228 refactoring-inducing PRs, 133 (58.3%) presented at least one refactoring edit induced by code review.

Implications

Researchers: All our findings, except for Finding 7, indicate that refactoring-inducing and non-refactoring-inducing PRs have different characteristics. Therefore, we recommend that future experimental designs on MCR with PRs make a distinction between refactoring-inducing and non-refactoring-inducing PRs, or consider their different characteristics when sampling PRs. Researchers can also use our mined data, developed tools, and research methods to investigate code reviewing in pull-based development.

Practitioners: Our findings indicate that there is no statistical difference in the number of reviewers between refactoring-inducing and non-refactoring-inducing PRs (Finding 7). However, all other findings show that refactoring-inducing PRs are associated with more churn (Finding 2), more file changes (Finding 3), more subsequent commits (Finding 4), more review comments (Finding 5), lengthier discussions (Finding 6), and more time to merge (Finding 8) than non-refactoring-inducing PRs. Thus, we suggest that PR managers invite more reviewers when a PR becomes refactoring-inducing, to share the expected increase in review workload and, perhaps more importantly, to share the knowledge of the design changes caused by subsequent refactorings with more team members.

Tool builders: In connection with our implication for practitioners, tool builders can develop bots [47,73] that recommend reviewers based on some criteria [55] when a PR becomes refactoring-inducing, to assist PR managers in inviting additional reviewers. Our findings indicate that refactoring-inducing PRs have higher complexity in churn (Finding 2) and file changes (Finding 3). Therefore, it is necessary to help developers distinguish refactoring edits from non-refactoring edits directly in the GitHub or Gerrit review board, where the reviews actually take place. In the past, researchers implemented refactoring-awareness in the code diff mechanism of IDEs [13,34,35].
Even though not directly related to our results, we believe that adding refactoring-awareness directly to the GitHub or Gerrit review board, such as the refactoring-aware commit review Chrome browser extension [51], would allow reviewers to trace the refactorings performed throughout the commits of a PR, provide prompt feedback, and concentrate efforts on other aspects of the changes, such as the collateral effects of refactorings and the proposal of specific tests. This recommendation is in agreement with Gousios et al. [38], who emphasized the need for untangling code changes and supporting change impact analysis directly in the PR interface.

THREATS TO VALIDITY

We elaborated our study design after conducting two case studies to better understand GitHub's PRs and the procedures of data mining and refactoring detection. We carefully defined workflows for our research design procedures to explain all decisions taken, and we systematically structured all procedures aiming at replicability. We performed a rigorous selection of the ARL algorithm and input parameters. To mitigate researcher bias, our qualitative analysis was performed by three analysts. Despite our efforts to perform an initial calibration, there may be limitations concerning the conclusions, since the analysts worked separately. Notwithstanding the establishment of a chain of evidence for the data interpretation and the description of the decisions taken in the study design, we did not validate the detected refactorings before data analysis, which poses a potential threat to construct validity (RQ1 and RQ2). To mitigate this issue, we selected RefactoringMiner, a state-of-the-art refactoring detection tool [78]. When addressing RQ3, we validated all detected refactorings in our stratified sample. Aiming to mitigate the risk related to rebasing constraints in our sample, we excluded the PRs merged with the rebase and merge option and the PRs including intermediate merge commits. Even so, there are still threats due to other, previously unidentified, forms of rebasing. Furthermore, as already acknowledged in the refactoring-inducing PR definition, we cannot claim that all refactoring edits were caused by reviewing. To deal with this limitation, we carried out a qualitative analysis of review comments from 228 randomly selected refactoring-inducing PRs, considering a sample size meeting a confidence level of 95% and a margin of error of 5%. Thus, this empirical study provides a particular motivation for further qualitative investigation of review comments to acquire in-depth knowledge of the influence of reviewing on refactoring-inducing PRs. It is not suitable to generalize the conclusions, except to other OSS projects that follow a geographically distributed development and are aligned with "the Apache way" principles [3]. Thus, our findings extend only to cases that share common characteristics with Apache's projects.

RELATED WORK

By exploring the motivations and challenges of MCR, Bacchelli and Bird identified code improvements as one of the objectives of reviewing [18], a finding confirmed by subsequent studies on convergent practices of code review by Rigby and Bird [66], Beller et al. [21], and MacLeod et al. [50]. Those findings support us in exploring refactorings as a relevant contribution of code reviewing.
The analysis of the technical aspects of code reviewing has been the focus of several empirical studies, in which a few measures have been considered: the number of reviewers by Jiang et al. [43], the review comments by Rigby and Bird [66] and by Beller et al. [21], the time to merge by Izquierdo-Cortazar [41], and the size of change by Baysal et al. [20]. They provided the first insights into the code reviewing aspects investigated in our study. Other studies explored the factors influencing code review quality. Bosu et al. discovered that the properties of changes affect the usefulness of review comments [25]. Moreover, Kononenko et al. carried out an analysis of how developers perceive code review quality [45] and found that the thoroughness of feedback is the main factor influencing code review quality. Those results corroborate the findings on the technical aspects empirically studied in [20,21,43,66], thus constituting an enriched set of technical aspects for investigation. By analyzing Gerrit reviews, Paixão et al. found that refactoring motivations may emerge from code review and influence the composition of edits and the number of reviews [17]. These findings inspired us to expand the knowledge of code review aspects in GitHub refactoring-inducing PRs. Pantiuchina et al. analyzed the discussion and commits of merged PRs containing at least one refactoring in one of their commits and found that most refactorings are triggered either by the original intents of PRs or by discussion [61]. Those findings are motivating since they indicate the influence that review, at the PR level, has on refactorings. Our study differs from those previous ones because we distinguished refactoring-inducing PRs from non-refactoring-inducing PRs by exploring both reviewing-related aspects and refactoring-inducement.

CONCLUDING REMARKS

We investigated the technical aspects characterizing refactoring-inducing PRs based on data mined from GitHub and refactorings detected by RefactoringMiner. Our results reveal significant differences between refactoring-inducing and non-refactoring-inducing PRs, and a substantial number of refactoring edits induced by code reviewing. As future work, we suggest (i) a further investigation of review comments aiming to identify patterns/practices that could indicate refactoring-inducement as a contribution of the code review process to the code submitted within PRs; and (ii) an exploration of the human aspects of reviewers, aiming to enhance the understanding of refactoring-inducement at the PR level. Replications are also highly welcome, since they can support the elaboration of a theory on refactoring-inducing PRs.
Solvent-Free Procedure to Prepare Ionic Liquid-Immobilized Gel Polymer Electrolytes Containing Li0.33La0.56TiO3 with High Performance for Lithium-Ion Batteries

Based on the advantages of intrinsic safety, flexibility, and good interfacial contact with electrodes, a gel polymer electrolyte (GPE) is a promising electrolyte for lithium-ion batteries compared with the conventional liquid electrolyte. However, unstable electrochemical performance and the liquid state at the microscale limit the commercial application of GPEs. Herein, we developed a novel gel polymer electrolyte for lithium-ion batteries by blending methyl methacrylate (MMA), N-butyl-N-methylpyrrolidinium bis((trifluoromethyl)sulfonyl)imide (Pyr14TFSI), and lithium salts in a solvent-free procedure, with SiO2 and Li0.33La0.56TiO3 (LLTO) additives. The prepared MMA-Pyr14TFSI-3 wt % LLTO electrolyte shows the best electrochemical performance and reaches a high ionic conductivity of 4.51 × 10−3 S cm−1 at 60 °C. Notably, the electrochemical window is stable up to 5.0 V vs Li+/Li. Moreover, batteries with this GPE also show excellent electrochemical performance. In the LiFePO4/MMA-Pyr14TFSI-3 wt % LLTO/Li cell, a high initial discharge capacity of 150 mA h g−1 was achieved at 0.5C with a Coulombic efficiency over 99%, maintaining a good capacity retention of 90.7% after 100 cycles at 0.5C and 60 °C. In addition, the physical properties of the GPE have been investigated by scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and thermogravimetry (TG).

INTRODUCTION

With the excessive consumption of fossil fuels, the demand for high-energy-density storage is urgent. Owing to their high operating voltage, lack of memory effect, and excellent stability, lithium-ion batteries (LIBs) have been widely applied in portable electronic devices and electric vehicles during the past decades and play a dominant role in the electronics market. 1−3 However, the growth of lithium dendrites and the flammability of the organic electrolyte have raised concerns about the safety of electric vehicles using LIBs, 4−6 as illustrated by the well-known explosions and fires of a Samsung mobile phone and a Tesla car caused by unsafe lithium-ion batteries. 7−9 Therefore, to solve these troublesome safety issues, it is necessary to develop a novel electrolyte system to replace the conventional organic liquid electrolyte, and a solid electrolyte could be a desirable alternative. 10,11 Generally, solid-state LIBs show better safety because a solid-state electrolyte intrinsically avoids the flammability, leakage, and short-circuiting of the battery caused by a lithium dendrite piercing the separator, in contrast to a conventional liquid electrolyte. 12−14 Unfortunately, solid electrolytes typically exhibit a low ionic conductivity of about 10−7−10−5 S cm−1, which cannot meet the demands of practical application. A gel polymer electrolyte (GPE), consisting of a polymer matrix, lithium salts, and a plasticizer, shows a high ionic conductivity of 10−3 S cm−1 at ambient temperature, good compatibility between the electrolyte and the electrodes, as well as a long service life. GPEs combine the advantages of both solid and liquid electrolytes and possess high ionic conductivity and good mechanical properties. 15−18
However, GPEs are usually prepared by immersing an electrolyte membrane in an organic liquid electrolyte or by compounding organic liquids with the polymer matrix, so a large amount of organic liquid still remains in the electrolyte; such GPEs therefore do not yet satisfy the safety standards. Potential flammability and explosion risks persist, and the organic solvents are also harmful to the environment. In addition, an increase in the ionic conductivity of a GPE is usually accompanied by a decrease in mechanical performance. In brief, an ideal gel polymer electrolyte for LIBs should combine high ionic conductivity, sufficient safety, and good electrochemical performance. 19−23 Various strategies have been implemented to improve these properties; the three main ones are polymer modification (copolymerization, blending, cross-linking, and so forth), the addition of ceramic fillers, and plasticizer additives. 24−27 The introduction of ceramic fillers into the polymer matrix not only improves the ionic conductivity but also strengthens the mechanical properties of the GPE. Nanosized metal oxides such as Al2O3, SiO2, and ZrO2 are generally used as ceramic particles in polymer electrolytes; they reduce the crystallization of the polymer matrix and thereby enhance the ionic conductivity of the polymer electrolyte. 28−30 Pradeepa and co-workers 31 fabricated a composite polymer electrolyte based on poly(vinyl chloride)/poly(ethyl methacrylate) (PVC/PEMA) with TiO2 ceramic fillers, and the results show that the ionic conductivity and thermal stability of the polymer electrolyte were greatly improved by the addition of the metal oxide fillers. Besides oxide fillers, active conducting fillers like LLZO and LLTO have recently become prevailing additives for improving the electrochemical performance and ionic conductivity of electrolytes. Zhu and co-workers 32 developed a PEO-based composite polymer electrolyte filled with Li0.33La0.557TiO3 nanofibres, which exhibited a high conductivity of 2.4 × 10−4 S cm−1 and a wide electrochemical window of 5 V vs Li+/Li. 33 Furthermore, Cha and co-workers 34 synthesized a stable composite solid polymer electrolyte composed of a PEO matrix, a PEGDME plasticizer, and Li7La3Zr2O12, which exhibits the highest ionic conductivity of 4.7 × 10−4 S cm−1 at 60 °C. 35 Room-temperature ionic liquids (RTILs) have attracted increasing attention due to their high ionic conductivity and nonflammability and could be the preferred plasticizer in polymer electrolytes for LIBs. 36 Anuar and co-workers 37 prepared an electrolyte based on PEMA-NH4SO3CF3 with butyl-trimethyl ammonium bis(trifluoromethylsulfonyl)imide ionic liquid (BMATFSI). The electrolyte shows a relatively enhanced ionic conductivity of 10−4 S cm−1, demonstrating that the introduction of an ionic liquid can increase the ionic conductivity of the electrolyte. Poly(methyl methacrylate) (PMMA) has been widely used as the polymer matrix of electrolytes for LIBs because of its environmental friendliness, good processability, low cost, and excellent electrochemical stability. 38 In addition, the liquid polymer monomer methyl methacrylate (MMA) readily dissolves the lithium salts and other fillers in the electrolyte.
Unfortunately, the practical application of PMMA has been limited by its low thermal stability and low ionic conductivity. In this work, we develop a gel polymer electrolyte by dissolving lithium salts in MMA and N-butyl-N-methylpyrrolidinium bis((trifluoromethyl)sulfonyl)imide ionic liquid (Pyr14TFSI), with SiO2 and LLTO as fillers and AIBN as the polymerization initiator. The electrolyte was assembled in a LiFePO4/GPE/Li cell, and the cell was then kept in a blast air oven at 60 °C for 5 h to polymerize the electrolyte fully. Given the nonflammability, electrochemical stability, and high ionic conductivity of ionic liquids, they are a good choice for preparing an ionic liquid-immobilized GPE. The LLTO powders and SiO2 are added to enhance the electrochemical performance and physical properties of the GPE by lowering the crystalline phase and promoting lithium-ion migration. The whole preparation of the GPE is carried out under solvent-free conditions, and the absence of organic solvents not only enhances the safety performance but is also environmentally friendly. The electrochemical performance (ionic conductivity, electrochemical window, and cycling performance) and the physical properties (morphology, thermal stability, and crystalline structure) of the GPE were studied, and the results are reported below.

EXPERIMENTAL SECTION

2.1. Materials. Methyl methacrylate (MMA) was purchased from Sigma-Aldrich to prepare the gel polymer electrolyte. Azobisisobutyronitrile (AIBN) from Sigma-Aldrich was used to initiate the polymerization of MMA. N-Butyl-N-methylpyrrolidinium bis((trifluoromethyl)sulfonyl)imide (Pyr14TFSI) was purchased from the Lanzhou Institute of Chemical Physics (Lanzhou, China). LiNO3 (>99.99%), La(NO3)3·6H2O (>99.99%), and Ti(OC4H9)4 (>99.0%) were all provided by Aladdin, and all of the chemicals were used without further purification. In addition, the bis(trifluoromethane)sulfonimide lithium salt (LiTFSI) and nanosilica from Aladdin were dried at 80 °C under vacuum for 24 h. The LLTO powders were synthesized by blending and calcination as reported in the previous literature. 39 As shown in Figure 1, first, LiNO3 and La(NO3)3·6H2O were dissolved in ethanol at a mass ratio of 1:6 by vigorous stirring for 0.5 h to obtain a homogeneous, transparent solution. Meanwhile, acetylacetone and Ti(OC4H9)4 were mixed at a fixed mass ratio of 3:1 by magnetic stirring for 0.5 h to give a homogeneous khaki-colored liquid. Then, the above solutions were mixed together by intense magnetic stirring for 2 h, yielding a light khaki homogeneous solution. Second, after drying at 70 °C in an air-dry oven for about one week followed by drying at 80 °C under vacuum for 24 h, the uniform solution became a khaki xerogel. Finally, the white LLTO powders were obtained by calcination (holding at 900 °C for 2 h) and ball milling, and then kept in a glovebox for the next experiment.

2.2. Preparation of the MMA/Pyr14TFSI/LLTO Gel Polymer Electrolyte. Because the electrolyte is easily oxidized in air, all of the experiments were carried out in a glovebox (H2O < 0.1 ppm, O2 < 0.1 ppm). First, MMA and Pyr14TFSI were mixed at a molar ratio of 0.8:1, and a transparent solution was obtained by magnetic stirring for a few minutes. Second, a certain amount of SiO2 (10 wt %) and LLTO (1, 3, or 5 wt %) powders were successively added into the above solution.
Subsequently, a uniform solution was obtained by vigorous magnetic stirring for 0.5 h at 50 °C. Finally, AIBN was added into the above solution at a weight ratio of 1:10 with respect to the weight of MMA, producing a homogeneous stiff solution after another 0.2 h of magnetic stirring at 50 °C. Consequently, the gel polymer electrolyte was successfully obtained. The whole preparation process of the gel polymer electrolyte is shown in Figure 2.

2.3. Assembly of the Solid-State Cell. The fabrication of the CR2032 coin cells was carried out in a Mikrouna glovebox, using LiFePO4 as the cathode, a Li foil as the counter electrode, and the prepared gel polymer electrolyte. The cells were subsequently heated in a drum wind dryer at 60 °C for 5 h to obtain a completely polymerized electrolyte. The final solid-state cells were kept in a glovebox for 2 h before testing, and cells with the electrolyte without LLTO were assembled as a control.

2.4. Characterization. Scanning electron microscopy (SEM; Hitachi SU8220) was conducted to characterize the surface morphology of the electrolyte samples. The crystallization and structure of the samples were monitored by X-ray diffraction (XRD; Bruker D8 Advance diffractometer, Germany) with Cu Kα radiation from 10 to 90°. A Fourier transform infrared (FTIR) spectrometer (Bruker VERTEX 33, Germany) was used to investigate the composition of the samples in the range from 4500 to 500 cm−1. The thermal properties of the gel polymer electrolytes were studied by thermogravimetry (Netzsch 209F3) from 30 to 600 °C in a nitrogen atmosphere at a linear heating rate of 10 °C min−1. Tensile tests were carried out using a universal testing machine (SHIMADZU, EZ-LX) at a cross-head speed of 12 mm min−1. The electrochemical performance of the prepared batteries was tested using a Neware CT-4008 battery test system (Shenzhen, China) and a Gamry electrochemical workstation. Electrochemical impedance spectroscopy (EIS) was used to evaluate the ionic conductivity of the gel polymer electrolytes with different amounts of LLTO powders in CR2032 coin cells, which were assembled by sandwiching the gel polymer electrolyte between two stainless steel electrodes with a diameter of 14 mm. The EIS measurements were performed in a frequency range from 0.1 Hz to 100 kHz with an applied voltage of 5 mV at a temperature of 60 °C. The ionic conductivity (σ) was calculated from eq 1,

σ = D/(R·S)    (1)

where D (cm) is the thickness of the gel polymer electrolyte, R (Ω) is the bulk resistance value from the EIS measurements, and S is the surface area of the electrode. Linear sweep voltammetry (LSV) on coin cells consisting of Li/GPE/stainless steel was used to study the electrochemical stability window of the gel polymer electrolytes filled with different contents of LLTO over a potential range of 2−8 V at a scan rate of 0.5 mV s−1 at ambient temperature. Galvanostatic charge/discharge stations were used to obtain the cycling and rate performance of the LiFePO4/GPE/Li cells tested between 1.0 and 4.8 V at various current densities.

RESULTS AND DISCUSSION

3.1. Characterization of the Gel Polymer Electrolyte. The morphology of the electrolyte film and the LLTO powders was characterized by SEM. As shown in Figure 3, the electrolyte film shows a dense and nonporous surface with some white powder (LLTO) attached.
From Figure 3a−c, the LLTO powders at different contents are uniformly dispersed in the electrolyte film, and their amount in the film increases with increasing LLTO content; by contrast, the electrolyte in Figure 3d shows a dense and nonporous film without white powders on it. Figure 3e,f displays the morphology of the pristine LLTO and the ball-milled LLTO. The LLTO exhibits a lump morphology before ball milling (Figure 3e), whereas microsphere LLTO can be seen after ball milling (Figure 3f).

XRD characterization was carried out to investigate the crystal structure and chemical stability of the electrolyte samples. As shown in Figure 4a, the curves of the MMA-Pyr14TFSI-x wt % LLTO polymer electrolytes show several diffraction peaks at 2θ values of 32, 40, and 58°, corresponding to the typical diffraction peaks of LLTO, indicating that LLTO has been successfully added into the gel polymer electrolyte and maintains its pristine structure well. Moreover, with increasing amounts of LLTO in the electrolyte, the diffraction peaks of the MMA-Pyr14TFSI-x wt % LLTO electrolyte (x = 1, 3, 5) show increasing intensity and no significant shift. Furthermore, the MMA-Pyr14TFSI gel polymer electrolyte shows a weak and broad diffraction peak at a 2θ value of 19°, which can be attributed to the crystalline phase of PMMA. 40 Compared with the curve of MMA-Pyr14TFSI, this peak at 19° almost disappears in the curves of the MMA-Pyr14TFSI-x wt % LLTO electrolytes (x = 1, 3, 5), so the addition of LLTO decreases the crystallinity of the gel polymer electrolyte, indicating more amorphous regions and an increased ionic conductivity of the electrolyte.

To further investigate the functional groups in the MMA-Pyr14TFSI-x wt % LLTO electrolytes (x = 0, 1, 3, 5), FTIR spectrum analysis was performed. As shown in Figure 4b, the absorption peaks around 1020, 1200, and 1350 cm−1 can be assigned to the stretching vibrations of C−O, C−F, and Si−O−Si, respectively. 41 The characteristic peak of C−H is confirmed at 2950 cm−1, and the characteristic La−O group appears at 2800 cm−1 due to the addition of LLTO, which gives rise to a new absorption peak whose intensity increases with the amount of LLTO.

Thermal stability is one of the most important properties of a GPE, as it can affect the electrochemical performance in LIBs. The TG and differential thermogravimetry (DTG) curves are shown in Figure 5a,b, respectively. The degradation of MMA-Pyr14TFSI-x wt % LLTO generally starts at about 360 °C; the weight loss increases with increasing temperature, and the loss rate reaches its highest value at almost 450 °C. In contrast, the degradation of MMA-Pyr14TFSI starts at about 100 °C, and its weight loss rate reaches its maximum at about 460 °C. The residual weight of MMA-Pyr14TFSI-x wt % LLTO was about 42% at 450 °C, whereas under the same conditions the residual weight of MMA-Pyr14TFSI was about 35%, significantly lower. This confirms that MMA-Pyr14TFSI-x wt % LLTO shows better thermal stability than the pristine MMA-Pyr14TFSI, indicating that the addition of LLTO enhances the thermal performance of the gel polymer electrolyte. The tensile strength has been tested, and the results are displayed in Figure 6.
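Before turning to the EIS results, here is a minimal sketch of how the ionic conductivity of eq 1 could be computed from a measured bulk resistance. The membrane thickness and resistance values below are hypothetical placeholders, not the measured data; only the 14 mm electrode diameter comes from the text.

```python
import math

def ionic_conductivity(thickness_cm: float, bulk_resistance_ohm: float,
                       electrode_diameter_cm: float) -> float:
    """sigma = D / (R * S), eq 1: D thickness, R bulk resistance, S electrode area."""
    area = math.pi * (electrode_diameter_cm / 2) ** 2  # 14 mm stainless steel electrode
    return thickness_cm / (bulk_resistance_ohm * area)

# Hypothetical example: a 100 um thick membrane with a 1.5 ohm bulk resistance.
sigma = ionic_conductivity(thickness_cm=0.010, bulk_resistance_ohm=1.5,
                           electrode_diameter_cm=1.4)
print(f"sigma = {sigma:.2e} S/cm")  # ~4.3e-3 S/cm, the order of magnitude reported at 60 C
```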
3.2.1. Ionic Conductivity. Figure 7a displays the EIS spectra of the different gel polymer electrolytes. The curves of the MMA-Pyr14TFSI-x wt % LLTO electrolytes (x = 1, 3, 5) show a lower bulk resistance than the curve of the pristine MMA-Pyr14TFSI electrolyte, so the ionic conductivity of the MMA-Pyr14TFSI-x wt % LLTO electrolytes is much higher than that of the MMA-Pyr14TFSI electrolyte, indicating that the addition of LLTO is beneficial to the ionic conductivity. Table 1 lists the ionic conductivities of the gel polymer electrolytes with different contents of LLTO; all of the tests were performed at 60 °C. The MMA-Pyr14TFSI-3 wt % LLTO electrolyte shows a high ionic conductivity of 4.51 × 10−3 S cm−1, higher than those of the MMA-Pyr14TFSI-1 wt % LLTO electrolyte (4.06 × 10−3 S cm−1) and the MMA-Pyr14TFSI-5 wt % LLTO electrolyte (8.1 × 10−4 S cm−1). The results show that 3 wt % LLTO is the optimal ratio for ionic conductivity in the gel polymer electrolyte, which can be ascribed to the fact that a proper amount of LLTO increases the segmental motion of the gel polymer electrolyte, whereas an excess of it hinders the transport and motion of ions in the electrolyte. Consequently, 3 wt % LLTO is the proper ratio for obtaining the highest ionic conductivity of the gel polymer electrolyte.

3.2.2. Electrochemical Stability of the Gel Polymer Electrolyte. The electrochemical stability of the gel polymer electrolyte determines the operating voltage range of LIBs; therefore, it is essential to widen the electrochemical stability window of the electrolyte. To evaluate the electrochemical stability of the different gel polymer electrolytes, linear sweep voltammetry (LSV) was conducted on Li/GPE/SS cells at ambient temperature. As shown in Figure 7b, the MMA-Pyr14TFSI-x wt % LLTO electrolytes (x = 1, 3, 5) exhibit wide electrochemical stability windows up to almost 4.8, 5.0, and 4.5 V, respectively, demonstrating that the gel polymer electrolytes with LLTO show good electrochemical stability. By contrast, the LSV curve of the MMA-Pyr14TFSI electrolyte shows a sudden increase at 4.5 V, indicating poor electrochemical stability, which means that the addition of LLTO enhances the electrochemical stability of gel polymer electrolytes. In addition, the MMA-Pyr14TFSI-3 wt % LLTO electrolyte shows the widest electrochemical window, up to 5.0 V vs Li+/Li, among all of the electrolyte samples, further indicating that 3 wt % LLTO is the most suitable ratio in gel polymer electrolytes. Furthermore, the cyclic voltammetry curves of the GPEs, recorded at a scan rate of 0.3 mV s−1, are shown in Figure 7c. A pair of cathodic and anodic peaks can be noticed for each gel polymer electrolyte at around 3 V, corresponding to the reduction and oxidation of lithium. Moreover, the gel polymer electrolytes with LLTO are more stable than the one without LLTO, and all of the gel polymer electrolyte samples show cathodic behavior at 0 V without a lithium deposition process.

3.2.3. Electrochemical Analysis of the Gel Polymer Electrolyte at Different Temperatures. The temperature-dependent ionic conductivity of the gel polymer electrolytes has been measured via electrochemical impedance spectroscopy (EIS) at different temperatures, as shown in Figure 7d.
The conductivity of the MMA-Pyr14TFSI-3 wt % LLTO electrolyte is 5.16 × 10−3 S cm−1 at 70 °C and 4.53 × 10−3 S cm−1 at 60 °C, much higher than the conductivity at ambient temperature (8.42 × 10−4 S cm−1). Obviously, the ionic conductivity increases with temperature, which can be ascribed to the gradually decreasing bulk resistance and a reduced crystalline zone at higher temperature, indicating that high temperature contributes to the migration of Li+ and the movement of the chain segments of the gel polymer electrolyte. Moreover, the MMA-Pyr14TFSI-3 wt % LLTO and MMA-Pyr14TFSI-1 wt % LLTO electrolytes exhibit higher ionic conductivity at all temperatures than the other gel polymer electrolytes, which may be attributed to the fact that the addition of a moderate amount of LLTO not only reinforces the lamellar structure but also increases the segmental motion of the gel polymer electrolyte.

3.2.4. Cycle Performance. Figure 8a shows typical discharge plots of the Li/MMA-Pyr14TFSI-3 wt % LLTO/LiFePO4 cell over different cycles at a rate of 0.5C at 60 °C. In the first cycle, the battery shows a high discharge capacity (148 mA h g−1), and after 100 cycles the capacity is still maintained at 135 mA h g−1, a high capacity retention of 91.22%. Figure 8b exhibits the corresponding discharge−charge curves of the batteries with different amounts of LLTO at 60 °C. The discharge specific capacities of the LiFePO4/GPE/Li cells were 95, 109, 126, and 136 mA h g−1 at 0.5C, respectively, and all of the batteries exhibit a well-defined and stable charge−discharge plateau, consistent with the CV results (Figure 7c). The corresponding cycling performance of the MMA-Pyr14TFSI-3 wt % LLTO electrolyte in a LiFePO4/GPE/Li cell is shown in Figure 8c. After the first seven cycles, the Coulombic efficiency is over 99%, and the capacity retention is maintained at 93.5% after 100 cycles, indicating an outstanding cycling performance superior to other gel polymer electrolytes reported for lithium-ion batteries; 42 the reason can be ascribed to the network of the gel polymer electrolyte reinforced by the added LLTO powders and SiO2 particles. 43 Figure 8d shows the charge and discharge curves of the MMA-Pyr14TFSI-3 wt % LLTO electrolyte in the LiFePO4/GPE/Li cell at different rates. With increasing current density, the discharge capacity decreases gradually owing to electrochemical polarization. The cell shows high capacities (152 and 138 mA h g−1) at 0.1C and 0.2C, respectively. Moreover, the cell achieves a high discharge capacity (90 mA h g−1) even at a high cycling rate (2C), demonstrating an excellent rate performance.
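As a quick check of the retention figure quoted above, capacity retention is a simple ratio of cycled capacities; the sketch below reproduces the first-cycle/100th-cycle arithmetic using the capacities stated in the text.

```python
def capacity_retention(initial_mAh_g: float, final_mAh_g: float) -> float:
    """Capacity retention in percent after cycling."""
    return 100 * final_mAh_g / initial_mAh_g

# Values quoted in the text: 148 mA h/g in the first cycle, 135 mA h/g after 100 cycles.
print(f"retention = {capacity_retention(148, 135):.2f}%")  # ~91.22%, as reported
```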
In addition, 3 wt % LLTO has been demonstrated to be the optimal ratio in the electrolyte: the GPE shows the highest conductivity of 4.54 × 10−3 S cm−1 at 60 °C and a wide electrochemical window up to 5.0 V vs Li+/Li.

This manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.

Notes

The authors declare no competing financial interest.
Description of All Conformally Invariant Differential Operators Acting on Scalar Functions

We give an algorithm to write down all conformally invariant differential operators acting between scalar functions on Minkowski space. All operators of order k are nonlinear and are functions of a finite family of functionally independent invariant operators of order up to k. There are three independent differential operators of second order, and we give an explicit realization of them. The technique applied is based on the jet bundle formalism, algebraization of differential operators, group action, and dimensional reduction. As an illustration of this method we consider the simpler case of differential operators between analytic functions invariant under the modular group. We give a power series generating explicitly all the functionally independent invariant operators of arbitrary order.

Introduction

It is well known that the Maxwell equations are conformally invariant. This motivates a permanent interest in studying conformal classes of metrics and conformally invariant operators and structures. The Maxwell differential operator is linear, and so it is a splitting operator between two (infinite-dimensional) linear representations of the conformal group. There are many papers in the literature treating splitting operators between different representations of the conformal group; these operators generalize the Maxwell operator with respect to this invariance property. Those considerations are based on the study of the description and structure of the linear representations and their subrepresentations. The approach applied in the present work is different. It is based on the jet bundle technique [3], [5], [6]. The jet lifting of functions (or sections) plays the role of a universal differential operator. Differential operators are viewed as the composition of a jet lifting and a fibre-preserving map (a morphism in the case of linear operators) between vector bundles. This algebraizes the differential operators, after which invariance means invariance of these maps (morphisms). In the next step, the techniques of group action and dimensional reduction are used to describe the invariant fibre-preserving maps (sections). A crucial point is the study of the action of the stationary subgroup of some point on the jets of smooth sections at this point. This contains elements of catastrophe theory in the sense of R. Thom and Arnold (see [1] and [2]). If the steps are carried out explicitly, this prescription gives all invariant operators, including the nonlinear ones. In our case (differential operators acting on scalar functions on Minkowski space) the main result is the following: all invariant operators are nonlinear, of order two or higher (we exclude the trivial case of zero-order operators). For second order (n = 2) there are three independent differential operators, and any other invariant operator of order two is a function of these three. Similarly, for order n = 3 there are 23 independent operators: the previous three and 20 more operators of order three. Every invariant operator up to order three is a function of these universal operators. For an arbitrary finite order n there exists a finite complete system of independent conformally invariant operators up to order n, i.e., they generate all other invariant operators up to the corresponding order.
In the simpler case of differential operators between analytic functions invariant under the modular group, the result is similar. All invariant operators are nonlinear. There are n − 2 independent operators of order up to n; all others are functions of them. We give a power series generating the family of independent invariant differential operators. The invariant operator of order three is closely related to the Schwarz derivative (with a difference, because the Schwarz derivative takes values in the quadratic differentials).

The general scheme

Let ξ = (E, p, M) be a smooth vector bundle. A connected Lie group G acts on ξ by bundle morphisms T_g : ξ → ξ, g ∈ G, where ξ_x = p^{−1}(x) is the fibre over x ∈ M and t_g ∈ Diff(M) is the projection of the morphism T_g. In adapted coordinates (x^μ, u^a) the action of G reads

T_g(x^μ, u^a) = (t_g(x)^μ, Φ_g(x, u)^a).

The group G has a natural action on C^∞(ξ),

(g ψ)(x) = T_g(ψ(t_g^{−1}(x))), ψ ∈ C^∞(ξ).

In the local coordinates (x^μ, u^a) this action is given by

(g ψ)^a(x) = Φ_g(t_g^{−1}(x), ψ(t_g^{−1}(x)))^a.

A natural problem is the description of the vector subspace C^∞(ξ)^G ⊂ C^∞(ξ) of all G-invariant sections. The invariance condition g(ψ) = ψ locally looks as

Φ_g(x, ψ(x))^a = ψ^a(t_g(x)).    (2)

In general a description of all G-invariant sections is hardly possible, but under some natural requirements imposed on the group action it may be achieved. There is a "smaller" reduced bundle ξ_G such that smooth sections in it (without any restriction) are in one-to-one correspondence with the G-invariant sections in ξ (the elements of C^∞(ξ)^G). The abstract algebraic construction of ξ_G consists of two steps. Consider the stationary subgroup

H_x = {g ∈ G : t_g(x) = x}

of a point x ∈ M. As a matter of fact, this imposes a restriction on the values of G-invariant sections at any given point. Let

st(ξ)_x = {u ∈ ξ_x : T_h(u) = u for all h ∈ H_x}

be the vector subspace of all fixed vectors. We assume that the collection of the spaces st(ξ)_x for all x ∈ M forms a vector bundle st(ξ) ⊂ ξ, called the stationary subbundle. This condition limits the action of G. Obviously, C^∞(ξ)^G ⊂ C^∞(st(ξ)).

Note: The explicit construction of the stationary subbundle st(ξ) is the crucial point where the new structure of the reduced bundle arises. This is the most difficult step in our approach.

The second step consists in taking the quotient of the base M for the bundle st(ξ). We suppose that the projection t_g has uniform orbits in the base M and that M itself is the total space of a smooth locally trivial bundle (M, π, M/G) of homogeneous spaces. This is another requirement on the action of the group G. Let us consider an orbit of G linking x, y ∈ M, i.e. ∃ g ∈ G : y = t_g(x). If ψ ∈ C^∞(ξ)^G, the value ψ(y) is uniquely determined by the value of the section at x (in accordance with the invariance condition (2)):

ψ(y) = T_g(ψ(x)).    (4)

Let (M, π, M/G) be a trivial bundle and N a global section, i.e. N ⊂ M is transversal to the orbits. Because of the relation (4), a G-invariant section ψ is completely determined if we know the restriction ψ|_N (ψ|_N ∈ C^∞(st(ξ)|_N)). Moreover, if ϕ ∈ C^∞(st(ξ)|_N) then ϕ correctly induces an invariant section ψ ∈ C^∞(ξ)^G. Indeed, let y ∈ M; then the orbit through y intersects the submanifold N in exactly one point x, y = t_g(x). The element g ∈ G is not uniquely determined by y and x, but since ϕ(x) ∈ st(ξ)_x, the value ψ(y) does not depend on the specific choice of the group element. The restriction st(ξ)|_N is a coordinate realization of the bundle ξ_G. If we consider another submanifold N′ ⊂ M transversal to the orbits in M, the corresponding restriction st(ξ)|_{N′} is another coordinate realization. There is a canonical isomorphism st(ξ)|_{N′} ≈ st(ξ)|_N induced by the group action. This procedure sews the abstract reduced bundle together from its coordinate realizations.
If (M, π, M/G) is not trivial, the construction of ξ_G is analogous but slightly more complicated: we have to sew together local coordinate realizations. Smooth sections in the reduced bundle are in one-to-one correspondence with G-invariant sections.

We shall transform the problem of describing invariant differential operators into a problem of characterizing invariant sections in appropriate jet bundles. Consider two bundles ξ and η over the same base M. A group G acts on both of them with the same projection t on M. The action of G on C^∞(ξ) and C^∞(η) induces an action on differential operators D : C^∞(ξ) → C^∞(η),

(g D)(ψ) = g(D(g^{−1} ψ)).

We use for simplicity the same notation for the actions of G on ξ and on η. A differential operator is called G-invariant if it satisfies the condition

D(g ψ) = g(D ψ) for all g ∈ G, ψ ∈ C^∞(ξ).

The problem we consider is the description of all invariant differential operators. This problem can be reduced to the problem studied before by using the jet bundle technique (for more details about jet bundles see [3], [4], [5], [6]). Let a differential operator D : C^∞(ξ) → C^∞(η) be of order (up to) k and linear. We denote by J^k(ξ) the corresponding k-jet bundle of ξ. For a local coordinate frame (x^μ, u^a) in ξ there exists an induced coordinate frame in J^k(ξ) denoted by (x^μ, u^a, u^a_μ, ..., u^a_{μ1μ2...μk}), where the indices are ordered, μ1 ≤ μ2 ≤ ... ≤ μk. Any linear differential operator is completely determined by its general symbol, i.e. the bundle morphism 𝒟 : J^k(ξ) → η (over the identity on M). Using some natural isomorphisms, one can view general symbols as sections in the tensor product J^k(ξ)* ⊗ η. The set of all linear differential operators of order up to k corresponds one-to-one to the smooth sections C^∞(J^k(ξ)* ⊗ η). The jet lifting of sections, j_k : C^∞(ξ) → C^∞(J^k(ξ)), plays the role of a universal differential operator of order (up to) k. An arbitrary linear differential operator D : C^∞(ξ) → C^∞(η) of order k is a composition D = 𝒟 ∘ j_k. If D is a nonlinear operator, then its general symbol is a fibre-preserving map 𝒟 : J^k(ξ) → η.

The action of the group G on the bundle ξ induces an action of G on the corresponding jet bundle J^k(ξ) (the so-called jet lifting of the action). Thus we have an action of G on the tensor product J^k(ξ)* ⊗ η. A differential operator is invariant if and only if its general symbol is an invariant section in J^k(ξ)* ⊗ η. So the jet bundle technique gives an algebraization of the (linear) differential operators. If we construct the reduced bundle of J^k(ξ)* ⊗ η, this provides a full description of all invariant operators. In the nonlinear case the general symbol 𝒟 is a fibre-preserving map; at any point x ∈ M the restriction 𝒟_x : J^k(ξ)_x → η_x is a smooth (nonlinear) map. The description of G-invariant nonlinear operators is similar to the linear case: we study the action of the stationary subgroup H_{x0} on the space C^∞(J^k(ξ)_{x0}, η_{x0}) of all smooth maps J^k(ξ)_{x0} → η_{x0} and then find the fixed elements in it.

We use this scheme to describe all conformally invariant differential operators acting on Minkowski space. As a simple illustration we first consider the two-dimensional algebraic conformal case. In both cases the base M is a homogeneous space, so the reduced bundle ξ_G consists of one fibre over one point, and the crucial point is to find the stationary elements at that single point.
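To make the "general symbol composed with jet lifting" picture concrete, here is a tiny numerical sketch in one dimension: the 2-jet of a function is approximated by finite differences (a stand-in for j_k), and a nonlinear fibre map applied to it yields a differential operator. The particular fibre map is an arbitrary illustration, not one of the invariant operators constructed below.

```python
import numpy as np

def jet2(f, x, h=1e-4):
    """Approximate the 2-jet (f, f', f'') of f at x by central differences."""
    f0 = f(x)
    f1 = (f(x + h) - f(x - h)) / (2 * h)
    f2 = (f(x + h) - 2 * f0 + f(x - h)) / h**2
    return np.array([f0, f1, f2])

def symbol(jet):
    """An arbitrary (nonlinear) fibre-preserving map J^2 -> R, for illustration."""
    u, u1, u2 = jet
    return u2 * u - u1**2

# The differential operator D = symbol o j_2, evaluated pointwise:
D = lambda f, x: symbol(jet2(f, x))
print(D(np.sin, 0.7))  # for f = sin: f''*f - f'^2 = -sin^2 - cos^2 = -1
```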
Illustrative example

We consider the space of analytic functions of a single complex variable and the differential operators between analytic functions. The analytic functions may be viewed as sections of the trivial line bundle over the complex plane C, i.e. ξ = (C × C, p, C). The group GL(2, C) acts on C by rational transformations,

z ↦ (az + b)/(cz + d),

and it acts on the analytic functions by transforming the argument: in adapted coordinates (z, u) this action is g : (z, u) ↦ ((az + b)/(cz + d), u). The problem to solve is the description of all GL(2, C)-invariant differential operators; the invariance condition now reads D(g(f)) = g(D(f)) for every g ∈ GL(2, C).

The complex plane is a homogeneous space, because the translations z ↦ z + b form a subgroup of GL(2, C). We choose z_0 = 0. Its stationary subgroup H_0 is the subset of transformations fixing the origin, which (normalizing d = 1) can be written as z ↦ az/(cz + 1). The fibre J^k(ξ)_0 is the set of all k-jets taken at z = 0 of analytic functions. The k-jet of an analytic function is its Taylor expansion up to order k,

f(z) = Σ_{l=1}^{k} (u_l / l!) z^l,   u_l = d^l f(0)/dz^l,  l = 1, 2, …, k.

We have assumed above for simplicity that u_0 = 0, since this does not lead to any loss of generality. The prolonged action of H_0 on the fibre J^k(ξ)_0 is given by the definition

h(j^k_0 f) = \overline{j^k_0 (f ∘ h)}.  (11)

The bar over the right-hand side of (11) indicates that the composition is truncated, i.e. all monomials of order higher than k are ignored. The jet of a composition of functions is the composition of their jets. The transformation h(u_1, …, u_k) = (w_1, …, w_k) is therefore determined by the equation

Σ_{l=1}^{k} (w_l / l!) z^l = \overline{ Σ_{l=1}^{k} (u_l / l!) (az/(cz + 1))^l }.

For example, the transformation of the jet of fourth order can be obtained by this truncated composition (see the sketch below).

Since we consider only scalar functions, we have to describe maps J^k(ξ)_0 → C invariant under the action of H_0. Let J^k(ξ)_0/H_0 be the quotient space and π : J^k(ξ)_0 → J^k(ξ)_0/H_0 the canonical projection onto it. A map D_0 : J^k(ξ)_0 → C is H_0-invariant if and only if there exists another map D̃_0 : J^k(ξ)_0/H_0 → C such that the relation D_0 = D̃_0 ∘ π holds; one may rewrite this requirement in terms of commutative diagrams. In this sense the canonical projection π is a universal H_0-invariant map: the components of π (in any coordinates on the quotient space) are invariant, and any H_0-invariant map is a function of the components of π. To describe the quotient space we must find a canonical representative in every orbit of H_0. The jets with u_1 = 0 form an invariant subspace. Considering the general case u_1 ≠ 0, in each orbit there is a unique representative with w_1 = 1 and w_2 = 0; the projection onto this canonical representative is given by the element h ∈ H_0 with a = 1/u_1 and c fixed by the condition w_2 = 0 (in the parametrization above, c = u_2/(2u_1^2); see the sketch below). The resulting coefficients w_l are coordinates in the quotient space, and they are the general symbols of the H_0-invariant differential operators at z = 0. Since translations act in a trivial manner on the k-jets, the invariant operators look exactly the same at any point of C. Considering jets of infinite order and applying the same method, one obtains a generating power series for the whole family of invariants.

Note: the special conformal transformations have a correct global definition only in the compactified Minkowski space. The conformal group acts on the space of smooth scalar functions R^4 → R by transforming the argument. We are looking for differential operators between scalar functions invariant under the action of the conformal group, and we follow the scheme demonstrated in the preceding section. The Minkowski space is a homogeneous space.
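The truncated composition and the canonical representative can be checked mechanically. Below is a small SymPy sketch (not from the original paper): it prolongs the H_0-action z ↦ az/(cz + 1) to 4-jets at z = 0, solves for the normalizing parameters a and c rather than hard-coding them (their exact form depends on the chosen parametrization of H_0), and exhibits the third-order invariant.

```python
# Prolonged H_0-action on 4-jets at z = 0 and the canonical representative.
import sympy as sp

z, a, c = sp.symbols('z a c')
u = sp.symbols('u1:5')                      # jet coordinates (u1, u2, u3, u4)

k = 4
# 4-jet of f at 0 (with f(0) = 0): the Taylor polynomial
f = sum(u[l - 1] / sp.factorial(l) * z**l for l in range(1, k + 1))
g = a * z / (c * z + 1)                     # element of H_0 (fixes z = 0)

# truncated composition: the k-jet of f(g(z)) at z = 0
comp = sp.series(f.subs(z, g), z, 0, k + 1).removeO()
w = [sp.simplify(sp.diff(comp, z, l).subs(z, 0)) for l in range(1, k + 1)]

# canonical representative: solve w1 = 1, w2 = 0 for the group parameters a, c
sol = sp.solve([w[0] - 1, w[1]], [a, c], dict=True)[0]
canon = [sp.simplify(wl.subs(sol)) for wl in w]
print(canon[:2])            # [1, 0]
print(sp.factor(canon[2]))  # (2*u1*u3 - 3*u2**2)/(2*u1**4)
```

The printed third coefficient equals S(f)(0)/u_1^2, where S is the Schwarz derivative S(f) = f'''/f' − (3/2)(f''/f')^2, which makes the stated relation between the order-three invariant and the Schwarzian explicit.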
The stationary subgroup H_0 ⊂ C(1, 3) of the point x = 0 is generated by the dilatations, the Lorentz rotations, and the special conformal transformations. We must consider the action of H_0 on the jets J^k(R^4)_0 of functions with f(0) = 0 and find the H_0-invariant functions J^k(R^4)_0 → R, i.e. we have to describe the canonical projection π : J^k(R^4)_0 → J^k(R^4)_0/H_0. The first nontrivial case is k = 2. The 2-jet of a smooth function f is the Taylor polynomial

f(x) ≈ u_α x^α + (1/2) u_{α_1α_2} x^{α_1} x^{α_2},

where (u_α, u_{α_1α_2}), α_1 ≤ α_2, represent coordinates in the fibre J^2(R^4)_0. Let ϕ : R^4 → R^4 be a diffeomorphism with fixed point x = 0; its second-order jet at the origin is given by coefficients (A^µ_ν, A^µ_{ν_1ν_2}) with det(A^µ_ν) ≠ 0 (A^µ_{ν_1ν_2} is arbitrary), and the action of ϕ on the space J^2(R^4)_0 is given by composition of jets. A special case is the action of the stationary subgroup H_0: the prolonged actions of the dilatations and of the rotations are linear in the coefficients (u_α, u_{α_1α_2}), while the infinite jet of a special conformal transformation contributes additional inhomogeneous terms involving the parameters b_µ = η_{µν} b^ν.

We describe the quotient space J^2(R^4)_0/H_0 by choosing a canonical representative from each orbit of H_0. Let (u_α, u_{α_1α_2}) be coordinates in J^2(R^4)_0. At this step we assume that u^2 := η^{µν} u_µ u_ν < 0. By a dilatation with λ = 1/√(−u^2) we obtain another representative of the same equivalence class, and by a special hyperbolic rotation the gradient can be aligned with the time direction. For 2-jets of this form, the action of the special conformal transformations enables us to choose b_β = w_{0β}/2 and therefore to obtain another, "more canonical", representative in which the mixed components w_{0β} vanish. What remains is the symmetric spatial block (w_{αβ}), α, β = 1, 2, 3, defined up to spatial rotations. The unordered triple of its eigenvalues (λ_1, λ_2, λ_3) provides coordinates in the quotient space J^2(R^4)_0/H_0, so they describe this quotient space completely. As a coordinate frame we may choose the elementary symmetric polynomials, but we prefer to work with the frame

S_k = λ_1^k + λ_2^k + λ_3^k = Tr(w^k),  k = 1, 2, 3,

because of its convenient form. The quantities S_1, S_2 and S_3 are coordinates of the equivalence class of the jet that we started from. To obtain them as explicit functions of the initial jet we must take the composition of the normalizing transformations above. The functions D_k = Tr(w^k) = D_k(u_α, u_{α_1α_2}) are the general symbols of the invariant operators at x = 0. Since the translations act trivially on the jets, the differential operators look the same at any point of the Minkowski space. The final result expresses each D_k explicitly through the initial jet coordinates (u_α, u_{α_1α_2}).

So we have obtained the differential operators by considering functions with time-like gradients. According to the general scheme these differential operators are functionally independent and universal, i.e. they generate all conformally invariant operators of second order (defined on functions with time-like gradient). This procedure also contains an algorithm for calculating the higher-order differential operators (defined on the same subset of functions). The natural framework for describing the conformally invariant differential operators involves the complexified and compactified Minkowski space. This technique is applicable to the case of n-dimensional pseudo-euclidean space as well: there are no conformally invariant first-order differential operators, and the conformally invariant differential operators of second order are generated by n − 1 functionally independent differential invariants, each of which (involving functions with a time-like gradient) can be written down explicitly by this procedure.
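As a quick numerical sanity check (not from the original paper), one can verify that the traces S_k = Tr(w^k) are insensitive to the residual rotational freedom left after the normalization, which is exactly why they serve as coordinates on the quotient. The spatial block below is an arbitrary illustrative matrix:

```python
# Check: S_k = Tr(w^k) is invariant under the residual rotation freedom.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)); w = (w + w.T) / 2   # symmetric spatial block
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix
w_rot = R @ w @ R.T                              # rotated representative

for k in (1, 2, 3):
    Sk = np.trace(np.linalg.matrix_power(w, k))
    Sk_rot = np.trace(np.linalg.matrix_power(w_rot, k))
    print(k, np.isclose(Sk, Sk_rot))             # True for each k
```

Any rotation-invariant coordinate frame on symmetric 3 × 3 matrices (elementary symmetric polynomials, ordered eigenvalues, or the traces above) would serve; the traces are convenient because they extend directly to Tr(w^k) for higher k.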
A Deep Learning Framework for the Characterization of Thyroid Nodules from Ultrasound Images Using Improved Inception Network and Multi-Level Transfer Learning

In the past few years, deep learning has gained increasingly widespread attention and has been applied to diagnosing benign and malignant thyroid nodules. It is difficult to acquire sufficient medical images, and the resulting lack of data hinders the development of efficient deep-learning models. In this paper, we developed a deep-learning-based characterization framework to differentiate malignant and benign nodules in thyroid ultrasound images. This approach improves the recognition accuracy of the inception network by combining squeeze and excitation networks with the inception modules. We have also integrated the concept of multi-level transfer learning, using breast ultrasound images as a bridge dataset. This transfer learning approach addresses the domain differences between natural images and ultrasound images that arise during transfer learning. This paper aimed to investigate how the entire framework could help radiologists improve diagnostic performance and avoid unnecessary fine-needle aspiration. The proposed approach based on multi-level transfer learning and improved inception blocks achieved higher precision (0.9057 for the benign class and 0.9667 for the malignant class), recall (0.9796 for the benign class and 0.8529 for the malignant class), and F1-score (0.9412 for the benign class and 0.9062 for the malignant class). It also obtained an AUC value of 0.9537, which is higher than that of the single-level transfer learning method. The experimental results show that this model can achieve satisfactory classification accuracy, comparable to that of experienced radiologists. Using this model, we can save time and effort as well as deliver potential clinical application value.

Introduction

Nowadays, thyroid cancer is becoming more common worldwide, and its incidence rate is increasing rapidly compared with other malignant tumors. According to the American Cancer Society and cancer statistics, it is the most prevalent endocrine tumor, with 500,000 new cases identified each year (567,233 in 2018) [1]. In clinical practice, ultrasonography is the most commonly utilized test for the screening and diagnosis of thyroid gland disorders due to its non-invasive, non-radioactive nature, affordability, and real-time capabilities. However, its drawbacks include a low signal-to-noise ratio and the presence of visual artifacts in the ultrasound image. In addition, the diagnostic accuracy and reproducibility of ultrasound readings are limited, and interpretation necessitates a high level of expertise and training [2]. In light of these challenges, many researchers have reported the importance of computer-aided diagnosis systems for characterizing thyroid nodules.

Recently, there has been increased interest in the computer-aided diagnosis of thyroid nodules, and significant research progress has been made in this area. Several studies have demonstrated possible associations between breast and thyroid cancer, including shared hormonal risk factors, similarity in the appearance of nodules, and genetic susceptibility [4]. Furthermore, thyroid and breast cancers exhibit similar characteristics under high-frequency ultrasound, such as malignant nodules having a taller-than-wide shape, hypoechogenicity, and an ill-defined margin.
This is why we selected the breast ultrasound image dataset as a bridge dataset for multi-level transfer learning in the classification of thyroid nodules [4]. Motivated by the issues discussed above, this paper proposes an architecture that combines the inception architecture with the squeeze and excitation module, based on a multi-level transfer learning technique, to develop an efficient characterization framework for thyroid nodule diagnosis. The contributions of this paper are as follows: • We utilize the concept of an attention mechanism within each inception block and propose a network architecture for thyroid nodule diagnosis. • We propose a multi-level transfer learning model for thyroid nodule diagnosis which uses breast ultrasound images as a bridge dataset. We apply the concept of multi-level transfer learning to thyroid ultrasound images, whereas previous studies similar to ours have remained within the traditional transfer learning technique. We test the feasibility of the model and demonstrate its potential for thyroid nodule diagnosis. • We check the effectiveness of breast ultrasound images as a bridge dataset in the development of a multi-level transfer learning model for thyroid nodule diagnosis, and show their potential and usefulness in the development of a thyroid nodule classification model.

The remainder of this paper is organized as follows. Section 2 includes the related works. Section 3 discusses the proposed approach for thyroid nodule characterization. Section 4 discusses the experimental framework. Section 5 discusses the obtained results. Lastly, Section 6 contains our concluding remarks.

Background

Thyroid nodule diagnosis using machine learning techniques has long been an important research topic that aids clinical diagnosis. This section reviews the state-of-the-art approaches in the development of computer-aided diagnosis systems for thyroid nodules. Within the traditional machine learning framework, several works have been proposed for the computer-aided diagnosis of thyroid nodules. In 2008, Keramidas et al. extracted fuzzy local binary patterns as noise-resistant textural features and adopted a support vector machine as the classifier [12]. In 2009, Tsantis et al. proposed a model for thyroid nodule classification in which a set of morphological features (such as mean radius, radius entropy, and radius standard deviation) was extracted from the segmented nodule to describe the shape and boundary regularity of each nodule [13]. In 2012, Singh and Jindal utilized gray-level co-occurrence matrix features to construct a k-nearest neighbor model for thyroid nodule classification [14]. Acharya et al. utilized the Gabor transform to extract features from thyroid ultrasound images to differentiate benign and malignant nodules, and compared the classification performance of SVM, MLP, KNN, and C4.5 classifiers. In 2014, Acharya et al. extracted grayscale features based on the stationary wavelet transform and compared the performance of several common classifiers [15]. As thyroid nodules vary in shape, size, and internal characteristics, the low-level handcrafted features used in these traditional CAD methods can only provide limited differentiating capacity due to their inherent simplicity and locality [16]. On the other hand, the performance of deep learning models, especially convolutional neural networks, has been superior to conventional learning methods in various visual recognition tasks.
By learning hierarchical visual representations in a task-oriented manner, CNNs can capture the semantic characteristics of input images [16]. Due to this critical advantage, numerous CNN-based CAD methods have been proposed for thyroid nodule diagnosis in recent years. In 2017, Ma et al. trained two complementary patch-based CNNs of different depths to extract both low-level and high-level features and fused their feature maps for the classification of thyroid nodules [17]. In 2017, Chi et al. utilized a pre-trained GoogleNet architecture to extract high-level semantic features for the classification of thyroid nodules [18]. Gao et al. proposed a CAD system based on a multi-scale CNN model that achieved better sensitivity [19]. In 2018, Song et al. proposed a cascaded network for thyroid nodule detection and recognition based on a multi-scale SSD network and a spatial pyramid architecture [20]. Recently, Li et al. structured a model for the diagnosis of thyroid cancer based on ResNet50 and Darknet19 [21]. This model, despite its structural simplicity, exhibited excellent diagnostic ability in identifying thyroid cancer, demonstrating the highest AUC, sensitivity, and accuracy compared with the other state-of-the-art deep learning models. Wang et al. conducted a large-scale study on multiple thyroid nodule classification, utilizing both Inception-ResNet-v2 and VGG-19 architectures [22]. Notably, microscopic histopathological images (rather than ultrasound images) were used in that investigation. Liu et al. proposed a multi-scale nodule detection approach and a clinical-knowledge-guided CNN for the detection and classification of thyroid nodules. By introducing prior clinical knowledge such as margin, shape, aspect ratio, and composition, the classification results showed an impressive sensitivity of 98.2%, specificity of 95.1%, and accuracy of 97% [16]. The method uses separate CNNs to extract features within the nodule boundary, around the margin areas, and from the surrounding tissues [16]. As a result, the architecture of the network is complex, with a higher risk of overfitting [16]. Juan Wang et al. developed an artificial-intelligence-based diagnosis model combining deep learning with handcrafted features derived from the risk factors described by ACR-TIRADS [23]. Yifei Chen et al. employed two kinds of neural networks, GoogleNet and U-Net [24]. GoogleNet was utilized to obtain preliminary diagnosis results from the original thyroid nodules in the ultrasound images, while U-Net was used to obtain segmentation results, from which medical features were extracted. The mRMR algorithm was used as the feature selector: 140 statistical and texture features were sent to the feature selector to obtain 20 features, which were then utilized for training an XGBoost classifier. The above CNN-based approaches have achieved good classification performance, but they still have limitations in global feature learning and modeling. CNNs always focus on the fusion of local features, owing to the locality of their convolutional kernels. Some improved strategies for extracting global features, such as downsampling and pooling, have been proposed; however, they tend to cause the loss of contextual and spatial information. Jiawei Sun et al. proposed a vision-transformer-based thyroid nodule classification model using contrastive learning [25].
Using ViT helps to explore global features and provides a more accurate classification model. Geng Li et al. proposed a deep-learning-based CAD system with a transformer-fused CNN network to segment malignant thyroid nodules automatically [26]. As in the above papers, various deep learning networks, training methods, and feature extraction methods have been utilized to develop efficient thyroid nodule diagnosis models. In general, there have been many papers on applying deep learning techniques to the computer-aided diagnosis of thyroid nodules; however, only a few of them address the issues arising from small datasets. In 2021, Y. Chen et al. proposed a multi-view ensemble learning model based on a voting mechanism that integrates three kinds of diagnosis results obtained from thyroid ultrasound images. They utilized features from the GoogleNet architecture, medical features obtained from the U-Net architecture, and several statistical and textural features to develop the ensemble model. To date, Artificial Intelligence-based Computer-aided Diagnosis (AI-CAD) systems have been developed for specific medical fields or specific organs. Integrating these systems and evaluating related organs with similar characteristics would benefit AI-CAD system development. For example, several researchers have reported an association between the incidence rates of thyroid and breast carcinoma, possibly related to the effect of estrogen, which is involved in the transport mechanism of iodine. Considering that thyroid and breast nodules exhibit similar characteristics under high-frequency ultrasound, Zhu et al. developed a generic VGG-based framework to classify thyroid and breast lesions in ultrasound imaging [4]. Xiaowen Li et al. proposed a multi-organ CAD system based on CNNs for classifying thyroid and breast nodules from ultrasound images [27], exploiting features of GoogleNet and CaffeNet to classify nodules in the ultrasound images. By contrast, this paper mainly focuses on the concepts of the inception network and the squeeze and excitation network; we also integrate the idea of multi-level transfer learning by considering the relationship between thyroid and breast nodule ultrasound images.

Framework Overview

In this paper, we mainly explore the effectiveness of the parallel convolutions in the inception architecture, the squeeze and excitation module, and the idea of multi-level transfer learning for characterizing thyroid nodules. In Sections 3.2 and 3.3, we introduce the basic concepts of the inception architecture and the squeeze and excitation module, respectively. In Section 3.4, we explain how the inception blocks are updated using the squeeze and excitation module. Section 3.5 discusses the proposed convolutional neural network based on improved inception blocks. Section 3.6 discusses the essentials of multi-level transfer learning and how it is implemented in the proposed system.

Inception Module

The most straightforward way to improve the performance of a deep neural network is to increase its size, by expanding either the depth or the width. The depth of the network is the number of levels, and the width is the number of units at each level [28]. This makes it easy to train higher-quality models, especially if a significant amount of labeled training data is available. However, a larger network tends to have more parameters, making it more prone to overfitting when the size of the dataset is limited [7,29].
Another drawback of a uniformly increased network size is the significantly higher demand for computational resources [7]. These issues can be resolved by switching from fully connected to sparsely connected architectures, even within the convolutions. Likewise, the size of the significant elements of an image can vary considerably: the area occupied by an element in one image may differ from that occupied by the same element in another image. Selecting the right kernel size for convolution therefore becomes challenging due to the wide variation in the location of significant information [28]. A larger kernel is preferred for information distributed globally, while a smaller kernel is chosen for information distributed locally. Unlike traditional deep neural networks, inception networks are well known for parallel stacked convolutions. The inception module is characterized by the incorporation of convolution kernels of various scales in the same convolution module [7]. As opposed to a single convolutional kernel, the inception module can extract a wide range of significant features from single-layer feature maps using a variety of convolutional kernels, expanding the dimension of each layer of the feature map without affecting the neural network as a whole [7]. Figure 1 presents various inception blocks. Figure 1a shows the naive inception module: it performs convolution on the input with three different filter sizes, 1 × 1, 3 × 3, and 5 × 5, and additionally performs max pooling [7]. The outputs are concatenated and sent to the next inception module. As stated before, deep neural networks are computationally expensive. By adding 1 × 1 convolutions before the 3 × 3 and 5 × 5 convolutions, the authors reduce the number of input channels (represented in Figure 1b) [28], which makes the network computationally cheaper. Although adding an extra operation may seem counter-intuitive, 1 × 1 convolutions are far more affordable than 5 × 5 convolutions, and they reduce the number of input channels [28]. In the pooling branch, however, the 1 × 1 convolution is introduced after the max pooling layer. In [7], a new neural network architecture was developed using the improved inception module, referred to as GoogLeNet (Inception-v1). GoogLeNet consists of nine linearly stacked inception modules and is 22 layers deep (27 layers, including the pooling layers). At the end of the last inception module, global average pooling is applied. The concept of auxiliary classifiers was introduced to prevent the middle part of the network from "dying out": SoftMax is applied to the outputs of two of the inception modules, and an auxiliary loss is computed over the same labels [28]. The weighted sum of the auxiliary losses and the real loss is taken as the total loss, with the weight of each auxiliary loss fixed at 0.3 [28]:

total_loss = real_loss + 0.3 × aux_loss_1 + 0.3 × aux_loss_2.  (1)

Two further versions of inception were presented together in a single paper: Inception-v2 and Inception-v3 [28]. The authors proposed a number of upgrades which increased the accuracy and reduced the computational complexity. Inception-v2 explores the following: • Reducing the representational bottleneck: a neural network performs better when convolutions do not alter the dimensions of the input drastically. Reducing the dimensions too much may cause loss of information, known as a representational bottleneck.
• Convolution operations can be made more efficient in terms of computational complexity by using smart factorization methods. For instance, a 5 × 5 convolution can be factorized into two 3 × 3 convolutions to improve computational speed (represented in Figure 2a). Although this may seem counterintuitive, a 5 × 5 convolution is 2.78 times more expensive than a 3 × 3 convolution, so stacking two 3 × 3 convolutions in fact leads to a boost in performance. Moreover, convolutions of filter size n × n can be factorized into a combination of 1 × n and n × 1 convolutions [30]. For example, a 3 × 3 convolution is equivalent to first performing a 1 × 3 convolution and then performing a 3 × 1 convolution on its output (represented in Figure 2b). The authors found this method to be 33% cheaper than the single 3 × 3 convolution. • In addition, the filter banks in the module were expanded (made wider instead of deeper) to address the issue of the representational bottleneck. If the module were instead made deeper, the dimensions would be reduced excessively, resulting in information loss. This is depicted in Figure 2c.

The above three principles were used to develop three different types of inception modules (let us call them modules A, B, and C, in the order in which they were introduced). Inception-v3 incorporated all of the upgrades stated for Inception-v2 and in addition used the following [28]: • the RMSProp optimizer [28]; • factorized 7 × 7 convolutions [28]; • BatchNorm in the fully connected layer of the auxiliary classifiers: the authors observed that the auxiliary classifiers did not make a significant contribution until near the end of the training phase, when accuracies were approaching saturation, and argued that they act as regularizers, especially when they include BatchNormalization or Dropout operations [28]; • label smoothing, a regularization technique that can address the overconfidence and overfitting behavior of a convolutional neural network [28,31].

Inception-v4 was introduced by Christian Szegedy et al. in 2016 [32]. It has three main inception modules, termed A, B, and C, which are very similar to those of Inception-v2 (and v3). Inception-v4 introduced specialized reduction blocks that change the width and height of the grid (represented in Figure 3). Although the functionality of reduction blocks was present in the earlier versions, it was not explicitly implemented [33].

Attention Mechanism and Squeeze and Excitation Networks

Recently, attention mechanisms have been widely used in pattern recognition and have proven effective. In contrast to natural images, medical images tend to be similar in appearance: even when they come from different image sources, they are acquired from standardized positions using a similar set of acquisition parameters. For radiologists, experience in analyzing the images is associated with knowing exactly where to look to detect specific abnormalities [8]. Furthermore, the extensive variability in the shape and appearance of nodules in ultrasound images leads to false positive predictions [34]. To address these issues and reduce false positive predictions, we use squeeze and excitation networks, which use fewer parameters and provide superior results compared with other techniques. In fact, we claim that the inability to exploit global information is a common problem in medical image analysis.
This type of network is a modular mechanism that allows for the efficient exploitation of global information and also provides soft object localization during the forward pass. It helps the network focus on regions that are disease-specific. Generally, this strategy is particularly effective for focusing on nodule regions: it can reduce the impact of noise in the non-nodule region, and misalignment can be alleviated. In an attention mechanism, the re-weighting of certain features of the network is accomplished with the help of externally or internally (self-attention) supplied weights [9]. To understand what a model is doing from an attention point of view, we need to be familiar with both hard attention and soft attention. Soft attention allows the weights to be continuous, while hard attention requires them to be binary (0 or 1) [8]. In the case of hard attention, certain parts of the image are cropped out: in essence, the original image is re-weighted so that the cropped part has a weight of 1 and the rest has a weight of 0. Hard attention is not differentiable and cannot be trained end-to-end, which is its main disadvantage; instead, one has to use the activation of a certain layer to determine the ROI and train the network in a complicated multistage process [9]. To train attention gates end-to-end, we have to use soft attention. Instead of using hard attention and recalibrating weights by cropping the feature maps, Hu et al. considered re-weighting the channel-wise responses in a given layer of a CNN by using soft self-attention to model the interdependencies between the channels of the convolutional features [9]. For this purpose, the authors proposed the squeeze and excitation building block.

Normally, a network weights each of its channels equally when creating the output feature maps. Squeeze and Excitation Networks (SENets) change this by adding a content-aware mechanism that weights each channel adaptively: a small number of additional parameters per channel produce a scale indicating how relevant each channel is. First, a global understanding of each channel is obtained by squeezing the feature maps to a single numeric value, resulting in a vector of size n, where n is the number of convolution channels. This vector is then fed through a two-layer neural network, which outputs a vector of the same size. These n values are used as weights on the original feature maps, scaling each channel based on its importance.

The squeeze and excitation block works as follows. For any transformation of features F_tr from X to U (e.g., a convolution), there is a transformation F_sq that aggregates the global feature responses across the spatial extent (H, W) [9]; this is the squeeze operation. The squeeze operation is followed by the excitation operation F_ex, a self-gating operation that constructs a channel-wise weight response [8]. The output of F_tr is subsequently multiplied channel-wise by the result of the excitation; this is depicted as F_scale in Figure 4 [35]. The mathematical representation of the squeeze operation is as follows [9]:

z_c = F_sq(u_c) = (1/(H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} u_c(i, j),

where the u_c are the channels of the output of the operation F_tr. The squeeze operation thus uses global average pooling to create a global embedding. Global max pooling could also be used, but the authors note that average pooling slightly improves the overall performance [35].
The excitation block is represented by

s = F_ex(z, W) = σ(W_2 δ(W_1 z)),

where δ denotes the ReLU activation, σ the sigmoid activation, and W_1, W_2 are the weights of the two fully connected layers. The significant advantage of the squeeze and excitation block lies in incorporating global information into decision making [8]; conversely, a convolution operation focuses on local spatial information within a specific area [9]. According to the authors, in the early stages of the network the excitation weights are almost the same for different classes but become more class-specific in later stages [9]. In other words, the lower layers of the network learn more general input features, whereas the higher layers are more likely to be specific [9]. Additionally, the squeeze and excitation block does not add much at the last stage of the network, where most excitations become one [35]; this can be explained by the fact that the last stage of the network already contains most of the global information, so the squeeze and excitation operation brings in no new information content [35]. Squeeze and excitation building blocks offer the advantage of being extremely versatile: the authors mention that they can be integrated with any convolutional neural network architecture, such as ResNet and Inception, at every stage of the network or only at certain stages. Additionally, they introduce only a slight overhead in the number of learnable parameters. Therefore, we consider the construction of SE-blocks for inception modules. Here, we simply take F_tr to be an entire inception module (see Figure 5). By making this change for each such module (see Figure 2) in the architecture, we obtain an SE-Inception network [9]. This helps the network learn the importance of each channel during training.

Proposed Improved Inception Squeeze-Excitation Blocks

Jie Hu et al. proposed an inception architecture combined with the squeeze and excitation module, shown in Figure 5, in which an SE-block is inserted into every inception block. Assessment of ultrasound images for thyroid nodule diagnosis is based on the experience of the clinicians: when examining thyroid nodules on ultrasound images, clinicians tend to focus on certain places pertinent to the diagnosis. We include the SE-block in each inception module to ensure that the inception modules learn these key areas independently during the network training process. Accordingly, the SE-block produces attention heat maps of the same size as the output of the traditional inception blocks. All values lie in the range (0, 1), produced by the sigmoid activation function; the closer a heat-map value is to 1, the more attention the network pays to that location. Finally, the significant feature map is obtained by multiplying the attention heat map by the corresponding feature map produced by the traditional inception block. In other words, the more a pixel is attended to, the more completely its feature value is retained, and vice versa. Thus, in the process of gradient descent during network training, when the classification is correct the weight of the corresponding feature is increased, and vice versa. Eventually, the network reduces the attention values in the irrelevant parts and learns the features that are significant for classification. Furthermore, the inception module is distinguished by incorporating convolutional kernels of different scales within a single convolutional module; a minimal sketch of such an SE-augmented inception block is given below.
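The following Keras (TensorFlow 2) sketch illustrates the idea of combining a naive inception block with squeeze and excitation gating. It is an illustration only: the branch widths and the reduction ratio r are arbitrary choices, not the exact configuration of the proposed network in Figure 7.

```python
# A minimal sketch of an SE-augmented inception block (illustrative widths).
import tensorflow as tf
from tensorflow.keras import layers

def se_inception_block(x, f1=64, f3=96, f5=32, fp=32, r=16):
    # parallel multi-scale convolutions (naive inception with 1x1 bottlenecks)
    b1 = layers.Conv2D(f1, 1, padding='same', activation='relu')(x)
    b3 = layers.Conv2D(f3, 1, padding='same', activation='relu')(x)
    b3 = layers.Conv2D(f3, 3, padding='same', activation='relu')(b3)
    b5 = layers.Conv2D(f5, 1, padding='same', activation='relu')(x)
    b5 = layers.Conv2D(f5, 5, padding='same', activation='relu')(b5)
    bp = layers.MaxPooling2D(3, strides=1, padding='same')(x)
    bp = layers.Conv2D(fp, 1, padding='same', activation='relu')(bp)
    u = layers.Concatenate()([b1, b3, b5, bp])

    # squeeze: global average pooling produces one descriptor per channel
    c = u.shape[-1]
    z = layers.GlobalAveragePooling2D()(u)
    # excitation: two-layer bottleneck gate with sigmoid weights in (0, 1)
    s = layers.Dense(c // r, activation='relu')(z)
    s = layers.Dense(c, activation='sigmoid')(s)
    # scale: re-weight each channel of the inception output
    s = layers.Reshape((1, 1, c))(s)
    return layers.Multiply()([u, s])

inp = layers.Input((299, 299, 3))
out = se_inception_block(inp)
print(tf.keras.Model(inp, out).output_shape)  # (None, 299, 299, 224)
```

The final Multiply layer implements F_scale: each of the concatenated channels is re-weighted by its learned sigmoid gate.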
The advantages of this multi-scale design are obvious: instead of using a single convolutional kernel, the inception module can extract multiple types of features from a single-layer feature map using multiple convolutional kernels, expanding the dimension of each layer of feature maps without increasing the depth of the network. We believe, however, that this module brings a new problem as well: the feature-map dimension disaster [36]. Feature maps with thousands of channels appear in the final concatenation layer (as shown in Figure 2), yet these channels are not fully utilized, as doctors could hardly consider so many channels when examining thyroid ultrasound images [36]. Therefore, we introduce a fully connected module along with the squeeze and excitation block and the inception module [36]. The improved inception blocks, along with the SE-blocks, are given in Figure 6. The improved inception blocks allow the network to learn the significant features that are crucial for the diagnosis, fully utilizing the features obtained from the parallel convolutions. Figure 6 presents the improved versions of the different inception blocks used in the traditional inception architecture [30,33,36].

Proposed Network

The proposed network architecture is based on Inception-v4 and is shown in Figure 7. In this work, the stem of Inception-v4 was retained; however, the inception modules were updated to obtain a more lightweight network (please refer to Figures 2, 3, 6, 7 and 8). The stem refers to the initial set of operations performed before the inception blocks are introduced (please refer to Figure 8). Figure 7 presents the complete architecture of the network, and Figures 2, 3, 5, 6 and 8 show the detailed structure of its components. Our proposed architecture contains improved inception blocks that utilize parallel stacked convolutions and the attention mechanism, and it is shallower and narrower: the number of layers and the number of filters are reduced compared with the conventional Inception-v4. This was done to reduce the computational cost and, at the same time, to produce a smaller model with less capacity, which would be less prone to overfitting. Unlike Inception-v4, we have not utilized an auxiliary classifier, since our network is not as deep. The ReLU activation was used after every convolution due to its simplicity and efficacy. All the convolutions marked with V are valid padded, which indicates that the input patch of each unit is fully contained in the previous layer and the grid size of the output activation map is reduced accordingly. All the convolutions not marked with V are same padded, which indicates that their output grid matches the size of their input. The default input size for the proposed inception network is 299 × 299; thus, we resized the ultrasound images in the dataset to this size.

Overall Framework of the Multi-Level Transfer Learning for Thyroid Nodule Classification Using Breast Ultrasound Images

Recently, deep learning has taken over the field of image processing with its state-of-the-art performance. The problem is that deep learning models require enormous amounts of well-labeled training data. Medical image datasets are generally difficult to access due to the rarity of the diseases and to ethical issues. In addition, manually collecting a massive amount of high-quality, labeled medical images is a time-consuming, labor-intensive, and expensive process. Therefore, insufficient training data makes it challenging to build an appropriate deep learning model [37].
These are the main challenges faced while developing a deep-learning model for thyroid ultrasound images. Thus, transfer learning has been introduced to address these concerns in the domain. Transfer learning involves transferring knowledge between different domains and tasks to create robust and flexible target models [38]. It consists of taking models trained for specific tasks and leveraging the knowledge they acquired on different but related tasks, which can be highly advantageous when sufficient instances are not available for direct training in the target domain. In addition, traditional deep learning models require training from scratch, which is computationally expensive and requires a large amount of data to achieve high performance. The transfer learning approach is computationally efficient and helps to achieve better results using small datasets; it reaches optimal performance faster than traditional machine learning models, since a model that leverages knowledge from previously trained models already understands the basic features. The traditional machine learning model follows an isolated training approach, where each model is independently trained for a specific purpose without dependency on past knowledge. By contrast, in transfer learning, knowledge acquired from a pre-trained model is applied during the development of the target model. Transfer learning algorithms generally assume that the source and target domains share some information. Many real-world applications, like medical image processing and recommendation systems, do not always conform to this assumption. Moreover, knowledge transfer between two loosely related domains usually causes negative transfer, meaning that the knowledge transfer adversely affects the performance of the task in the target domain, producing worse performance than a traditional deep learning model [37].

Medical and natural images vary significantly in several aspects, such as shape, color, resolution, and dimensionality. Compared with medical images, natural images appear diverse and possess more contour details and colors, reflecting rich visual information; by contrast, medical images look almost identical, carrying considerably less visual information. Accordingly, natural image tasks are usually accomplished by identifying major morphological characteristics such as edges, colors, and shapes of the objects, whereas in medical image applications, pathologies are identified by detecting small abnormalities and local texture variations, such as bleeding and inconsistent structures. For instance, the signal-to-noise ratio of natural images is exceptionally high compared with that of medical images: natural images have virtually no noise and are usually high-contrast, high-resolution images, while medical images are often noisy, and their low contrast and low spatial resolution can limit the detection of medical components in the images (such as nodules, cysts, and blood flow). It is obvious that such differences can hinder the effective transfer of features learned from natural images to the analysis of medical images; these modality differences can significantly undermine transfer learning performance. Moreover, there can be considerable modality differences even between different kinds of medical images (e.g., MRI and CT) [39]. However, the modality difference between different medical images is not significant when compared with the modality difference between medical and natural images.
In addition, it was recently shown that the performance of pre-trained models declines when they are employed for images such as chest X-rays, ultrasound, and brain MRI [40]. In 2021, J.C. Hung and J.W. Chang proposed the concept of multi-level transfer learning to address the issues regarding modality differences that exist among different domains [41,42]. In this approach, once optimal performance is reached at one level, the knowledge gained by the corresponding model is transferred from the current level to the next higher level. In 2017, Kim et al. proposed an approach named modality bridge transfer learning to address the issue of insufficient data in the medical domain: a bridge domain is introduced between the source and target domains to address the modality gap between them. Figure 9 shows the overall framework of the modality bridge transfer learning proposed by H.G. Kim et al. [39].

In a single-level transfer learning approach (traditional transfer learning), to extract image characteristics such as edges and texture, we learn a projection function, mapping the source image space to the source feature space, from the source dataset. The source database consists of a large number of natural images. To learn the features of medical images, the knowledge learned from the source domain is transferred to the target domain. However, this model does not reflect the characteristics of the target domain, due to the domain difference. To learn the characteristics of the target domain (medical images), the model obtained from the source domain is fine-tuned with images of the target domain. As mentioned earlier, in the medical imaging domain it is difficult to collect a large number of labeled images because of patient privacy protection and the high cost of reliable labeling; training the model with such a small dataset may cause overfitting or failure to converge. Therefore, in modality bridge transfer learning, a bridge domain is introduced between the source and target domains. The bridge domain has the same or a very similar modality to the target domain but offers many more training samples. By learning the model from the source to the bridge domain and then from the bridge to the target domain, the domain differences between the source and target domains can be reduced; this provides a two-level approach to transfer learning. The knowledge gained from natural images (basic image features) is transferred to the bridge domain to learn the abstract features of the bridge domain. Finally, based on the features learned from the two domains (source and bridge), the model is fine-tuned with images of the target domain [43], and this model is applied to the different tasks in the target domain. Based on these concepts, we utilized a multi-level transfer learning approach for the thyroid nodule classification problem, which consists of three domains: the source domain (natural images), the bridge domain (breast ultrasound images), and the target domain (thyroid ultrasound images) [44]. We selected breast ultrasound images as the bridge domain due to the similar characteristics of thyroid and breast nodules; the bridge and target domains both consist of ultrasound images (breast ultrasound and thyroid ultrasound, respectively), and a schematic of the three-stage training is sketched below.
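The following is a minimal Keras (TensorFlow 2) schematic of the three training levels. It is a sketch under stated assumptions, not the paper's exact code: build_backbone() is a hypothetical builder for the proposed SE-inception trunk, the *_ds datasets are assumed pre-built pipelines, the learning rates are illustrative stand-ins for Table 2, and the 200-class source task uses categorical cross-entropy here (the paper reports binary cross-entropy throughout).

```python
# Schematic multi-level transfer learning: source (TinyImageNet subset) ->
# bridge (breast ultrasound) -> target (DDTI thyroid ultrasound).
# build_backbone() and the *_ds datasets are hypothetical placeholders.
import tensorflow as tf

backbone = build_backbone(input_shape=(299, 299, 3))  # hypothetical trunk

def with_head(n_out, activation):
    # attach a fresh classification head; the trunk keeps its current weights
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    out = tf.keras.layers.Dense(n_out, activation=activation)(x)
    return tf.keras.Model(backbone.input, out)

# Level 1: source domain, 200-way classification of generic natural images
m = with_head(200, 'softmax')
m.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
          loss='categorical_crossentropy', metrics=['accuracy'])
m.fit(source_ds, validation_data=source_val_ds, epochs=100,
      callbacks=[tf.keras.callbacks.EarlyStopping(patience=20)])

# Level 2: bridge domain, benign vs malignant breast nodules (slower rate)
m = with_head(1, 'sigmoid')
m.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
          loss='binary_crossentropy', metrics=['accuracy'])
m.fit(bridge_ds, validation_data=bridge_val_ds, epochs=100)

# Level 3: target domain, benign vs malignant thyroid nodules
m = with_head(1, 'sigmoid')
m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
          loss='binary_crossentropy', metrics=['accuracy'])
m.fit(target_ds, validation_data=target_val_ds, epochs=100)
```

Because the trunk object is shared across the three models, each fine-tuning stage starts from the weights learned at the previous level, which is the essence of the multi-level scheme.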
Datasets

To apply the transfer learning technique to the thyroid ultrasound image classification model, the proposed network was first evaluated on benchmark datasets: parts of CIFAR-10 [45], CIFAR-100 [45], and TinyImageNet [46]. Based on the performance of the network on these datasets, we selected the part of TinyImageNet [46]. The TinyImageNet dataset contains square images of 64 × 64 pixels. Almost all images have three color channels, i.e. they are 64 × 64 × 3 arrays; however, 18% of the examples are grayscale images [46], and these are converted from grayscale to RGB by replicating the pixel values across the three channels [46]. Each image belongs to exactly one of 200 categories. The entire training set of TinyImageNet consists of 100K images (500 images per category), and the validation and test sets contain 10K images each (50 per category) [47]. Due to the high computational requirements, we selected a subset of the training set consisting of 20K images (100 per category); likewise, our test set contains 2K images (10 images per category). Another advantage of choosing TinyImageNet is the presence of grayscale images: all the images in the bridge domain (breast ultrasound) and the target domain (thyroid ultrasound) are grayscale.

For the multi-level transfer learning, we utilized a breast ultrasound image dataset as the bridge dataset [44]. This dataset consists of 1312 images, of which 891 are benign and 421 malignant. At this stage, we utilized pre-trained weights taken from the model trained on the part of TinyImageNet [46]. We selected this bridge dataset based on the approach proposed by Y.-C. Zhu et al., who proposed a generic deep-learning algorithm to classify thyroid and breast lesions [4]. Breast and thyroid nodules are similar in their basic internal and external characteristics and hormonal influences. Several studies have demonstrated that high levels of thyroid-stimulating hormone and estrogen might contribute to the pathological evolution of both breast and thyroid nodules [48,49]. Possible correlations between breast and thyroid cancer have also been explained in [4], as well as hormonal risk factors and genetic susceptibility. Under high-frequency ultrasound scans of malignant lesions, thyroid and breast nodules show similar imaging characteristics, including being taller than wide, hypoechogenicity, and ill-defined margins [4]. This observation strongly motivates using the breast ultrasound image dataset as the bridge dataset in multi-level transfer learning for classifying thyroid nodule ultrasound images.

For the thyroid nodule dataset, we initialized the proposed network with pre-trained weights taken from the breast ultrasound image classification model. We utilized an open-source thyroid nodule image dataset named DDTI, which contains ultrasound images of nodular thyroids. Currently, DDTI contains 980 ultrasound images in total (322 malignant and 658 benign), with around 60% used for the training set and around 40% for the validation and test sets. This dataset was collected and published by Pedraza et al. in 2015 [50-52]. The database includes B-mode ultrasound images with complete annotations and diagnostic descriptions of suspicious thyroid lesions by expert radiologists [53].
The dataset includes several types of lesions, such as thyroiditis, cysts, adenomas, and carcinomas, and accurate lesion delineations are provided in an XML format. Malignant lesions were confirmed by histopathological analysis [53]. Some sample images from the DDTI dataset are given in Figure 10. The details of the breast and thyroid ultrasound image datasets are given in Table 1.

Implementation Details

Our experiments were conducted on a PC with the following specifications: an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz with 16 GB RAM and an NVIDIA GeForce GTX 1080 Ti GPU. The algorithms were implemented in Python 2.7 using the Anaconda 64-bit Windows platform. The OpenCV, Sklearn (scikit-learn), Keras, and TensorFlow libraries were used to develop the machine learning models.

Training Setting and Hyperparameter Setting

In the training phase, the initial value of the learning rate (init_lr) was set to 0.001 and attenuated according to the step-decay formula

lr = init_lr × γ^⌊epoch / step_size⌋,

where γ was set to 0.5 and the step size to 4. For adequate training, we empirically set the number of epochs to 100, as most training procedures converge around this point. We randomly split the dataset 85:15 at the patient level to create independent training and test sets; the training data was further split 90:10 to create an independent validation set. The splits were carried out in a stratified fashion to maintain the same proportion of cancer cases in the training, validation, and test sets. For the breast ultrasound dataset (bridge domain), the numbers of images in the training, validation, and test sets were 1004, 111, and 197, respectively; for the DDTI dataset (target domain), they were 750, 83, and 147, respectively. We used TensorFlow 2.0 with the Keras API to train, evaluate, and run predictions for all the models. All the hyperparameters for the three stages of model training are given in Table 2. Initially, for the source dataset, the network was trained using a mini-batch stochastic gradient descent algorithm, with binary cross-entropy as the loss function. As already stated, the learning rate was set to 0.001. The number of training epochs was set to 100 with an early-stopping mechanism, which ceases the optimization process if the validation loss does not improve over 20 consecutive epochs. Additional details of the model training are provided in Table 2. For the bridge domain, RMSProp was used as the optimizer and binary cross-entropy as the loss function; here, we utilized a slower learning rate to avoid overfitting. For the target domain, Adam was used as the optimizer and binary cross-entropy as the loss function, with hyperparameters otherwise similar to those of the bridge domain.

Overfitting may adversely affect the performance of the model when it deals with previously unseen data. Dropout methods, which temporarily remove specific nodes from the model and reduce its complexity, can help to avoid this problem, and expanding the training set can also mitigate overfitting. Hence, we utilized a data augmentation strategy to expand the training set: using the ImageDataGenerator class from the Keras library, we generated batches of tensor images with real-time data augmentation. All the augmentation parameters, along with their values, are shown in Table 3.

Data Preprocessing

The grayscale thyroid ultrasound images in the dataset are 380 × 580 pixels; a sketch of the training schedule, augmentation, and preprocessing pipeline is given below.
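Concretely, the step-decay schedule and the Keras-based augmentation can be written as follows. This is a minimal sketch: the directory path is hypothetical, and the augmentation values are illustrative stand-ins for the exact parameters listed in Table 3.

```python
# Sketch: step-decay learning rate, Keras data augmentation, and resizing of
# the 380x580 grayscale ultrasound images to the 299x299 network input.
import math
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

INIT_LR, GAMMA, STEP_SIZE = 1e-3, 0.5, 4

def step_decay(epoch):
    # lr = init_lr * gamma ** floor(epoch / step_size)
    return INIT_LR * GAMMA ** math.floor(epoch / STEP_SIZE)

lr_cb = tf.keras.callbacks.LearningRateScheduler(step_decay)

aug = ImageDataGenerator(horizontal_flip=True,          # illustrative values;
                         brightness_range=(0.8, 1.2),   # see Table 3 for the
                         rotation_range=10,             # paper's parameters
                         zoom_range=0.1)
train_gen = aug.flow_from_directory('ddti/train',            # hypothetical path
                                    target_size=(299, 299),  # 380x580 -> 299x299
                                    color_mode='rgb',
                                    class_mode='binary',
                                    batch_size=32)
# model.fit(train_gen, epochs=100, callbacks=[lr_cb])
```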
The inputs were thus resized to 299 × 299, which also mitigates the effects of image distortion.

Evaluation Metrics

In this study, we computed the accuracy, precision, recall, F1-score, G-mean, and specificity of each class. Likewise, we computed the accuracy, specificity, sensitivity, F1-score, and G-mean of the entire model for thyroid nodule classification. In terms of the numbers of true/false positives and negatives (TP, FP, TN, FN), they are defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN),
Precision = TP / (TP + FP),
Recall (Sensitivity) = TP / (TP + FN),
Specificity = TN / (TN + FP),
F1-score = 2 × Precision × Recall / (Precision + Recall),
G-mean = √(Sensitivity × Specificity).

The performance of the model was also evaluated using the Receiver Operating Characteristic (ROC) curve.

Results and Discussions

In this section, the experimental results are discussed to validate the performance of the proposed model for the characterization of thyroid nodules. Section 5.1 discusses the performance of the intermediate models obtained from both the selected part of TinyImageNet and the breast ultrasound images. Section 5.2 deals with interpreting the training and validation curves of the classification models obtained for thyroid ultrasound images. Section 5.3 summarizes the results of evaluating the various models on a test set in terms of several evaluation metrics. Section 5.5 interprets the receiver operating characteristic curves associated with each implemented model. Section 5.6 compares the results with several state-of-the-art methods for thyroid nodule characterization. Section 5.7 explains several benefits of the proposed method, and Section 5.8 discusses several limitations of the current study and some future directions.

Evaluation of the Proposed Network for Breast Cancer Ultrasound Images

First, the network is trained with the part of TinyImageNet, as discussed in Section 4.1. The bridge dataset, the breast ultrasound image dataset, then acts as a bridge across the source and target domains by constructing a high-level feature space and reducing the corresponding distribution divergences. The performance of the network on the source domain (part of TinyImageNet) is reported in Table 4 as Phase 1: it achieves an accuracy of 0.9857, a precision of 0.9790, a recall of 0.9286, and an F1-score of 0.8975. In Phase 2, the network trained on the part of TinyImageNet is fine-tuned using the breast ultrasound image dataset and achieves an accuracy of 0.8967, a precision of 0.8567, a recall of 0.9286, and an F1-score of 0.8340. The performance of the network on these classification tasks is good enough to make it suitable for transfer learning. Next, we transfer the network parameters to the target domain to characterize thyroid nodules in ultrasound images.

Evaluation of the Proposed Method

As a first attempt, the architecture proposed in Section 3.5 was trained from scratch using the DDTI dataset; in the remainder of the article, this model will be referred to as the baseline model. Figure 11 depicts the evolution of the running average of the training and validation accuracy and loss. We noticed that the model severely overfitted the training data by 30 epochs: the training accuracy had already exceeded 90% and was improving rapidly, while the validation accuracy remained stable at around 75%, and there is a significant gap between the training loss and the validation loss. This suggests that applying more regularization to our model could help it generalize to validation sets or previously unseen data. Next, we improved our baseline model by adding more convolutional and dense layers, and we added a dropout of 0.3 after each hidden dense layer to facilitate regularization.
Dropout is a powerful regularization method for deep convolutional neural networks; it can be applied separately to both the input and the hidden layers. The dropout layer sets the output of a fraction of the units to zero to prevent overfitting (in our case, 30% of the units in the dense layers). Figure 12 depicts the training and validation curves for the regularized model. It is evident from these curves that the model still ends up overfitting; however, it does so slightly later, and the validation accuracy is somewhat better, which is decent but not remarkable. Due to the limited training data, the model continuously sees the same examples across epochs, leading to overfitting. A solution to this challenge is to augment the images in our training set using an augmentation strategy that applies minor alterations to the existing data.

For the next attempt, we added data augmentation at training time, i.e. a preprocessing step applied to each batch before training: each image was randomly flipped from left to right and altered in brightness and contrast. All the details of the data augmentation are given in Section 4.3; the training parameters described in the earlier attempts were kept identical. The validation and training accuracy are plotted in Figure 13. While there are some spikes in the validation accuracy and loss curves, we can see a significant improvement in the validation accuracy: it is much closer to the training accuracy, which indicates better generalization capability compared with our previously obtained models. Here also, however, the gap between the training and validation accuracy curves indicates some residual overfitting. A way to combat this is to adopt transfer learning strategies. Therefore, we first trained the model using a dataset with a significantly larger number of instances (here, a subset of TinyImageNet). During this stage, the network can learn a robust hierarchy of features: spatial, rotational, and translational invariants. Using this pre-trained model, the network can extract relevant features from the images for thyroid nodule classification. The network was then fine-tuned with thyroid ultrasound images from the DDTI dataset. As a result, we achieved a validation accuracy of close to 76%, an improvement of almost 6-7% over our basic CNN model with image augmentation. The model does still seem to be overfitting, though: after the fifth epoch, there is a substantial gap between the training and validation accuracy curves. As of now, however, this appears to be the best model. The training and validation curves for this model are shown in Figure 14.

As a next step, we tried multi-level transfer learning based on TinyImageNet and the breast ultrasound images. We can see the improvement in the training and validation accuracy and the corresponding loss: we obtained a better classification model with a validation accuracy of 80%, nearly a 5-6% improvement over our previous CNN model with single-level transfer learning. The training and validation curves for this model are shown in Figure 15.

Evaluation Metrics

We evaluated the performance of all the models, starting with the baseline model, in terms of the different evaluation metrics.
The results obtained for all the models (data augmentation + regularization, single-level transfer learning, and multilevel transfer learning) are given in Tables 5-7. We also included different characterization models developed from different pre-trained CNN architectures, and the performance of each model is given in Table 8. The performance of each class (for Inception-v3, Inception-ResNet v2) is shown in Table 5. Table 5 lists the precision, recall, and F1-score results for both benign and malignant classes. Table 6 lists the G-Mean and specificity for both benign and malignant classes. As presented in Table 7, the precision of the first model (Data Augmentation + Regularization) without transfer learning was 88.68% for the benign class and 93.30% for the malignant class, and training was performed using a single level transfer learning and multi-level transfer learning to improve the accuracy further. In the case of single-level transfer learning, it achieved a precision of 90% for the benign class and 93.33% for the malignant class. In the case of multi-level transfer learning, it obtained a precision of 0.9057 for the benign class and 0.9667 for the malignant class. Both approaches based on transfer learning achieved higher precision when compared with the basic CNN approach. In the case of recall, the recall for the proposed approach based on multi-level transfer learning is 0.9796 for the benign class and 0.8529 for the malignant class, while that of single-level transfer learning is 0.9600 for the benign and 0.8485 for the malignant class. Both strategies based on transfer learning obtained higher recall when compared with the basic CNN approach. In the case of the F1-score, the proposed approach based on multi-level transfer learning has an F1-score of 0.9412 for the benign class and 0.9062 for the malignant class, while that of single-level transfer learning is 0.932 for the benign class and 0.8888 for the malignant class. Both approaches based on transfer learning obtained a higher F1-score when compared with the baseline approach, which uses regularization and data augmentation. The above results indicate that our method significantly outperforms the other two methods in all metrics. The performance of the pre-trained model for the thyroid nodule classification is also given in Table 7. The performance of the proposed model is considerably better than other methods. Table 8 provides the detailed experimental results for thyroid nodule characterization from thyroid ultrasound images. It indicates that, for the DDTI datasets, the Xception and Inception networks achieved the best accuracy, precision, and recall. Receiver Operating Characteristics Curve In addition, we plotted ROC curves for each model in Figure 16, along with the AUC value for each model. It visualizes the trade-off between True Positive Rate (TPR) and False Positive Rate (FPR). This figure illustrates how much difference single-level and multi-level transfer learning can make. As shown in Figure 16, the ROC curve of the multi-level transfer learning method is close to the upper-left corner compared with the other two techniques. We can quantify the ROC curve to evaluate the performance of the models further with the AUC value as shown in Figure 16. The AUC value of the model, which follows the multi-level transfer learning technique, has a higher AUC value than the other two methods. 
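The ROC analysis above can be reproduced for any of the models with a few lines of scikit-learn code; the following sketch uses placeholder scores rather than the actual model outputs.

```python
# A minimal sketch of producing one of the ROC curves and AUC values shown in
# Figure 16; y_true/y_score stand in for test labels and predicted
# malignancy probabilities.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.6, 0.55])

fpr, tpr, _ = roc_curve(y_true, y_score)  # trade-off between TPR and FPR
roc_auc = auc(fpr, tpr)                   # area under the ROC curve

plt.plot(fpr, tpr, label=f"multi-level TL (AUC = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], "k--", label="chance")
plt.xlabel("False Positive Rate"); plt.ylabel("True Positive Rate")
plt.legend(); plt.show()
```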
Compared with the model without transfer learning, the models that use transfer learning exhibit improved performance, as indicated in Table 7 and Figure 16. Likewise, compared with single-level transfer learning, a further improvement can be seen with the multi-level transfer learning technique, which uses a medical dataset as a bridge dataset. Comparison with the State-of-the-Art Methods Previous studies have used CNN models to diagnose thyroid cancer, but the samples were small and their accuracy was not significant. As our results have demonstrated, incorporating the squeeze-and-excitation module in the Inception architecture, along with the application of multi-level transfer learning, improved the accuracy, sensitivity, specificity, and AUC of the proposed model. In the experimental analysis, we used the public dataset DDTI, which includes thyroid nodules of varying sizes, shapes, textures, and locations. A large number of published works rely on private datasets which cannot be used for experimentation; therefore, comparing the performance of our approach with these existing approaches is difficult. We have tabulated the performance of various models taken from the literature in Table 9 (Table 9. Performance analysis of state-of-the-art thyroid nodule characterization methods for 2D ultrasound images; columns: Methods, Benign, Malignant, Accuracy, Sensitivity, Specificity, AUC). Advantages The proposed deep-learning approach for classifying thyroid nodules could contribute to clinical practice in different ways. Predictions made by radiologists can differ depending on the individual level of experience and expertise [4]. This automated deep-learning solution can significantly reduce image interpretation time in clinical practice and can provide more accurate results. The readout time for the model was roughly 1.15 s per image; by contrast, the radiologists took approximately 30-40 s to classify one thyroid ultrasound image [4]. Finally, the changes adopted in the improved Inception network structure are not only applicable to Inception networks but are also suitable for any convolutional neural network architecture, such as residual network and DenseNet architectures. It is worth mentioning that the approach does not increase the depth of the neural network, and it is easy to deploy. The proposed network architecture is applicable to any image classification domain. Limitations and Future Scope It is important to emphasize that our study has several limitations. The lack of sufficient annotated thyroid ultrasound images has been a persistent obstacle in the computer-aided detection and characterization of thyroid nodules. A large dataset is required for the development of an efficient CAD system for thyroid nodule diagnosis. To implement and validate a new CAD system, it is necessary to use large datasets [57]; however, this poses a considerable barrier to utilizing the capability of deep learning concepts [57]. Even where publicly available datasets of thyroid ultrasound images with manual annotations exist, the number of thyroid cases is limited to hundreds. Collecting a large, comprehensive dataset is required to develop effective CAD systems using deep learning techniques. As a pilot study, our analysis revolves around a public dataset with limited samples drawn from a retrospective, single-center study. Even though an augmentation approach was used to enlarge the sample size, the issues related to the small sample size remain to be solved.
Single-level and multi-level transfer learning have been utilized to address small sample size issues. In multi-level transfer learning techniques, a breast ultrasound image dataset was used as a bridge dataset, consisting of 1200 images. In the future, we will incorporate a dataset containing more samples as a bridge dataset. The proposed approach centered on the presumption that each image included one nodule. In the image where the sonographer delineated two nodules, we divided the image so that only one nodule could be seen. This research has focused only on developing a computer-aided characterization tool to classify benign and malignant thyroid nodules in thyroid ultrasound images. In the future, several aspects must be explored to improve accuracy, performance, and clinical applicability. We suggest a few directions and challenges for future research into thyroid image analysis. In future work, we intend to refine our detection and characterization framework. We can incorporate a detection network into the study that semantically segments the thyroid nodules from thyroid ultrasound images for better thyroid nodule diagnosis. It will provide physicians with a more comprehensive diagnostic model that aids them in risk evaluation and characterization. Furthermore, inception blocks can be replaced with dense or residual blocks. This approach is helpful for clinicians dealing with low-contrast images or images with uneven contrast ratios. Conclusions This paper mainly explores the effectiveness of squeeze and excitation networks and parallel convolutions in the inception architecture to characterize thyroid nodules. We also utilized a multi-level transfer learning technique that uses a bridge dataset from the same domain(ultrasound imaging) as the target domain to address limited sample size issues. The domain difference between the source and target domain is a major concern in single-level transfer learning. These models exhibited better diagnostic performance than state-of-the-art models. Based on the performance of different convolutional neural network models, the proposed approach can significantly improve the diagnosing capability of CAD systems for thyroid nodules. Furthermore, the model represents a generalized platform that can assist clinicians working across multiple domains.
Complete Deletion of Slc52a2 Causes Embryonic Lethality in Mice. Riboflavin (vitamin B2) plays an important role in cellular growth and function. Riboflavin transporter 2 (RFVT2) is widely expressed in several tissues, especially in the brain and salivary glands, and plays an important role in the tissue distribution of riboflavin. During the last 10 years, mutations in SLC52A2 have been documented in patients with a rare neurological disorder known as Brown-Vialetto-Van Laere syndrome. However, no suitable animal model of this disease has been reported. Here, we aimed to clarify the physiological role of RFVT2 using Slc52a2-mutant mice. The appearance, body weight, and plasma riboflavin concentration of Slc52a2 heterozygous mutant (Slc52a2+/-) mice were similar to those of wild-type (WT) mice. However, intercrossing between Slc52a2+/- mice failed to generate Slc52a2 homozygous mutant (Slc52a2-/-) mice. This suggested that Slc52a2 gene deficiency results in early embryonic lethality. Our findings suggest that RFVT2 is essential for growth and development, and that its deletion may compromise embryonic survival. INTRODUCTION Riboflavin (vitamin B2) is an indispensable nutrient for cellular growth and function. 1) Its active coenzymes, flavin mononucleotide (FMN) and FAD, are made from riboflavin. Riboflavin deficiency leads to growth impairment, which is causally related to the role of riboflavin in the generation of energy from mitochondrial metabolism. 2) The human riboflavin transporters RFVT1-3/SLC52A1-3 have been identified. 3) RFVT2 is predicted to have 10 membrane-spanning domains. 4) The RFVT2-mediated uptake of riboflavin has been shown to be Na + -, Cl − -, and pH-independent. 4,5) RFVT2 mRNA is ubiquitously expressed. 4) It has been suggested that RFVT2 is essential for the tissue distribution of water-soluble riboflavin. 5) Since 2010, several mutations in the SLC52A3 and SLC52A2 genes have been shown to be linked to Brown-Vialetto-Van Laere syndrome (BVVLS). 6) BVVLS patients with SLC52A3 mutations have a higher frequency of facial weakness and lower blood riboflavin levels. 7) In contrast, abnormal gait and/or ataxia and optic nerve atrophy appear to be more prevalent features of patients with SLC52A2 mutations. 7) In addition, improvements in motor abilities, respiratory function and/or cranial nerve deficits upon riboflavin supplementation are observed in 70% of patients, with the remaining patients showing stabilization of the current disease stage. The responses to riboflavin supplementation are similar in patients with SLC52A2 and SLC52A3 mutations. It has been suggested that immediate and continuous riboflavin administration may prevent neurological changes. 7) In previous studies, we have shown that Slc52a3-knockout mice exhibit phenotypes similar to those seen in patients with SLC52A3 mutations, which are associated with riboflavin deficiency. 8) An analysis of skin fibroblasts from patients with SLC52A2 mutations revealed a significant reduction in electron transport chain complex I and II activity. 9) However, the pathophysiological mechanism of these symptoms is unclear. In this study, we aimed to clarify the significance of Rfvt2 in vivo using Slc52a2-mutant mice. The appearance, body weight, and plasma riboflavin concentration of Slc52a2 heterozygous mutant (Slc52a2+/−) mice were not different from those of wild-type (WT) mice. However, intercrossing between Slc52a2+/− mice failed to generate Slc52a2 homozygous mutant (Slc52a2−/−) mice.
These results suggested that Rfvt2 deficiency causes embryonic lethality in mice. MATERIALS AND METHODS Animals All animal studies were conducted in accordance with the Guidelines for Animal Experiments of Kyoto University. Embryos with an Slc52a2 mutation (C57BL/6-Slc52a2 tm1(KOMP)Vlcg ) were purchased from the Knockout Mouse Project (KOMP) Repository. 10) The targeting vector is described in Fig. 1A. To determine mouse genotypes, genomic DNA was isolated from tail biopsies using the GeneAmp ® PCR System 9700 (Applied Biosystems, Foster City, CA, U.S.A.), and PCR analysis was performed using the TaKaRa Ex Taq ® Hot Start Version reaction mix (TaKaRa Bio, Shiga, Japan). The primer sets were as follows: a forward primer, 5′-CCA GAC CCT AAG GCC CAT CAG-3′, and a reverse primer, 5′-CAG CAC GCC ATT GGT CAG AG-3′, for detecting the wild-type alleles and a forward primer, 5′-GGT AAA CTG GCT CGG ATT AGG G-3′, and a reverse primer, 5′-TTG ACT GTA GCG GCT GAT GTT G-3′, for detecting mutant alleles. PCR cycling conditions were as follows: 35 cycles of 94 °C for 30 s, 60 °C for 30 s, and 72 °C for 1 min. Heterozygous mice (8 weeks old) were mated overnight and vaginal plugs were examined the following morning. Plug detection was considered to correspond to day 0.5 of pregnancy. Embryos at E10.5, pups at postnatal day 0, and adult mice older than 8 weeks were used for subsequent experiments. The mice were housed under a 12-h light/dark cycle in a temperature-controlled environment, and were given water ad libitum and a standard chow diet (F-2; Funabashi Farm, Funabashi, Japan) before being used in experiments. All protocols were approved by the Animal Research Committee, Graduate School of Medicine, Kyoto University (Permission No. MedKyo20121). Real-Time PCR Total RNA was isolated from brains dissected at 8 weeks of age, using an RNeasy Mini Kit (Qiagen, Hilden, Germany), and was then reverse transcribed. TaqMan Gene Expression assays were obtained from Life Technologies (Slc52a2, Mm01205717_g1; Carlsbad, CA, U.S.A.). Real-time PCR was performed to determine the mRNA expression level of Slc52a2 as described previously. 4) Measurement of Riboflavin We collected samples of blood and tissue from 16-week-old mice. The concentrations of riboflavin in blood and tissue samples were measured by HPLC (LC-10ADVP; Shimadzu, Kyoto, Japan) according to a previously reported method. 11) Statistical Analysis Statistics were performed using GraphPad Prism (version 7; GraphPad Software, Inc., La Jolla, CA, U.S.A.). All values are expressed as the mean ± standard error of the mean (S.E.M.), and the differences were analyzed for significance using an unpaired Welch's t-test. Multiple comparisons were performed using Bonferroni's two-tailed test, after a one-way ANOVA. The significance was shown based on the p-value (**** p < 0.0001). Targeted Disruption of the Slc52a2 Gene The mouse Slc52a2 gene was deleted from exon 2 to 5, and was integrated with a trapping cassette. PCR analysis confirmed the targeted Slc52a2 allele in genomic DNA isolated from tail biopsies of the offspring (Fig. 1A). Furthermore, real-time PCR analysis demonstrated that Slc52a2 mRNA levels in the brain were significantly lower in Slc52a2+/− mice than in WT mice (Fig. 1B). Riboflavin Homeostasis and Phenotypic Analysis in Slc52a2+/− Mice Macroscopically, the appearance of Slc52a2+/− pups and adults were not different from WT mice (Fig. 3A), and the body weights of Slc52a2+/− and WT mice were similar within 3 weeks of birth (Fig. 3B). 
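The group comparisons described above rely on the unpaired Welch's t-test named in the Statistical Analysis subsection; a minimal sketch is given below, with hypothetical riboflavin values rather than the study's measurements.

```python
# A minimal sketch of an unpaired Welch's t-test between two genotypes;
# the values are hypothetical, not the study's data.
import numpy as np
from scipy.stats import ttest_ind

wt = np.array([101.2, 95.4, 110.8, 98.7, 104.1])   # hypothetical WT values
het = np.array([97.5, 103.9, 99.2, 106.4, 100.8])  # hypothetical +/- values

t, p = ttest_ind(wt, het, equal_var=False)  # equal_var=False gives Welch's test
print(f"t = {t:.3f}, p = {p:.3f}")
```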
We measured riboflavin concentration in plasma (Fig. 4A) and tissues, including the upper and lower small intestine, liver, kidney, lung, heart, muscle, and brain, in 16-week-old WT and Slc52a2+/− mice (Fig. 4B). No differences in plasma or tissue riboflavin concentrations were observed between Slc52a2+/− and WT mice. DISCUSSION In this study, we attempted to produce Slc52a2-mutant mice as a pathological model of SLC52A2-mutant BVVLS. However, Slc52a2−/− mice were not observed among newborn pups or E10.5 embryos, including those that died due to maternal neglect. RFVT2 is widely expressed in tissues throughout the body. Therefore, the complete deletion of Slc52a2 expression resulted in lethality in the early stages of embryonic development. In in vitro functional analyses, SLC52A2 mutations p.G306R and p.L312P show a moderate, but significant, decrease in transport activity. 12) These mutations have been detected in 30 BVVLS patients (Table 1; modified from Haack et al., 12) Foley et al., 13) and O'Callaghan et al. 7) In the table, mutations shown with a white background showed a moderate decrease in function in in vitro analyses, while mutations shown with a gray background showed an almost complete loss of function.) Except for one patient, previous studies have shown that BVVLS patients with SLC52A2 mutations have one allele that encodes functional RFVT2. 12,13) A previously described patient with mutations p.L123P and p.L339P is thought to have survived due to the retention of a low level of RFVT2 activity. Taken together, these data suggest that RFVT2 is essential for embryonic cell survival in vivo, and that complete deletion may lead to embryonic lethality. Phenotypic analysis showed no difference between WT and Slc52a2+/− mice. In addition, the riboflavin concentrations in plasma and tissues were unchanged compared with those in WT mice. These results revealed that Slc52a2+/− mice show normal growth, which is consistent with the results reported for Slc52a3+/− mice. 8) In clinical reports, parents or siblings with a heterozygous mutation are healthy, suggesting an autosomal recessive mode of inheritance. 14) Therefore, the Slc52a2+/− mouse phenotype may mimic the phenotype of parents of BVVLS patients. When the Slc52a2 gene was completely deleted by homologous recombination with a long-chain sequence, Slc52a2−/− mice were not generated. Creating a single-nucleotide polymorphism animal model, in which some Rfvt2 function is retained, may be an alternative method for producing a pathological model of RFVT2-mutant BVVLS. In conclusion, RFVT2 is an essential transporter for growth and development, and its deletion may influence embryonic survival.
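The genotyping result at the core of this conclusion, namely the absence of Slc52a2−/− offspring from heterozygote intercrosses, is the kind of observation commonly evaluated against the expected 1:2:1 Mendelian ratio with a goodness-of-fit test. The following minimal sketch uses hypothetical litter counts, not the study's data.

```python
# A minimal sketch of a chi-square goodness-of-fit test for deviation from
# the 1:2:1 Mendelian ratio expected in a heterozygote intercross.
from scipy.stats import chisquare

observed = [28, 52, 0]                                # hypothetical WT : +/- : -/- pups
total = sum(observed)
expected = [total * 0.25, total * 0.5, total * 0.25]  # 1:2:1 expectation

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # a small p indicates deviation from 1:2:1
```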
Neural Network Model Predictive Control (NNMPC) Design for UPFC : Neural Network Model Predictive Control (NNMPC) is essentially model predictive control in which the onboard plant model is designed using the concept of the artificial neural network to predict the behavior of the plant. The predicted values are fed to the optimizer in order to obtain better control variables. This type of controller will be used instead of the conventional controller in the most versatile FACTS device, the Unified Power Flow Controller (UPFC). The UPFC has the capability of controlling the transmission line parameters and consequently the flow of active and reactive power in the transmission line. This adaptive controller, based on the Artificial Neural Network (ANN) concept, will be implemented in the UPFC and investigated to verify its robustness, effectiveness and capability to accommodate sudden load changes in a Single Machine to Infinite Bus (SMIB) system. In addition, the dynamic performance of the NNMPC will be compared with that of another adaptive control scheme, the Model Predictive Controller (MPC). Introduction The Unified Power Flow Controller (UPFC) has the ability to control, independently or simultaneously, all parameters that affect the active and reactive power flow on the transmission line, such as the voltage magnitude, impedance and phase angle [1]. The UPFC has a positive impact in enhancing power quality and system performance [2] and [3]. An important issue in the design of controllers for such a device is robustness, i.e., the controller should achieve the desired damping over a wide range of system operating conditions [4]. The controllers used in the UPFC are therefore very important for controlling all these parameters as desired. An adaptive scheme called Neural Network Model Predictive Control will be used in this study. NNMPC is based on the Artificial Neural Network (ANN) concept. An ANN can be considered a model of how the human brain works. The biological neural network is an essential part of the human brain: a highly complex network with the ability to process huge amounts of information simultaneously. Input impulses travel via the sensory portion of the peripheral nervous system to the central nervous system for higher-level interpretation, and the response is conveyed through the peripheral nervous system to the relevant part of the body. The human brain contains an enormous number of nerve cells, or neurons, which together create a very complex signal-transmission network. Each cell collects inputs from all the neural cells it is connected to, and if the collected information reaches a certain threshold, it is conveyed onwards to all the cells it is connected to. The interconnection of this large number of neurons in the biological network architecture allows rapid communication spanning all areas of the body. Although biological neural networks are complex, the artificial neural network model is a basic structural representation of them, as shown in Table 1 [5].
TABLE 1. Basic Structure of a Biological Neuron (structure: function)
Dendrites: Input
Cell body: Integration
Axon: Conduction
Pre-synaptic terminals: Output
ANNs, like humans, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. An ANN can be trained after implementation and needs a trainer, designed in hardware or software, to provide punishments or rewards for the adopted weights. Each input has an associated weight 'w', which can be modified so as to model synaptic learning by using the concept of back propagation. Back Propagation Concept The ANN cell/unit computes some function 'f' of the weighted sum of its inputs:
y_i = f(Σ_j w_ij y_j) (1)
Its output, in turn, can serve as input to other units, as illustrated in Fig. 1. Fig.1. Artificial Neuron The weighted sum Σ_j w_ij y_j is called the net input to unit i, 'net_i'. Note that w_ij refers to the weight from unit j to unit i. The function 'f' is the unit's activation function, which could be linear, sigmoid, step, etc. So, in the feed-forward neural network, the inputs are multiplied by the weights and then summed in the neural cell, where the result of the summation is also passed through the activation function 'f'. The outcome from the neural cell is multiplied again by the next weights, and the process continues until the final result is obtained. Once the final result is obtained, it is compared with the actual result in order to determine the error and train the model. Back propagation is used to train the network. An example is extracted from Fig. 1 in order to clarify the concept and the equations used in the feed-forward and back-propagation method. For simplicity, one string, shown in green, is analysed, as illustrated in Fig. 2. Fig.2. Data Flow in One String of Artificial Neuron The calculation starts from the last output neuron and works all the way back to the input. The gradient error of neuron i is
Gradient Error (δ_i) = (∂y_i/∂x_i) · Σ_j (w_ij · δ_j)
After getting gradient errors 1 and 2 from equations (4) and (6), respectively, ΔW and Δθ are calculated in order to update the existing weights and biases. Once ΔW and Δθ are obtained, the weights and biases are updated as W_new = W_old + ΔW and θ_new = θ_old + Δθ. The next input is then introduced to the network, and the same procedure is followed to obtain the outputs and correct the weights and biases (a compact numerical sketch of this procedure is given below). UPFC Study Gyugyi introduced the UPFC in 1991 [6]. It is composed of two voltage source converters linked by a common d.c. link, as illustrated in Fig. 3. Fig.3. UPFC in SMIB Mathematical models of the steady state and of the dynamics are needed in order to inspect the performance of the UPFC in the system. The steady-state model is concerned with determining the initial condition of the system in order to perform the load flow analysis, while the dynamic model is used to ensure that the performance of the UPFC and its controllers during disturbances and sudden load changes is acceptable and meets expectations. The model of A. Nabavi-Niaki and M. R. Iravani [7] is considered in this study. The system control variables are the amplitudes of the modulation ratios and the phase angles of the control signals of the two voltage source converters; these control variables are connected to the control output signals so as to regulate the terminal voltage and the active and reactive power flow in line 2, respectively. System Study The UPFC is incorporated in a Single Machine to Infinite Bus (SMIB) system to test and analyse the entire system performance.
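Before the generator and controller models are detailed, the forward pass and weight-update rule derived in the back-propagation section above can be summarized in a minimal NumPy sketch. The one-neuron-per-layer network, sigmoid activation, squared-error loss and learning rate are simplifying assumptions, not the paper's MATLAB implementation.

```python
# A minimal sketch of the forward pass and back-propagation update for one
# "string" of the network: input -> hidden neuron j -> output neuron i.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x, target, lr = 0.5, 1.0, 0.1  # input, desired output, learning rate
w_hj, w_ij = 0.4, 0.7          # weights; biases omitted for brevity

for epoch in range(100):
    # Forward pass: weighted sums through the activation function f.
    y_j = sigmoid(w_hj * x)
    y_i = sigmoid(w_ij * y_j)
    # Backward pass: gradient errors, starting from the output neuron.
    delta_i = (target - y_i) * y_i * (1 - y_i)    # f'(x) = y(1-y) for sigmoid
    delta_j = y_j * (1 - y_j) * (w_ij * delta_i)  # propagated to the hidden layer
    # Weight updates proportional to the local gradient and the incoming signal.
    w_ij += lr * delta_i * y_j
    w_hj += lr * delta_j * x

print(f"trained output: {sigmoid(w_ij * sigmoid(w_hj * x)):.4f}")
```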
Model 1.0 of a synchronous generator with the IEEE ST1A excitation system is adopted, as it is used in most dynamic studies of power systems, such as the studies performed by M. Abido [8], M. Abido et al. [9] and S. A. Alqallaf [10]. The Matlab platform is used to perform the system simulation. The Concept of the NNMPC Controller Model Predictive Control (MPC) is a widely used approach which relies on solving a numerical optimization problem online; however, due to the complexity of nonlinear control problems, it is in general necessary to apply various computational or approximative procedures for the solution. The main drawback of MPC is that the optimization problem may be computationally quite demanding for nonlinear systems. So, in order to reduce the online computational requirements, off-line function approximators, such as artificial neural networks, are applied to represent the optimal control law [11]. Neural networks have been applied very successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron make it a popular choice for modelling nonlinear systems and for implementing nonlinear controllers. Two-layer networks, with sigmoid transfer functions in the hidden layer and linear transfer functions in the output layer, are universal approximators. The Neural Network Model Predictive Controller is based on the concept of the Artificial Neural Network. NNMPC uses a neural network model of a nonlinear plant to predict future plant performance. The controller then calculates the control input that will optimize plant performance over a specified future time horizon. The training data were obtained from the nonlinear model of the system. The model predictive control method used is based on the receding-horizon technique: the neural network model predicts the plant response over a specified time horizon, and the predictions are used by a numerical optimization program to determine the control signal that minimizes a performance criterion over that horizon [12]. The following steps are followed to design the NNMPC: 1) System identification: model predictive control is used to determine the neural network plant model. The prediction error between the plant output and the neural network output is used as the neural network training signal, as shown in Fig. 4 [13-15]. Fig.4. NNMPC System Identification 2) Use the neural network plant model to predict future performance: the past inputs and past plant outputs are used to predict future values of the plant output, as illustrated in Fig. 5 [12]. Here, u(t) is the system input; yp(t) is the plant output; ym(t) is the neural network model plant output; the blocks labeled TDL are tapped delay lines that store previous values of the input signal; IW_{i,j} is the weight matrix from input j to layer i; and LW_{i,j} is the weight matrix from layer j to layer i. 3) Train the neural network plant model by the back-propagation method. 4) Use an optimization algorithm to determine the control signal that minimizes the cost function in equation (15) over the specified horizon:
J = Σ_{j=N1..N2} [y_r(t+j) − y_m(t+j)]² + ρ Σ_{j=1..Nu} [u′(t+j−1) − u′(t+j−2)]² (15)
where N1, N2, and Nu define the horizons over which the tracking error and the control increments are evaluated, u′ is the tentative control signal, y_r is the desired response, y_m is the network model response, and ρ determines the contribution that the sum of the squares of the control increments has on the performance index (a simplified numerical sketch of this receding-horizon optimization is given below).
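The receding-horizon optimization of equation (15) can be sketched numerically as follows; the scalar linear plant standing in for the neural-network model, and the horizon and weighting values, are illustrative assumptions only (step 5 of the design procedure continues below).

```python
# A minimal sketch of minimizing a cost of the form of equation (15) over a
# control horizon; a toy linear plant replaces the neural-network model here.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5            # toy plant: y[t+1] = a*y[t] + b*u[t]
N2, Nu, rho = 10, 3, 0.05  # prediction horizon, control horizon, increment weight
y0, u_prev, y_ref = 0.0, 0.0, 1.0

def cost(u_seq):
    """Tracking-error term plus penalized control increments over the horizon."""
    y, u_last, J = y0, u_prev, 0.0
    for t in range(N2):
        u = u_seq[min(t, Nu - 1)]  # hold the last move beyond the control horizon
        y = a * y + b * u          # model prediction
        J += (y_ref - y) ** 2 + rho * (u - u_last) ** 2
        u_last = u
    return J

res = minimize(cost, x0=np.zeros(Nu))           # numerical optimizer (BFGS default)
print("first optimal control move:", res.x[0])  # only this move is applied each step
```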
5) The block diagram [16-20] in Fig. 6 shows the model predictive control process. The controller consists of the neural network plant model and the optimization block, where the optimization block determines the values of u′ that minimize J in equation (15), and the optimal u is then input to the plant. Fig.6. NNMPC Model Predictive Control Process In this study, the real power in line 2 is considered as the reference signal fed to the NNMPC. The real power in line 2 output from the SMIB is also fed to the NNMPC in order to simulate and give the proper control signal to the plant, as illustrated in Fig. 7. Fig.7. NNMPC Matlab Model Part of the NNMPC's task is to perform the system identification that determines the neural network plant model, whose output is sent to the optimization program to generate the control signal. In order to obtain acceptable performance, the size of the neural network hidden layer was selected to be 30, and 10,000 training samples were used to train the neural network model. MPC Concept Design MPC refers to a type of computer control algorithm that utilizes an explicit process model to predict the future response of the plant [7]. The concept behind MPC is that it takes the reference signals and the plant outputs and generates control outputs just like any other controller, except that it uses an onboard model of the plant to predict the plant's future behaviour by methods such as the Kalman predictor or the BJ, ARX and ARMAX models [8]. Future output prediction is affected by the influence of past states on future outputs, the influence of future inputs on future outputs, and model mismatch. The predicted behaviour of the plant is fed to an optimizer, which adjusts the values of the control outputs to make sure that the predicted plant outputs track the reference signals. MPC is a popular controller in industrial applications because, at every time step of the control algorithm, an optimization is performed to give better control outputs. NNMPC Performance in Case of Sudden Step Change It can be seen that both types of controllers are effective in stabilizing the system. Table 2 shows that the dynamic performance of MPC is slightly better than that of NNMPC in rise time, settling time and overshoot for the only parameter tested (P2). In addition, it can be seen from Fig. 12 that the 10% reduction in power flow in line 2 is diverted to line 1 in order to meet the total load required, which is equal to 1 p.u. The power flow manoeuvre is thus achieved satisfactorily in this case. The corresponding figures show the responses of the real power (P2) and the terminal line voltage (VEt), respectively. In this case, a sudden system disturbance was applied to the real power (P2) at t = 70 s. It can be seen that both types of controllers respond to the system change satisfactorily. Table 3 shows that the dynamic performance of MPC is much better than that of NNMPC in rise time, settling time and overshoot for the only parameter tested (P2). Conclusion A UPFC-based Neural Network Model Predictive Control (NNMPC) has been designed to control the system parameters in the transmission line. The robustness, controllability and effectiveness of the proposed adaptive controller (NNMPC) have been demonstrated. In addition, the dynamic performance of the NNMPC has been tested and compared with that of another adaptive control scheme, the Model Predictive Controller (MPC).
Economic development of a region with a unique ecological system . The article considers the possibility of developing the economy of the Arctic zone of the Russian Federation, taking into account the preservation of the ecological system of the region and the national and cultural characteristics of the indigenous population. The analysis of the resource potential of the region is given, recommendations for the development of the region's economy using the labor potential of the population living there are offered. The article analyzes the development of the region and the possibility of creating eco-friendly enterprises, the work of which does not violate the natural balance of the Arctic. The article highlights the main elements of the Arctic economic system that require support at the level of strategic development of the state and determine the key positions in the development of the region. The article considers the constraints and problems that hinder the economic development of the Arctic zone of the Russian Federation and the conditions for preserving the uniqueness of the nature and culture of the indigenous peoples of the region. The potential of the economic development of the region is investigated. Innovative options for the development of the region in combination with modern technologies for preserving the unique ecological system and the identity of the indigenous population are considered. Promising directions of economic development of the Arctic region while preserving the uniqueness of nature and cultural traditions of indigenous peoples are proposed. Introduction The Arctic as a special natural zone is interesting and attractive for various sectors of the economy of all countries located on the territory of this natural zone. The richness and diversity of the Arctic nature is an object of national interests of many states. All countries represented in the Arctic consider the use of natural resources as part of their economic development strategy. However, the uniqueness of the natural conditions of this region can be considered an international heritage of all mankind. The history of research and development of the Arctic is primarily associated with the development of natural resources. The Russian Federation has the longest border in the Arctic. The development of the natural resources of the Russian Arctic has a long history. The Arctic zone is of strategic importance for the economy of the Russian Federation [1]. Mineral resources are important not only for the development of the branches of the national economy of Russia, but also represent a promising export potential [2]. In terms of the number of minerals of strategic importance for Russian industry, the Arctic zone is the most important object that needs to be developed taking into account the preservation of natural conditions and the identity of the indigenous population [3]. The Arctic region is a unique nature reserve, the nature of which is an ecological resource that requires careful treatment and maintenance of natural conditions. The Arctic is an element of the global climate system, associated with its other elements-the transport of heat, moisture, salt and water due to the circulation of the atmosphere and the ocean [4]. The spontaneous and uncontrolled development of the natural resources of the region led to a violation of the ecological balance and caused damage to the indigenous population. 
Materials and methods The richness of the Arctic's natural resources is the basis for the economic development of the countries present in the region. On the one hand, the refusal to develop natural resources is impossible due to the depletion of the potential of the economy of the region and the country as a whole. On the other hand, industrial production in a unique natural system can lead to a violation of the ecological balance. Technogenic impact and industrial expansion can change the traditional way of life of the indigenous population, which will lead to both an outflow of labor resources [5] and a distortion of the cultural heritage of the peoples of the Arctic zone. It can be assumed that the main task in the development of the Arctic region and the development of the economy will be to preserve the uniqueness of the ecological system in combination with the support of traditional folk crafts and the identity of indigenous nationalities [6]. The main purpose of the study was to combine promising directions for the development of the Arctic zone of the Russian Federation and to preserve the uniqueness of the natural and cultural features of the region. The objectives of the study were determined in accordance with the goal: to study the promising directions of economic development of the Arctic zone of the Russian Federation; identify the problems of anthropogenic and technogenic impacts in the Arctic zone; unlock opportunities of development of the Arctic region with the preservation of the natural environment, culture and life of indigenous peoples; to present the results of a statistical analysis of current trends in the maintenance and preservation of the Arctic ecological system in the Russian Federation. To solve these tasks, we used the reports of the Official Statistics Service of the Russian Federation (Rosstat), statistical reports of public opinion research in Russia. Comparative, descriptive and statistical methods of research, as well as the method of analyzing scientific literature. The statistical data were subjected to a comparative analysis, which allowed us to draw conclusions that correspond to the tasks set. The main sources of economic growth in the Arctic region of the Russian Federation are the following: development of economic sectors related to the extraction of minerals, which are rich in the region; creation of transport chains connected with the activities of the Northern Sea Route, providing industry and the social sphere of the region [7]; using the human potential of the indigenous peoples inhabiting the Arctic region, expanding and supporting the local population in combination with using the labor potential of the attracted labor force [8]; preservation and maintenance of the natural environment as a resource of ecological and extreme tourism [9]. The Russian Federation currently produces 91% of natural gas and 80% of industrial gas in the Arctic from all gas production in the country. In addition to the diluted gas reserves, almost all undiscovered Arctic gas reserves are located off the coast of Russia. The total value of mineral resources in the bowels of the Arctic regions of Russia exceeds 30 trillion dollars, and two-thirds of this amount is accounted for by energy carriers. And the total value of the explored reserves is 1.5-2 trillion dollars. This indicates a low degree of exploration, and even more so the development of subsurface resources, and does not allow the full use of the natural resources of the Arctic. 
Figure 1 shows the percentage of mineral resource development in the Arctic in the Russian Federation. As shown in Figure 1, the percentage of mineral development is only 6.2% of the estimated reserves. A natural question arises: why are the rich resources of mineral raw materials not used? Technological progress and digitalization of production allows us to develop hard-to-reach regions, severe climatic conditions are leading among the factors limiting the development of natural resources in the Arctic. The second most important factor is the need to develop and use complex and expensive technologies to preserve the unique ecological system of the region. Some of them cannot be used in the conditions of the far North or require specialization that cannot be replicated. If such technologies are developed, there will be investors interested in such developments, and the technological process must be adapted to the ecological system of the Arctic region. The combination of complex natural conditions with the need to preserve a unique ecosystem requires unique technical developments that can only be used in a specific region, only to perform certain tasks. There are currently no such scientific and technological developments. In addition, if the projects of the 1980s for the development of the Arctic region are unfrozen, they will be very costly, since it will require significant refinement and improvement of these projects in accordance with new technologies and new demands of society. The third constraint is the lack of a sufficient number of qualified specialists to develop the natural resources of the Arctic in accordance with environmental requirements. The region is also experiencing a shortage of labor resources. Representatives of indigenous peoples are not numerous, and workers who come to the region experience difficulties due to the imperfection of social and living conditions in difficult climatic conditions. The products produced in the Arctic region provide 11% of Russia's national income and more than 20% of all-Russian exports [10]. However, the number of employees operating in Arctic organizations is less than 3% of the total number of employees in the Russian Federation. This ratio is shown in figure 2. Figure 2 allows us to conclude that 11% of the national income is provided by 2.9% of the working-age population. This fact proves the lack of labor resources in the region. The lack of qualified personnel does not allow the development of technologies in the region that ensure the stability of the environmental and technological balance. The indigenous population of the Russian Arctic does not exceed 200 thousand people [11]. Due to the small number and lack of the required qualifications, the indigenous population cannot meet the requirements of the modern labor market in the region [12]. The centuries -old period of adaptation to extreme natural conditions has determined the specifics of their way of life, the uniqueness of culture and traditions. Most of them continue to use natural resources in traditional ways, preserving their identity. Violation of this way of life can lead to the disappearance of some indigenous peoples. At the same time, the adaptation of the indigenous population to the harsh climatic conditions of the region allows us to consider this small social group as a resource for the development of the Arctic [13]. As a positive trend, it can be noted that the life expectancy of the population in the Arctic has increased. 
The life expectancy of the indigenous population of the Arctic has always been lower than in Russia. Currently, there is an increase in the life expectancy of the population. So the life expectancy of people in the Arctic zone in 2014 was 70.15 years, in 2018 -an increase to 72.39 years was noted [14]. The Arctic is of interest to fans of extreme tourism [15]. The Russian Federation has unique recreational resources in the Arctic. The Arctic Economic Council of Russia, along with energy and infrastructure, has created a working group on tourism. However, the number of offers on the tourist services market is small. It offers almost the only tourist product-a cruise to the North Pole. This offer is unique and is an exclusive Russian tour. The development of ecological and cruise tourism in the Arctic depends on the modernization of transport hubs. Of particular importance is the Northern Sea Route, which began operating in the mid-30s of the twentieth century. The discovery of the Northern Sea Route was the result of numerous research works that were carried out over several centuries in the Russian Empire. The goal was to create a transport corridor for the supply of goods to the regions of the far North. The modern functions of the Northern Sea Route have changed, and cruise tourism and extreme tourism in the Arctic can be added to them. The use of the Northern Sea Route for the development of tourism in the Arctic can make a significant contribution to the economy of the region [16]. Discussion The awareness of the inextricable link between the economic development of the Arctic region and the preservation of the unique ecological system and the cultural heritage of indigenous peoples led to a new state strategy for the development of the Russian Arctic. the development of digital technologies in the field of mining and oil refining industry, the use of telecommunications to control technological processes in industry will reduce the share of heavy physical labor, reduce the time of performing labor operations [17]. In conditions of lack of labor resources and the need to minimize the anthropogenic negative impact on nature. Digitalization of the industry can be the best solution; innovative technologies in crop and animal husbandry will allow indigenous peoples to create jobs, facilitate work and improve the quality of life. Fishing and animal husbandry are traditional activities of the indigenous peoples of the North. Training the indigenous population in new technologies and professional skills will help to interest the younger generation in the development of basic activities; it is necessary to create conditions for indigenous people to receive higher education in their native region in the specialties that are in demand and necessary for the development of traditional activities. The connection with ethnic traditions leaves an imprint on all spheres of life of the indigenous nationalities of the Arctic. Getting an education requires a special vacation mode and a format of interaction with teachers. 
Digitalization of education will help representatives of indigenous nationalities to get an education without leaving their area of residence; issues of environmental safety of the region should be resolved jointly by representatives of the regional administration, the federal authorities and the indigenous population; the use of the transport arteries of the Arctic zone to solve new problems in developing new tourist destinations is transforming the activities of the Northern Sea Route and cross-Arctic air travel. The development of new logistics opportunities is associated with the modernization and renewal of transport hubs [18], the largest and most famous being Murmansk, Pevek, Tiksi, Dixon and Dudinka. The Murmansk Region, which has historically been the "gateway" to the Arctic, can act as the flagship of Russian cruise tourism. Ecotourism and ethnographic tourism are promising areas for the Arctic. Ecotourism implies minimal anthropogenic impact on the places visited and promotes the preservation of natural objects. Ethnographic tourism involves the direct "immersion" of participants in the original culture of the small indigenous peoples of the Arctic, which will allow these peoples to transmit their rich cultural heritage to other cultures and will contribute to the development of public interest in the life of the indigenous population [19]. To ensure the preservation of the Arctic ecological system during the industrial development of natural resources and the development of enterprises in the region in accordance with the state development strategy, investments in fixed assets for the protection and restoration of natural resources are increasing. A comparative analysis of investments from 2017 to 2019 is presented in Figure 3. As Figure 3 shows, there is an increase in investment in fixed assets in all areas of environmental protection aimed at minimizing anthropogenic impact: protection and rational use of water resources; protection of atmospheric air; protection and rational use of land; and protection of the environment from pollution by production and consumption waste. The trend of increasing investment in the protection of the natural resources of the Arctic zone of the Russian Federation allows us to predict the development of the region in accordance with modern environmental requirements. Conclusions The development of the Arctic economy with the preservation and protection of the unique natural environment allows us to identify the following promising areas: development of raw materials industries and mining in combination with the environmental responsibility of the enterprises engaged in extraction. An analysis of the amount of waste recycled by enterprises of the Arctic zone shows that in 2019, compared to 2017, the amount of recycled industrial waste increased 33 times; this fact testifies to the increased environmental responsibility of industrial enterprises in the Arctic, and the data are shown in Figure 4; development of the human potential of the indigenous peoples of the Arctic, which is linked to the maintenance of the natural environment in which they live and engage in traditional animal husbandry. Support for the traditional activities of the local population is associated with land reclamation: Figure 5 shows that the amount of reclaimed land doubled from 2017 to 2019, which confirms the trend towards the preservation and growth of the indigenous population.
Preservation and maintenance not only of the ecological system but also of the cultural heritage of the population historically living in the Arctic; modernization of the transport arteries of the Arctic zone to meet new challenges, which will change the operation of the Northern Sea Route and cross-Arctic air travel, while the renewal of transport hubs will help develop the region's economy and minimize the negative impact on the environment; development of digital technologies in industry, where the use of telecommunications to control technological processes will facilitate the work of employees and reduce the need for labor resources [20]; and preservation of the unique natural conditions of global value, which constitute a unique resource for ecological and extreme tourism. The Arctic zone of the Russian Federation combines factors that need to be taken into account in the future development of the region: a lack of labor, difficult climatic conditions, a unique ecological system and the identity of the indigenous population. The integration of the rich natural potential of the region with these features will ensure the economic development of the region while maintaining the balance of the ecological system.
Higher education in the crisis period: A comparative analysis of the Ukrainian experience of online or blended TEFL during the pandemic and the war The article is dedicated to the problem of teaching English as a foreign language (TEFL) in higher education during the crisis period by an example of a Ukrainian experience of teaching during the pandemic and the war. In the beginning, the research focused only on the impact of the COVID-19 pandemic on the TEFL at tertiary schools in Ukraine. However, after the outbreak of the war on the territory of Ukraine on 24 February 2022, it became clear that the experience could be extrapolated to the conditions of other crises, in particular, to the conditions of armed aggression. This article uses mixed quantitative and qualitative comparative methods to analyze the survey results of educators' attitudes toward online and blended learning. The blind questionnaire was held in May 2021 and then repeated in February 2023. The participants (n=70, n = 69) were representatives of about 50 higher education institutions (HEI) from all over Ukraine. The research is a case study with context-dependent knowledge; however, it may be with some reservations relevant to other countries under similar conditions. The instrument included 10 questions related to the participant's previous experience of online teaching, reflections on the most difficult elements of online teaching, advantages and disadvantages of online teaching, and types of learning activities that can easily be adapted and transferred into online mode of learning. The questionnaire also covers the role of a teacher in online learning, the overall assessment of the experience of online learning and preferences of the mode of teaching which the teachers would like to preserve. The structure of the questionnaire included 4 multiple-choice single-answer questions and 6 multiple-choice multiple-answer questions. Answers from the two polls of respondents were then compared using Fisher's correlation coefficient to prove the statistical significance of the received data. The results of the questionnaire have shown that the majority of participants (61 %; 42%) think that implementation of blended learning is the best option to use in the universities, thus presenting a need to the higher education system of Ukraine to develop and introduce blended learning curricula. Introduction In the United Nations ' Transforming Education Summit (2022) it was stated that 'The world is witnessing an alarming increase in the number of people affected by armed conflict, forced displacement, including large-scale refugee displacement, health and climate-induced disasters, and other crises. This means disrupted education for 222 million school-aged children and youth and education systems pushed to the brink of their capacities to deliver.' 1 Unfortunately, Ukraine's share in these statistics is quite big. The education system, and in particular, higher education of Ukraine, faced two consecutive challengesthe COVID-19 pandemic and the armed aggression of the Russian Federation, both having immense effects on the functioning of the system. On the background of war, the challenge of the pandemic in Ukraine faded. However, it can be said that the experience and the solutions developed in its conditions became quite useful. In fact, we can trace both similar and different features of these two calamities. 
The COVID-19 pandemic period can be characterized by its seasonal nature, with surges and relatively steady periods; the disease also proved to be more dangerous to certain groups of people; some precautions (such as wearing protective masks, following hygiene rules, and observing quarantine restrictions) could be taken to lessen the potential harm of COVID; and, apart from some reasonable limitations (on using public transport, attending crowded places, etc.), the infrastructure and communication systems were not affected. In contrast, under conditions of war, people are in danger all the time; missile and other types of strikes are unpredictable, equally threatening children, youth, adults and older people. Critical infrastructure and communication systems also suffer damage or become irreparable, causing substantial difficulties in ensuring the basic needs of citizens. One factor common to both crisis periods is the partial or complete inability of students to regularly attend lessons on university premises, for reasons of danger to health and life and of emotional and psychological stress. In this article, we trace the cumulative effect of higher education institutions' work in the crisis period on educators' attitudes towards remote teaching. With the help of a questionnaire, we examine how their attitude to, evaluation of and approach to online or blended teaching developed, from the outbreak of COVID to the year of war on the territory of Ukraine. Literature review The first case of COVID-19 infection was diagnosed in Ukraine on 3rd March 2020. Soon after, on 11th March, the all-Ukrainian quarantine was announced, which meant suspending the activity of all educational establishments. In April, the need to secure people's basic right to education and to continue teaching led to the resumption of studies in a remote, mostly online way. However, it was quite a difficult and stressful process. Teachers faced many difficulties, starting from a basic lack of technical equipment or a proper Internet connection on the part of the teacher and/or the student (especially in rural areas), through a lack of experience and skills in using technical means of education, to many emotional and psychological issues related to the feeling of isolation. The results of research and surveys conducted in Ukraine revealed a number of problems with ensuring the right to quality education, including social, economic, organizational and educational, psycho-emotional, technical, and sanitary and hygienic ones (Novikov, 2021). Similar challenges were experienced by teachers worldwide (Elangovan & Parayitam, 2022; Rana & McBee-Black, 2022; Roy & Covelli, 2020). The possibility of restarting offline classes varied in time across different parts of Ukraine, as a flexible approach to imposing quarantine restrictions was applied in the state. Higher education institutions in Ukraine enjoy the right to organize their mode of study relatively independently, so when the state quarantine restrictions were eased, some of them came back to offline study (with multiple interruptions), and some applied blended or distance learning modes of instruction. Almost three years after the start of the pandemic, as of 1st March 2023, the all-time number of COVID cases in Ukraine stood at 5,382,095, among which there were 111,175 deaths 2. Ukraine took 10th place in Europe by the number of cases and 23rd place in the world by the number of deaths due to COVID.
Assessing the impact of the pandemic on education, researchers worldwide agree that the coronavirus pandemic has changed the landscape of higher education (Singh et al., 2021) and has become not only a health crisis but also an educational one (Bryant et al., 2022). And although for schools the scale of COVID-19 education loss is 'nearly insurmountable', as UNICEF warns 3, the experience of online learning in higher education seems to demonstrate some positive results. Grose (2021) reviewed a number of pandemic-forged changes that universities in North America are likely to keep in post-pandemic times, which included videos and simulations, inverted classrooms, learning management systems, AI-assisted grading, and the addition of social workers to address mental health issues. Reflecting on the impact of COVID on Irish tertiary schools, Kinsella (2020) states that 'both directly and indirectly, recent events call into question the rationale of universities in their present form'. However, while admitting the unsustainability of this form, the author concludes that the crisis may also make it possible to reaffirm the true and broader identity of universities. Abdus-Saboor (2020) emphasizes the role of COVID-19 in spotlighting the problems of an inequitable learning environment and the positive transformation of old pedagogy in Georgia. In India, the pandemic opened up opportunities for web-based teaching. Moreover, as Elangovan & Parayitam (2022) state, 'as the online instruction is expected to continue for some more time, as the global pandemic is slowly becoming endemic, blended or hauntological teaching may eventually substitute the traditional chalk-and-talk face-to-face teaching'. The idea of retaining some of the elements of remote learning is also shared by Ukrainian academics. Research on students' attitudes toward distance learning during the pandemic showed that, on the whole, students positively assessed the transformations introduced in the learning process (Melnychenko, Zheliaskova, 2021). The impact of the war on education in Ukraine, however, is drastic. It meant not only the inability to continue studying but also jeopardized the very lives of participants in the education process, leading to their forced displacement and to the physical destruction of educational establishments. The number of refugees who have fled Ukraine to other countries since the beginning of the war exceeds 16.6 million persons. The Ministry of Education and Science of Ukraine reports that 2,619 educational institutions have been damaged by bombing and shelling, and another 406 are completely destroyed (statistics as of 23 December 2022) 4. Negative experiences during conflicts have also been shared by other education systems worldwide and reflected in relevant research. Among the devastating effects of the war in Yemen on education, Muthanna, Almahfali, and Haider (2022) list displacement and discrimination, the use of children as fighters for the future, the conflict of identities among children, the destruction of children's physical and mental health, the exploitation of education for financial benefits, the normalization of negative behaviors, and the destruction of teachers' dignity. Assessing the potential effect of the civil conflict in the Basque region on the incentives to acquire education, de Groot and Goksel (2011) conclude that individuals living in conflict areas tend to acquire higher education, with a high probability of future migration among this group.
Using the example of higher education in Afghanistan, Abdulbaqi (2009) stresses the need for investment in education, devising education policy, and taking revolutionary steps. However, the author comes to the sad conclusion that nothing constructive can be fully realized as long as the country is in a state of war. For Ukraine, a way to continue education in the conditions of the war was found in the wide application of online and blended models of teaching, a trend that is developing rapidly worldwide and is supported by the UN, which declares that teaching must be transformed and that the digital revolution is key to this transformation. Its report of 2022 indicates that 'If properly harnessed, connectivity and openly accessible digital teaching and learning resources can contribute to the transformation and democratization of education.' 5

Blended learning is a mode of instruction that combines electronic and online media with traditional face-to-face teaching. Blended teaching presupposes that students attend some of the lessons in a brick-and-mortar classroom while other lessons are given online. It is up to the situation and the conditions in which students and teachers find themselves to decide how to create the 'blend': which lessons to allocate to online work, and which to traditional classroom work. In online learning, all of the study takes place online, in a virtual environment, using technological means of communication (video conferencing tools, messengers, e-mail, etc.). In turn, online education can be performed in synchronous or asynchronous mode. The former means that students need to be online during video lectures, lessons, or any other kind of learning activity; the latter means the possibility of studying offline with subsequent submission of the work. The problem of blended learning has been studied from a variety of angles. In-depth research by Liu et al. (2016) aimed to assess the effectiveness of blended learning for health professional learners compared with no intervention and with non-blended learning. The authors came to the conclusion that blended learning appeared to have a consistently positive effect in comparison with no intervention and to be more effective than, or at least as effective as, non-blended instruction for knowledge acquisition in the health professions. The main aim of Lozano-Lozano et al. (2020) was to examine the short-term effects of a blended learning method using traditional materials plus a mobile learning app on knowledge, motivation, mood state, and satisfaction among undergraduate students enrolled in a health science first-degree program. The problems of language education during the COVID-19 pandemic were also in the spotlight of such researchers as Tafazoli (2021) (professional development of teachers), Shen (2020) (teaching planning), and Alamer (2022) (psychological factors). Taking into consideration the research on education in emergencies and crisis periods and on teaching under the conditions of the COVID pandemic, as well as investigations into the use of blended and online learning, we can say that Ukraine demonstrates a unique experience of consecutive exposure to both health and war emergencies, combined with the possibility of implementing different types of online and blended modes of study. The education system proved to be stable and functional through these challenges, demonstrating a positive example of sustainability, continuity, and the ability to react in the short term to changing circumstances and unpredictable restrictions.
The aim of this research is to conduct a comparative analysis of the attitudes of higher education teachers towards online and blended learning (2021) and to analyze how this experience influenced the adaptation of education to the conditions of study during the war (2023). This led to the formulation of the following research questions: RQ1) How have the attitudes of educators towards online/blended teaching changed after COVID-19 and the war? RQ2) Do more teachers have experience of online teaching and believe their skills are sufficient to teach online? RQ3) Do educators want to retain online or blended modes of teaching?

Research design

The research was organized in four stages, from March 2021 (a year after the start of the pandemic) to February 2023 (a year after the start of the war). The research instrument comprised 10 mixed questions. Before answering the questions, the participants gave their consent to the analysis and interpretation of the collected data. No data regarding age, gender or personal information was requested.

Method

The research combined mixed quantitative and qualitative, comparative, and statistical methods to analyze the survey results on educators' attitudes towards online and blended learning in Ukraine during the COVID-19 pandemic and the Russian-Ukrainian war. Due to the nature of the research tools, we address some issues quantitatively, giving percentages of the answers and statistically analyzing their value. However, some answers are unquantifiable, which dictates the need to apply a qualitative approach to investigate teachers' perceptions of some of the issues, generalizing the open answers and conceptualizing them. As stated by Merriam (2009), a qualitative methodology can be used to research what meaning participants attribute to their experiences, so we applied it to describe the change of attitudes towards online and blended learning. Visualization was applied to illustrate the geographical dissemination of the received responses and to build the comparative graphs.

Participants

The participants of the study were 70 educators (first wave) and 69 educators (second wave) from a wide variety of Ukrainian institutes and academies. The invitation to participate in the questionnaire was sent via social messengers in professional groups of educators (i.e., the European League of Professional Development) and among colleagues on the 'snowball' principle. The geographic location of the participants played a key role in the dissemination process, as we wanted the survey to cover the whole territory of Ukraine. Naturally, some scientific and educational centres of Ukraine (like the capital Kyiv, Kharkiv, Lviv, etc.) are represented more heavily (see Pics.).

Experience of the respondents

The first question addressed the issue of online or blended teaching experience prior to the pandemic and the war. In 2023, after the first crisis and a year since the beginning of the war, the number of teachers with relevant online/blended experience had increased significantly, to 88.4 percent compared to 22.9 percent in 2021. It can be assumed that the remaining participants, who did not have the relevant experience, were either newly recruited staff or had been forced to suspend their activities during the COVID-19 pandemic.

Attitude toward online study

The attitude towards online study shifted only in some respects.
The respondents generally still see time-saving, opportunities to use multimedia content, access to information, and the possibility of automated checking of tests as the advantages of online study. However, in 2023 fewer respondents believed that online study can offer such advantages as broader possibilities for organizing the learning process (44.3 percent in 2021 compared to 33.3 percent in 2023), modern means of presenting study materials (65.7% in 2021 vs. 50.7% in 2023), and easier contact with students (30% in 2021 vs. 18.8% in 2023). This interesting result is probably due to the fact that online study during the war was often impaired by power outages, which undermined some of the benefits of the online format. In absolute terms, easier contact with students was chosen by 21 respondents (30%) in 2021 and 13 (18.8%) in 2023, while 4 (5.7%) and 3 (4.3%), respectively, chose none of the above. As for the disadvantages of online study, the majority (68.6% in 2021 and 71% in 2023) named technical issues. An interesting shift occurred in the assessment of the impossibility of properly presenting and explaining materials online. In 2021, 24.3% believed quality presentation to be impossible, but in 2023 this number fell to only 5.8%, demonstrating positive dynamics, presumably due to newly acquired skills.

Changes in teaching style

Reflecting on changes in their personal style of teaching, in both waves of the survey the respondents mentioned that they plan a lesson in a different way and reorganize the types of work during lessons (58.6% in 2021; 60.9% in 2023), and that they conduct a more active search for new materials and use more Internet resources (55.7% in 2021 and 52.2% in 2023). Interestingly, over time educators started to allocate less effort to holding students' attention and maintaining discipline (25.7% in 2021 and 8.7% in 2023). The number of those who research new methods of content presentation and the organization of group work also fell (from 74.3% in 2021 to 59.4% in 2023). Fewer respondents reported feeling less confident or struggling with online work (9 respondents, 12.9%, in 2021 vs. 3 respondents, 4.3%, in 2023), while very few reported no changes at all (1.4% and 4.3%, respectively). Educators were also asked to choose the roles of a teacher that they consider most relevant during online study. The majority chose the role of a tutor, stressing the importance of conducting lessons, selecting content, and creating feedback mechanisms (75.7% in 2021 and 69.6% in 2023). The importance of the role of a teacher as a mentor who fosters intellectual development and offers emotional support rose slightly in the eyes of the respondents (from 60% in 2021 to 63.8% in 2023). We can also observe an increase in answers on the importance of the role of a teacher as a coordinator (from 27.1% in 2021 to 37.7% in 2023).

Assessment of online and blended teaching

Some interesting changes were observed in the general assessment of online teaching. While in 2021 the majority (60%) believed it to be developing (offering the development of professional skills and competencies), in 2023 most respondents (58%) chose the option 'perspective' (i.e., promising), acknowledging its high potential for future use.

Validation of the results

We applied Fisher's angular transformation criterion to assess the statistical validity of the results obtained in the questionnaire. To address our research question of whether more educators have experience of teaching online, we put forward two hypotheses: Hypothesis 0. The sample of participants who have experience of teaching online is not significantly bigger than the analogous sample in 2021. Hypothesis 1. The sample of participants who have experience of teaching online is significantly bigger than the analogous sample in 2021.
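The criterion itself is not spelled out step by step in the article; assuming it is the standard Fisher angular transformation of proportions (phi = 2·arcsin(sqrt(p)), with critical values of roughly 1.64 at p <= 0.05 and 2.31 at p <= 0.01), the comparison of the two shares can be sketched as below. The effective sample sizes are taken here to be the nominal wave sizes n = 70 and n = 69, which is an assumption, so the printed statistic need not reproduce the article's empirical values exactly.

    import math

    def fisher_phi_star(p1, n1, p2, n2):
        """Fisher's angular transformation criterion for two proportions.

        phi = 2 * arcsin(sqrt(p)); the empirical statistic is
        |phi1 - phi2| * sqrt(n1 * n2 / (n1 + n2)), compared against
        critical values (about 1.64 at p <= 0.05, 2.31 at p <= 0.01).
        """
        phi1 = 2 * math.asin(math.sqrt(p1))
        phi2 = 2 * math.asin(math.sqrt(p2))
        return abs(phi1 - phi2) * math.sqrt(n1 * n2 / (n1 + n2))

    # Shares of teachers with prior online/blended experience (assumed n's):
    # 22.9% of 70 in 2021 vs. 88.4% of 69 in 2023.
    print(fisher_phi_star(0.229, 70, 0.884, 69))  # ~8.5, well above 2.31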
The table of empirical frequencies for the two samples is presented below. The values φ1 and φ2, which correspond to the percentages in each of the samples, allow us to support Hypothesis 1: the sample of participants who have experience of teaching online is significantly bigger than the analogous sample in 2021. To address our research question of whether, after three years of online or blended teaching, more educators believe their skills are sufficient to teach online, we put forward two further hypotheses: Hypothesis 2. The sample of participants who consider their online teaching skills sufficient in 2023 is not significantly bigger than the analogous share in 2021. Hypothesis 3. The sample of participants who consider their online teaching skills sufficient in 2023 is significantly bigger than the analogous share in 2021. The table of empirical frequencies for the two samples is presented below. The values φ1 and φ2, which correspond to the percentages in each of the samples, are as follows: φ1 (52.9%) = 1.629; φ2 (63.8%) = 1.848. The empirical value equals 1.641. Since the obtained value 1.641 belongs neither to the range of significance nor to the range of insignificance, we can neither support nor reject Hypothesis 2 and Hypothesis 3. Although we can see an increase in the number and percentage of respondents who think they have enough skills, more research is needed to obtain statistically valid results.

Discussion

In general, the results obtained are in line with recent research on the topic of emergency transitions to online teaching. A survey by Marshall, Shannon & Love (2020) of how teachers experienced the COVID-19 transition to remote instruction showed that a large majority, 92.4% of teachers, had never taught online before the emergency transition, and very few had received any training from their school or school district. Respondents rated all of their job functions as more challenging remotely, and they said they lacked adequate time to do the job well and, in some cases, lacked access to many of their pedagogical materials. There is also an ongoing global trend to utilize the advances of online and blended learning and teleworking. Some researchers observe the desire of participants to continue remote working post-pandemic. For example, Donnelly & Proctor-Thomson (2015) conducted a case study of a government agency that switched entirely to home-based telework and revealed that staff members benefited from telework, in particular through an increased incentive to return to work and an improved work-family balance. Anthonysamy (2022) states that perceived usefulness, satisfaction, confirmation of expectations, and work-life balance add up to employees' desire to continue telecommuting post-pandemic. A shift in the role of the teacher and in the overall paradigm of education is also noted by Strugielska, Guttfeld and Linke-Ratuszny (2021), who observed a 're-configuration at the conceptual level of a language teaching method, where the central ingredient, cognitivism, seems likely to gravitate towards humanism and constructivism rather than the outlier, behaviorism'.

Limitations

This study suffers from several limitations: non-response bias and perception bias. Moreover, an ideal empirical survey would include a larger sample of respondents to obtain solid statistical results.

Conclusions

Education plays one of the most vital roles in modern life.
Though in developed countries education, along with other basic state services, has been taken for granted, it is a crisis period that can demonstrate the true value of such an achievement of modern society. Ukraine's case study illustrates how it proved necessary to continue the education process even during the most acute crisis periods: pandemic and war. The year 2020 will be marked in history by the pandemic of COVID-19, unprecedented in the 21st century, which drastically influenced literally every part of human life. Education, as one of the most irreplaceable and engaging spheres of human activity, faced the urgent need to rethink and reorganize the teaching and learning process overnight. Education has changed, and it has become common to have students learning remotely and teachers mastering virtual learning environments. Never could we have predicted that this experience would prove invaluable during another crisis.
Quantum Minimum Distance Classifier

We propose a quantum version of the well-known minimum distance classification model called the Nearest Mean Classifier (NMC). In this regard, we presented our first results in two previous works. First, a quantum counterpart of the NMC for two-dimensional problems was introduced, named the Quantum Nearest Mean Classifier (QNMC), together with a possible generalization to any number of dimensions. Second, we studied the n-dimensional problem in detail and showed a new encoding for arbitrary n-feature vectors into density operators. In the present paper, another promising encoding is considered, suggested by recent debates on quantum machine learning. Further, we observe a significant property concerning the non-invariance under feature rescaling of our quantum classifier. This fact, which represents a meaningful difference between the NMC and its quantum version, allows us to introduce a free parameter whose variation provides, in some cases, better classification results for the QNMC. The experimental section is devoted (i) to comparing the NMC and QNMC performance on different datasets; and (ii) to studying the effects of the non-invariance under uniform rescaling for the QNMC.

Introduction

In recent years, we have observed an increasing interest in the use of the quantum formalism in non-microscopic domains [1][2][3][4]. The idea is that the powerful predictive properties of quantum mechanics, used for describing the behavior of microscopic phenomena, turn out to be particularly beneficial also in non-microscopic domains. Indeed, the real power of quantum computing consists in exploiting the strength of particular quantum properties in order to implement algorithms which are much more efficient and faster than their classical counterparts. For this purpose, several non-standard applications involving the quantum mechanical formalism have been proposed, in research fields such as game theory [5], economics [6], cognitive sciences [7], signal processing [8], and so on. Further, particular applications, interesting for the specific topics of the present paper, concern the areas of machine learning and pattern recognition.

Quantum machine learning aims at using the advantages of quantum computation in order to find new solutions to pattern recognition and image understanding problems. In this regard, several efforts exploiting quantum information properties for the resolution of pattern recognition problems can be found in [9], while a detailed overview concerning the application of quantum computing techniques to machine learning is presented in [10].

In this context, there exist different approaches involving the use of the quantum formalism in pattern recognition and machine learning. We can find, for instance, procedures that exploit quantum properties in order to reach advantages on a classical computer [11][12][13], or techniques supposing the existence of a quantum computer in order to perform, in an inherently parallel way, all the required operations, taking advantage of quantum mechanical effects and providing high performance in terms of computational efficiency [14][15][16]. One of the main aspects of pattern recognition is focused on the application of quantum information processing methods [17] to solve classification and clustering problems [18,19].
The use of quantum states for representing patterns has a twofold motivation: as already discussed, first of all it permits the exploitation of quantum algorithms for enhancing the computational efficiency of the classification procedure. Secondly, it is possible to use quantum-inspired models in order to reach some benefits with respect to classical problems. With regard to the first motivation, in [15,16] it was proved that the computation of distances between d-dimensional real vectors takes time O(log d) on a quantum computer, while the same operation on a classical computer is computationally much harder. Therefore, the introduction of a quantum algorithm for the purpose of classifying patterns based on our encoding offers potential advantages for speeding up the whole procedure.

Even if in the literature we can find techniques proposing some kind of computational benefit [20], the main problem of finding a more convenient encoding from classical to quantum objects is currently an open and interesting matter of debate [9,10]. Here, our contribution consists of constructing a quantum version of a minimum distance classifier in order to reach some advantage, in terms of the error in pattern classification, with respect to the corresponding classical model. We have already proposed this kind of approach in two previous works [21,22], where a "quantum counterpart" of the well-known Nearest Mean Classifier (NMC) was presented.

In both cases, the model is based on the introduction of two main ingredients: first, an appropriate encoding of arbitrary patterns into density operators; second, a distance between density matrices, representing the quantum counterpart of the Euclidean metric in the "classical" NMC. The main difference between the two previous works is the following: (i) first [21], we tested our quantum classifier on two-dimensional datasets and proposed a purely theoretical generalization to an arbitrary dimension; (ii) second [22], a new encoding of arbitrary n-dimensional patterns into quantum states was proposed and tested on different real-world and artificial two-class datasets. In both cases, we observed a significant improvement of the accuracy in the classification process. In addition, we found that, by using the encoding proposed in [22], and for two-dimensional problems only, the classification accuracy of our quantum classifier can be further improved by performing a uniform rescaling of the original dataset.
In this work, we propose a new encoding of arbitrary n-dimensional patterns into quantum objects that preserves information about the norm of the original pattern, extending both the theoretical model and the experimental results to multi-class problems. This idea was inspired by recent debates on quantum machine learning [9], according to which it is crucial to avoid loss of information when a particular encoding of real vectors into quantum states is considered. Such an approach turns out to be very promising in terms of classification performance compared to the NMC. Further, differing from the NMC, our quantum classifier is not invariant under uniform rescaling. In particular, the classification error provided by the QNMC changes under feature rescaling. As a consequence, we observe that, for several datasets, the new encoding exhibits a further advantage that can be gained by exploiting the non-invariance under rescaling, also for n-dimensional problems (in contrast to the previous works). To this end, some experimental results are presented.

The organization of this paper is as follows. In Section 2, the classification process and the formal structure of the NMC for multi-class problems are described. Section 3 is devoted to the definition of a new encoding of real patterns into quantum states. In Section 4, we introduce the quantum version of the NMC, called the Quantum Nearest Mean Classifier (QNMC), based on the new encoding previously described. In Section 5, we show experimental results related to the comparison of the NMC and the QNMC, which generally exhibit better performance of our quantum classifier (in terms of error and other meaningful classification parameters) with respect to the NMC. Further, starting from the fact that the QNMC is not invariant under uniform coordinate rescaling (contrary to its classical counterpart), we also show that for some datasets it is possible to derive a benefit from this non-invariance property. Finally, the last section includes conclusions and probable future developments.

The present work is an extended version of the paper presented at the conference Quantum and Beyond 2016, Vaxjo, 13-16 June 2016 [23], significantly enlarged in theoretical discussion, experimental section and bibliography.

Minimum Distance Classification

Pattern recognition [24,25] is the branch of machine learning whose purpose is to design algorithms able to automatically recognize "objects". Here, we deal with supervised learning, whose goal is to infer a map from labeled training objects. The purpose of pattern classification, which represents one of the main tasks in this context, consists in assigning input data to different classes. Each object is univocally identified by a set of features; in other words, we represent a d-feature object as a d-dimensional vector x = [x^(1), . . ., x^(d)] ∈ X, where X ⊆ R^d is generally a subset of the d-dimensional real space representing the feature space. Consequently, any arbitrary object is represented by a vector x associated with a given class of objects (but, in principle, we do not know which one). Let Y = {1, . .
., L} be the class label set. A pattern is represented by a pair (x, y), where x is the feature vector representing an object and y ∈ Y is the label of the class with which x is associated. A classification procedure aims at attributing (with high accuracy) to any unlabeled object the corresponding label (where the label attached to an object represents the class which the object belongs to), by learning from the set of objects whose class is known. The training set is given by S_tr = {(x_n, y_n)}_{n=1}^{N}, where x_n ∈ X, y_n ∈ Y (for n = 1, . . ., N) and N is the number of patterns belonging to S_tr. Finally, let N_l be the cardinality of the training set associated with the l-th class (for l = 1, 2, . . ., L), such that \sum_{l=1}^{L} N_l = N.

We now introduce the well-known Nearest Mean Classifier (NMC) [24], which is a particular kind of minimum distance classifier widely used in pattern recognition. The strategy consists in computing the distances between an object x (to classify) and other objects chosen as prototypes of each class (called centroids). Finally, the classifier associates to x the label of the closest centroid. So, we can summarize the NMC algorithm as follows:

1. Computation of the centroid (i.e., the sample mean [26]) associated with each class, whose corresponding feature vector is given by

\mu_l = \frac{1}{N_l} \sum_{n : y_n = l} x_n,    (1)

where l is the label of the class;

2. Classification of the object x, provided by

\mathrm{argmin}_{l \in \mathcal{Y}} \, d_E(x, \mu_l),    (2)

where d_E is the standard Euclidean distance. In this framework, argmin plays the role of the classifier, i.e., a function that associates to any unlabeled object the corresponding label.

Generally, it can happen that a pattern of a given class is closer to the centroid of another class. This can depend, for instance, on the specific data distribution. Consequently, if the algorithm were applied to this pattern, it would fail. Hence, for an arbitrary object x which belongs to an a priori unknown class, the output of the classification method has the following four possibilities [27]: (i) True Positive (TP): a pattern belonging to the l-th class and correctly classified as l; (ii) True Negative (TN): a pattern belonging to a class different from l, and correctly classified as not l; (iii) False Positive (FP): a pattern belonging to a class different from l, and incorrectly classified as l; (iv) False Negative (FN): a pattern belonging to the l-th class, and incorrectly classified as not l.
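As a concrete illustration of the two steps above, here is a minimal NMC sketch in Python with NumPy; the toy Gaussian data at the end are invented purely for the usage example.

    import numpy as np

    def nmc_fit(X_train, y_train):
        """Step 1: compute the centroid (sample mean) of each class."""
        return {l: X_train[y_train == l].mean(axis=0)
                for l in np.unique(y_train)}

    def nmc_predict(X_test, centroids):
        """Step 2: assign each object the label of the nearest centroid
        under the Euclidean distance (the argmin of Equation (2))."""
        labels = list(centroids)
        M = np.stack([centroids[l] for l in labels])                  # (L, d)
        dists = np.linalg.norm(X_test[:, None, :] - M[None], axis=2)  # (n, L)
        return np.array([labels[i] for i in dists.argmin(axis=1)])

    # Toy usage: two Gaussian classes in R^2.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([1] * 50 + [2] * 50)
    print((nmc_predict(X, nmc_fit(X, y)) == y).mean())  # training accuracy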
Generally, a given classification method is evaluated via a standard procedure which consists of dividing the original labeled dataset S of size 𝒩 into a set S_tr of N training patterns and a set S_ts of (𝒩 − N) test patterns, i.e., S = S_tr ∪ S_ts, where the test set is S_ts = {(x_n, y_n)}_{n=N+1}^{𝒩} [24]. As a consequence, we can examine the performance of the classification algorithm by considering statistical measures associated with each class l that depend on the quantities listed above, such as the true positive rate TPR = TP/(TP + FN) and the true negative rate TNR = TN/(TN + FP). Further, other standard statistical coefficients [27] used to establish the reliability of a classification algorithm are the classification error, the precision P = TP/(TP + FP), and Cohen's Kappa K = (Pr(a) − Pr(e))/(1 − Pr(e)), where Pr(a) = (TP + TN)/(𝒩 − N) and Pr(e) = [(TP + FP)(TP + FN) + (FP + TN)(TN + FN)]/(𝒩 − N)^2. The classification error represents the percentage of misclassified patterns, the precision is a measure of the statistical variability of the considered model, and Cohen's Kappa represents the degree of reliability and accuracy of a statistical classification; it can assume values ranging from −1 to +1. In particular, if K = +1 (K = −1), we correctly (incorrectly) classify all the test set patterns. Let us note that these statistical coefficients have to be computed for each class. Then, the final value of each statistical coefficient of the classification algorithm is the weighted sum of the statistical coefficients of the individual classes.

Mapping Real Patterns into Quantum States

As already discussed, the quantum mechanical formalism seems to be promising in non-standard scenarios, in our case for solving pattern classification tasks. To this end, in order to provide our quantum classification model, the first ingredient we have to introduce is an appropriate encoding of real patterns into quantum states. Quoting Schuld et al. [9], "in order to use the strengths of quantum mechanics without being confined by classical ideas of data encoding, finding 'genuinely quantum' ways of representing and extracting information could become vital for the future of quantum machine learning."

Generally, given a d-dimensional feature vector, there exist different ways to encode it into a density operator [9]. As already mentioned, finding the "best" encoding of real vectors into quantum states (i.e., one outperforming all the possible encodings for any dataset) is still an open and intricate problem. This fact is not so surprising because, on the other hand, in pattern recognition it is not possible to establish an absolute superiority of a given classification method with respect to the others, and the reason is that each dataset has unique and specific characteristics (this point will be deepened in the numerical section).

In [21], the proposed encoding was based on the use of the stereographic projection [28]. In particular, it uniquely maps a point r = (r_1, r_2, r_3) on the surface of the radius-one sphere S^2 (except for the north pole) into a point x = [x^(1), x^(2)] in R^2, where the image plane passes through the center of the sphere. The inverse of the stereographic projection is

SP^{-1}: [x^(1), x^(2)] \mapsto \frac{1}{\|x\|^2 + 1} [2x^(1), 2x^(2), \|x\|^2 - 1],    (4)

where \|x\|^2 = (x^(1))^2 + (x^(2))^2. We then consider r_1, r_2, r_3 as the Pauli components of the density operator ρ_x ∈ Ω_2 (where the space Ω_d of density operators for d-dimensional systems consists of positive semidefinite matrices with unitary trace) associated with the pattern x = [x^(1), x^(2)], defined as

\rho_x = \frac{1}{\|x\|^2 + 1} \begin{pmatrix} \|x\|^2 & x^{(1)} - i x^{(2)} \\ x^{(1)} + i x^{(2)} & 1 \end{pmatrix}.    (5)
The proposed encoding offers the advantage of visualizing a two-dimensional vector on the Bloch sphere [21]. In the same work, we also introduced a generalization of our encoding to the d-dimensional case, which allows one to represent d-dimensional vectors as points on the hypersphere S^d by writing a density operator ρ as a linear combination of the d-dimensional identity and the d^2 − 1 (d × d)-matrices {σ_i} (i.e., the generalized Pauli matrices [29,30]). However, even if it is possible to map points on the d-hypersphere into d-feature patterns, they are not density operators as a rule, and the one-to-one correspondence between them and density matrices is guaranteed only on particular regions [29,32,33].

An alternative encoding of a d-feature vector x into a density operator was proposed in [22]. It is obtained (i) by mapping x ∈ R^d into a (d + 1)-dimensional vector x' ∈ R^{d+1} according to the generalized version of Equation (4), i.e.,

SP^{-1}: [x^(1), . . ., x^(d)] \mapsto \frac{1}{\|x\|^2 + 1} [2x^(1), . . ., 2x^(d), \|x\|^2 - 1],

and (ii) by considering the projector ρ_x = x' · (x')^T. Here, a different kind of quantum minimum distance classifier is considered, based again on a new encoding, and we show that it exhibits interesting improvements, also by exploiting the non-invariance under feature rescaling. In accordance with [9,15], when a real vector is encoded into a quantum state, it is important, in order to avoid a loss of information, that the quantum state keeps information about the original real vector norm. In light of this fact, we introduce the following alternative encoding.

1. We map the vector x ∈ R^d into a vector x' ∈ R^{d+1}, whose first d features are the components of the vector x and whose (d + 1)-th feature is the norm of x. Formally: x' = [x^(1), . . ., x^(d), \|x\|].    (9)

2. We obtain the vector x'' by dividing the first d components of the vector x' by \|x\|: x'' = [x^(1)/\|x\|, . . ., x^(d)/\|x\|, \|x\|].    (10)

3. We compute the norm of the vector x'', i.e., \|x''\| = \sqrt{\|x\|^2 + 1}, and we map the vector x'' into the normalized vector x''' = x''/\|x''\|.    (11)

Now, we provide the following definition.

Definition 1 (Density Pattern). Let x = [x^(1), . . ., x^(d)] be a d-dimensional vector and (x, y) the corresponding pattern. Then, the density pattern associated with (x, y) is represented by the pair (ρ_x, y), where the matrix ρ_x, corresponding to the feature vector x, has the form ρ_x = x''' · (x''')^T, with the vector x''' given by Equation (11), and y is the label of the original pattern.

Hence, this encoding maps real d-dimensional vectors x into (d + 1)-dimensional pure states ρ_x. In this way, we obtain an encoding that takes into account the information about the norm of the initial real vector and, at the same time, allows one to easily encode arbitrary real d-dimensional vectors.

Clearly, there exist different ways to encode patterns into quantum states while maintaining some information about the vector norm. However, the one we present was inspired by simple considerations concerning the two-dimensional encoding on the Bloch sphere, naturally extended to the d-dimensional case. To this end, in [21] it was analytically proved that the encoding of x = [x^(1), x^(2)] into the density operator ρ_x given by Equation (5) can be exactly recovered if we take as the starting point the vector [x^(1) + ix^(2), \|x\|] and apply the set of transformations given by Equations (9)-(11).
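A minimal sketch of the encoding of Definition 1 in Python, following steps (9)-(11) as reconstructed above (the function name is ours, and a nonzero input vector is assumed):

    import numpy as np

    def density_pattern(x):
        """Encode a nonzero real d-vector into a (d+1)x(d+1) pure state."""
        x = np.asarray(x, dtype=float)
        norm = np.linalg.norm(x)          # assumes x != 0
        x2 = np.append(x / norm, norm)    # steps (9)-(10): rescale, append norm
        x3 = x2 / np.sqrt(norm ** 2 + 1)  # step (11): ||x2|| = sqrt(||x||^2 + 1)
        return np.outer(x3, x3)           # rank-one projector with trace 1

    rho = density_pattern([3.0, 4.0])
    print(np.trace(rho), np.allclose(rho, rho @ rho))  # 1.0, True (pure state)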
Density Pattern Classification

In this section, a quantum counterpart of the NMC is provided, named the Quantum Nearest Mean Classifier (QNMC). It can be seen as a particular kind of minimum distance classifier between quantum objects (i.e., density patterns). First of all, the use of this new quantum formalism could provide potential advantages in reducing the computational complexity of the problem if we consider a possible implementation of our framework on a quantum computer (as already explained in the Introduction). Secondly, it permits a full comparison of the NMC and QNMC performance using a classical computer only. Regarding the second point, we reiterate that our aim is not to assert that the QNMC outperforms all other supervised classical procedures, but to prove (as we will show by numerical simulations) that it performs better than its "natural" classical counterpart (i.e., the NMC).

In order to provide a quantum counterpart of the NMC, we need: (i) an encoding from real patterns to quantum objects (defined above); (ii) a quantum version of the classical centroid (i.e., a sort of quantum class prototype), which will be named the quantum centroid; and (iii) an appropriate quantum distance between density patterns, corresponding to the Euclidean metric for the NMC. In this quantum framework, the quantum version S^q of the dataset S is given by

S^q = {(ρ_{x_n}, y_n)}_{n=1}^{𝒩},

where (ρ_{x_n}, y_n) is the density pattern associated with the pattern (x_n, y_n). Consequently, S^q_tr and S^q_ts represent the quantum versions of the training and test set, respectively, i.e., the sets of all the density patterns corresponding to the patterns in S_tr and S_ts. Now, we can naturally define the quantum version of the classical centroid μ_l given in Equation (1).

Definition 2 (Quantum Centroid). Let S^q be a labeled dataset of 𝒩 density patterns such that S^q_tr ⊆ S^q is a training set composed of N density patterns. Further, let Y = {1, 2, . . ., L} be the class label set. The quantum centroid of the l-th class is given by

ρ_l = \frac{1}{N_l} \sum_{n : y_n = l} ρ_{x_n},

where N_l is the number of density patterns of the l-th class in S^q_tr, such that \sum_{l=1}^{L} N_l = N.

Let us stress that the quantum centroids are generally mixed states and that we cannot obtain them by mapping the classical centroids μ_l; i.e., in general ρ_l ≠ ρ_{μ_l} for l ∈ {1, . . ., L}. Therefore, the quantum centroid has a completely new meaning, because it is no longer a pure state and does not have any classical counterpart. This is the main reason for the deep difference between the two classifiers. In this regard, it is easy to verify [21] that, unlike the classical case, the expression of the quantum centroid is sensitive to the dataset dispersion. Now, we recall the definition of the trace distance between quantum states (see, e.g., [34]), which can be considered as a suitable metric between density patterns.

Definition 3 (Trace Distance).
Let ρ_1 and ρ_2 be two arbitrary density operators belonging to the same d-dimensional Hilbert space. The trace distance between ρ_1 and ρ_2 is

d_T(ρ_1, ρ_2) = \frac{1}{2} \mathrm{Tr}\,|ρ_1 - ρ_2|,

where |A| = \sqrt{A^\dagger A}.

Clearly d_T, as a true metric for density operators, satisfies the standard properties of positivity, symmetry and the triangle inequality. The use of the trace distance in our quantum framework is naturally motivated by the fact that it is the simplest possible choice among the metrics in the density matrix space [35]. Consequently, it can be seen as the "authentic" quantum counterpart of the Euclidean distance, which represents the simplest choice in the starting space. However, the trace distance exhibits some limitations and downsides (in particular, it is monotone but not Riemannian [36]). On the other hand, in some pattern classification problems the Euclidean distance is not enough to fully capture, for instance, the dataset distribution. For this reason, other kinds of metrics in the classical space are adopted to avoid this limitation [24]. To this end, as a future development of the present work, it could be interesting to compare different distances in both the quantum and the classical framework, able to treat more complex situations (we will deepen this point in the conclusions).

We are now ready to introduce the QNMC procedure, consisting, like the classical one, of the following steps:

• constructing the sets S^q_tr, S^q_ts by mapping each pattern of the sets S_tr, S_ts via the encoding introduced in Definition 1;
• calculating the quantum centroids ρ_l (for all l ∈ {1, . . ., L}) using the quantum training set S^q_tr, in accordance with Definition 2;
• classifying a density pattern ρ_x ∈ S^q_ts by means of the optimization problem

\mathrm{argmin}_{l \in \{1, \ldots, L\}} \, d_T(ρ_x, ρ_l),

where d_T is the trace distance introduced in Definition 3.

Experimental Results

This section is devoted to a comparison between the NMC and QNMC performances in terms of the statistical coefficients introduced in Section 2. We use both classifiers to analyze twenty-seven datasets, divided into two categories: artificial datasets (Gaussian (I), Gaussian (II), Gaussian (III), Moon, Banana) and real-world datasets, extracted from the UCI (UC Irvine Machine Learning Repository) [37] and KEEL (Knowledge Extraction based on Evolutionary Learning) [38] repositories. Further, among them we can also find imbalanced datasets, whose main characteristic is that the number of patterns in a given class is significantly lower than in the other classes. Let us note that, in real situations, we usually deal with data whose distribution is unknown; hence the most interesting case is the one in which we use real-world datasets. However, the use of artificial datasets following a known distribution, and in particular Gaussian distributions with specific parameters, can help to extract precious information.

Comparison between QNMC and NMC

In Table 1, we summarize the characteristics of the datasets involved in our experiments. In particular, for each dataset we list the total number of patterns, the number in each class and the number of features. Let us note that, although we mostly confine our investigation to two-class datasets, our model can easily be extended to multi-class problems (as we show for the three-class datasets Balance, Gaussian (III), Hayes-Roth, Iris).
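Before turning to the results, the three QNMC steps listed above can be sketched in Python, reusing the density_pattern helper from the earlier sketch; the eigenvalue route to the trace norm relies on the difference of density matrices being Hermitian.

    import numpy as np

    def trace_distance(rho1, rho2):
        """d_T = (1/2) Tr|rho1 - rho2|, i.e., half the sum of the absolute
        eigenvalues of the Hermitian difference (Definition 3)."""
        return 0.5 * np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum()

    def quantum_centroids(rhos, y):
        """Definition 2: average the density patterns within each class."""
        y = np.asarray(y)
        return {l: np.mean([r for r, yl in zip(rhos, y) if yl == l], axis=0)
                for l in np.unique(y)}

    def qnmc_predict(rhos_test, centroids):
        """Assign each density pattern the label of the closest quantum
        centroid in trace distance."""
        labels = list(centroids)
        return [labels[int(np.argmin([trace_distance(r, centroids[l])
                                      for l in labels]))] for r in rhos_test]

    # Usage with the density_pattern helper sketched earlier:
    # rhos_tr = [density_pattern(x) for x in X_train]
    # preds = qnmc_predict([density_pattern(x) for x in X_test],
    #                      quantum_centroids(rhos_tr, y_train))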
In order to make our results statistically significant, we apply the standard procedure of randomly splitting each dataset into two parts: the training set (80% of the original dataset) and the test set (20% of the original dataset). We then perform 10 runs for each dataset, with a random partition at each experiment. Let us stress that the results appear robust with respect to different partitions of the original dataset. Further, we consider only 10 runs because, for a greater number, the standard deviation of the mean classification error remains substantially the same. In Table 2, we report the QNMC and NMC performance for each dataset, evaluated in terms of the mean value and standard deviation (computed over the ten runs) of the statistical coefficients discussed in the previous section. For the sake of simplicity, we omit the values of FPR and FNR because they can easily be obtained from the TPR and TNR values (i.e., FPR = 1 − TNR, FNR = 1 − TPR).

Gaussian (I) is a balanced dataset (i.e., the classes have the same number of patterns); the patterns have the same dispersion in both classes, and only some features are correlated [40]. Gaussian (II) is an unbalanced dataset (i.e., the classes have very different numbers of patterns); the patterns do not exhibit the same dispersion in both classes, and the features are not correlated. Gaussian (III) is composed of three classes and is an unbalanced dataset with different pattern dispersion in all the classes, where all the features are correlated.

For this kind of Gaussian data, we remark that the NMC does not offer the best performance in terms of pattern classification [24] because of the particular characteristics of the class distributions. Indeed, the NMC does not take the pattern dispersion into consideration. Conversely, looking at Table 2, the improvements of the QNMC seem to exhibit some kind of sensitivity of the classifier to the data dispersion. A detailed description of this problem will be addressed in a future work.

Further, we can note that the QNMC performance is better also for imbalanced datasets (the most significant cases are Balance, Ilpd, Segment, Page, Gaussian (III)), which are usually difficult to deal with using standard classification models. In this regard, we can note that the QNMC exhibits a classification error much lower than the NMC, with a difference of up to about 12%. Another interesting and surprising result concerns the Iris0 dataset, which represents the imbalanced version of the Iris dataset: as we can observe from Table 2, our quantum classifier is able to perfectly classify all the test set patterns, unlike the NMC.

We remark that, even if it is possible to establish whether a classifier is "good" or "bad" for a given dataset by evaluating some a priori data characteristics, it is generally not possible to establish the absolute superiority of a given classifier for any dataset, in accordance with the No Free Lunch Theorem [24]. In any case, the QNMC seems to be particularly convenient when the data distribution is difficult to treat with the standard NMC.

Non-Invariance Under Rescaling

The final experimental results that we present in this paper concern a significant difference between the NMC and the QNMC. Let us suppose that all the components of the feature vectors x_n (for all n = 1, . .
., 𝒩) belonging to the original dataset S are multiplied by the same parameter γ ∈ R, i.e., x_n → γx_n. Then, the whole dataset is subject to an increasing dispersion (for |γ| > 1) or a decreasing dispersion (for |γ| < 1), and the classical centroids change according to μ_l → γμ_l (for all l = 1, . . ., L). Therefore, pattern classification for the rescaled problem consists of solving

\mathrm{argmin}_{l=1,\ldots,L} \, d_E(γx_n, γμ_l) = \mathrm{argmin}_{l=1,\ldots,L} \, d_E(x_n, μ_l),  ∀n = N + 1, . . ., 𝒩,

since d_E(γx_n, γμ_l) = |γ| d_E(x_n, μ_l).

For any value of the parameter γ, it can be proved [22] that, while the NMC is invariant under rescaling, for the QNMC this invariance fails. Interestingly enough, it is possible to consider the failure of the invariance under rescaling as a resource for the classification problem. In other words, through a suitable choice of the rescaling factor it is possible, in principle, to obtain a decrease of the classification error. To this end, we have studied the variation of the QNMC performance (in particular, of the classification error) in terms of the free parameter γ, and in Figure 1 the results for the datasets Appendicitis, Monk and Moon are shown. In the figure, each point represents the mean value (with the corresponding standard deviation represented by the vertical bar) over ten runs of the experiments. Finally, we have considered, as an example, three different ranges of the rescaling parameter γ for each dataset.

We can observe that the resulting classification performance strongly depends on the γ range. Indeed, in all three cases we consider, we obtain completely different classification results for different choices of the γ values. As we can see, in some situations we observe an improvement of the QNMC performance with respect to the unrescaled problem (subfigures (b), (c), (e), (h)); in other cases we get worse classification results (subfigures (a), (d), (g), (i)); and sometimes the rescaling parameter does not produce any variation of the classification error (subfigure (f)). In conclusion, the range of the parameter γ for which the QNMC performance improves is generally not unique and strongly depends on the considered dataset. As a consequence, we do not generally get an improvement in the classification process for arbitrary γ ranges. On the contrary, there exist some intervals of the parameter γ where the QNMC classification performance is worse than in the case without rescaling. Each dataset thus has specific and unique characteristics (in complete accord with the No Free Lunch Theorem), and the incidence of the non-invariance under rescaling on the decrease of the error should, in general, be determined by empirical evidence.

Conclusions and Future Developments

In this work, we have introduced a quantum minimum distance classifier, named the Quantum Nearest Mean Classifier, which can be seen as a quantum version of the well-known Nearest Mean Classifier. In particular, it is obtained by defining a suitable encoding of real patterns, i.e., density patterns, and by recovering the trace distance between density operators.
A new encoding of real patterns into quantum objects has been proposed, suggested by recent debates on quantum machine learning according to which, in order to avoid the loss of information caused by encoding a real vector into a quantum state, we need to consider the normalized vector while simultaneously keeping some information about its norm. Secondly, we have defined the quantum centroid, i.e., the pattern chosen as the prototype of each class, which is not invariant under uniform rescaling of the original dataset (unlike the NMC) and seems to exhibit a kind of sensitivity to the data dispersion.

In the experiments, both classifiers have been compared in terms of significant statistical coefficients. In particular, we have considered 27 datasets of different nature (real-world and artificial). Further, the non-invariance under rescaling of the QNMC suggested studying the variation of the classification error in terms of a free parameter γ, whose variation produces a modification of the data dispersion and, consequently, of the classifier performance. In particular, we have shown that, in most cases, the QNMC exhibits a significant decrease of the classification error (and improvements in the other statistical coefficients) with respect to the NMC and that, in some cases, the non-invariance under rescaling can have a positive impact on the classification process.

Let us remark that, even if there is no absolute superiority of the QNMC with respect to the NMC, the proposed technique leads to relevant improvements in terms of pattern classification when some a priori knowledge of the data distribution is available.

In light of these considerations, further developments of the present work will involve the study of: (i) the optimal encoding (mapping patterns to quantum states) which ensures better classification accuracy (at least for a finite set of data); (ii) a general method to find the suitable rescaling parameter range to apply to a given dataset in order to further optimize the classification process; and (iii) the data distributions for which our quantum classifier outperforms the NMC. Further, as discussed in Section 4, in some situations the standard NMC is not very useful as a classification model, especially when the dataset distribution is complex to deal with. In pattern recognition, in order to address such problems, other kinds of classification techniques are used instead of the NMC, for instance the well-known Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) classifiers, where different distances between patterns are considered, taking the data distribution into account more precisely [24]. To this end, an interesting development of the present work could concern the comparison between the LDA or QDA models and the QNMC, based on the computation of more suitable and convenient distances between density patterns [35].

Figure 1. Comparison between NMC (Nearest Mean Classifier) and QNMC (Quantum Nearest Mean Classifier) performance in terms of the classification error for the datasets (a-c) Appendicitis, (d-f) Monk, (g-i) Moon. In all the subfigures, the simple dashed line represents the QNMC classification error without rescaling, the dashed line with points represents the NMC classification error (which does not depend on the rescaling parameter), and points with the related error bars (red for Appendicitis, blue for Monk and green for Moon) represent the QNMC classification error for increasing values of the parameter γ.

Table 1.
Characteristics of the datasets used in our experiments. The number of patterns in each class is shown in brackets.
The Genomic Basis of Color Pattern Polymorphism in the Harlequin Ladybird

Summary

Many animal species comprise discrete phenotypic forms. A common example in natural populations of insects is the occurrence of different color patterns, which has motivated a rich body of ecological and genetic research [1, 2, 3, 4, 5, 6]. The occurrence of dark, i.e., melanic, forms displaying discrete color patterns is found across multiple taxa, but the underlying genomic basis remains poorly characterized. In numerous ladybird species (Coccinellidae), the spatial arrangement of black and red patches on the adult elytra varies wildly within species, forming strikingly different complex color patterns [7, 8]. In the harlequin ladybird, Harmonia axyridis, more than 200 distinct color forms have been described, which classic genetic studies suggest result from allelic variation at a single, unknown, locus [9, 10]. Here, we combined whole-genome sequencing, population-based genome-wide association studies, gene expression, and functional analyses to establish that the transcription factor Pannier controls melanic pattern polymorphism in H. axyridis. We show that pannier is necessary for the formation of melanic elements on the elytra. Allelic variation in pannier leads to protein expression in distinct domains on the elytra and thus determines the distinct color patterns in H. axyridis. Recombination between pannier alleles may be reduced by a highly divergent sequence of ∼170 kb in the cis-regulatory regions of pannier, with a 50 kb inversion between color forms. This most likely helps maintain the distinct alleles found in natural populations. Thus, we propose that highly variable discrete color forms can arise in natural populations through cis-regulatory allelic variation of a single gene.

In Brief

More than 200 distinct color forms have been described in natural populations of the harlequin ladybird, Harmonia axyridis. Gautier et al. show that this variation is controlled by the transcription factor Pannier. Pannier is necessary to produce black pigment, and its expression pattern prefigures the coloration pattern in each color form.
RESULTS AND DISCUSSION
Ladybird species have long been studied by geneticists and evolutionary biologists to investigate the origin and maintenance of discrete color pattern forms in natural populations. In particular, the harlequin ladybird, Harmonia axyridis, is an emblematic species of elytral color pattern polymorphism, with more than 200 color pattern forms described from different localities [11, 12]. However, four forms dominate natural populations with high frequencies (Figure 1A) [14]: three distinct melanic forms harboring different patterns (from darkest to lightest, form [f.] conspicua, f. spectabilis, and f. axyridis, hereafter called Black-2Spots, Black-4Spots, and Black-nSpots, respectively) and a non-melanic form (f. succinea, called Red-nSpots). The striking array of color patterns documented in H. axyridis in the wild has been attributed to a combination of allelic diversity, interactions between allelic forms, and plastic responses to environmental factors [11, 14]. Genetic crosses have demonstrated that the majority of H. axyridis melanic forms result from variation of multiple alleles segregating at a single, uncharacterized, autosomal locus [9, 10], hereafter referred to as the color pattern locus. To identify this color pattern locus and the mechanisms underlying discrete color pattern variation, we used a population genomics approach, taking advantage of the co-occurrence of multiple color pattern forms in natural populations. To that end, we first performed a de novo genome assembly of the H. axyridis Red-nSpots form (HaxR) using long reads produced by a MinION sequencer (Oxford Nanopore) (Table S1). Then, to fine-map the color pattern locus on this assembly, we sequenced, on a HiSeq 2500 (Illumina), DNA from 14 pools of individuals that were representative of the world-wide genetic diversity (i.e., eight different geographic origins) and the four main color pattern forms of H. axyridis. Our aim was to characterize genetic variation associated with phenotypic differences across pool samples, using the proportion of individuals of a given color form in each pool as a covariate. Table S2 shows how individuals were split both by color form and by geographic location for most pools, in order to maximize the proportion of alleles of a given color form in the pools (n = 40 to n = 100 individuals per pool). Because the trait is monogenic (and autosomal), the highest mapping power would have been achieved if all of the color pattern alleles were co-dominant (i.e., pool frequencies for each allele would then be directly derived from the color pattern of the pooled individuals). However, previous work demonstrated that the observed color pattern of an individual imperfectly predicts its genotype at the underlying color pattern locus, owing to the hierarchical dominance between all of the color pattern alleles, with Black-2Spots > Black-4Spots > Black-nSpots > Red-nSpots [9, 11, 14].
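To make the dominance constraint on pool composition concrete, the guaranteed allele-frequency bounds that this hierarchy implies (and that are used later in the STAR Methods) can be sketched in a few lines. This is an illustrative sketch with our own names, not code from the study.

```python
# Dominance hierarchy of the color pattern alleles, from most dominant
# to most recessive (Black-2Spots > Black-4Spots > Black-nSpots > Red-nSpots).
DOMINANCE = ["Black-2Spots", "Black-4Spots", "Black-nSpots", "Red-nSpots"]

def min_allele_frequency(form: str) -> float:
    """Lower bound on the frequency of a form's allele in a pool composed
    only of individuals displaying that form's phenotype. Any such diploid
    individual carries at least one copy of the corresponding allele; the
    second allele may be a more recessive one hidden by dominance. Only the
    fully recessive Red-nSpots phenotype guarantees homozygosity."""
    if form not in DOMINANCE:
        raise ValueError(f"unknown form: {form}")
    return 1.0 if form == DOMINANCE[-1] else 0.5

# A pool of 100% Red-nSpots individuals has a Red-nSpots allele frequency
# of exactly 1; a pool of 100% Black-4Spots individuals only guarantees a
# Black-4Spots allele frequency of at least 0.5.
assert min_allele_frequency("Red-nSpots") == 1.0
assert min_allele_frequency("Black-4Spots") == 0.5
```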
Since the Red-nSpots allele is the most recessive one (with seven pools including 100% Red-nSpots individuals; Table S2), we performed a genome-wide association study, following Gautier [15], using the proportion of Red-nSpots individuals in each sequenced pool as a covariate to achieve the highest mapping power. Among the 18,425,210 SNPs we called on the 457 autosomal contigs (totaling 377.5 Mb), we found 710 SNPs strongly associated with the proportion of the Red-nSpots form (Bayes factor > 30 decibans [db]), the vast majority (86%) of which are located within a single 1.3 Mb contig, utg676 (Figures 1B and 1C). The 56 SNPs with the strongest association signals (Bayes factor > 100 db) delineate an ∼170 kb region on HaxR, representing the strongest candidate region for the color pattern locus. Importantly, additional genome-wide association studies using the proportions of Black-4Spots, Black-2Spots, or Black-nSpots individuals in the pools as covariates pointed exclusively to the same region (Figure S1), although these analyses were less powerful. The candidate color pattern locus extends from the first coding exon of the ortholog of the Drosophila gene pannier, including its first intron and first non-coding exon, to the end of the neighboring 5′ gene, the GATAe ortholog (Figure 1D). To test a possible role of pannier or GATAe in adult color pattern formation, we used RNA interference (RNAi) [16]. Because adult pigmentation patterns are specified during pupal development in insects, we injected larvae of the different H. axyridis forms, just before pupation, with double-stranded RNA (dsRNA) targeting the coding sequences of pannier or GATAe. We also targeted eGFP as a negative control.
[Figure 1 caption fragment: The relative ordering of these contigs was derived from the de novo sequencing of the Black-4Spots allele extended region (see STAR Methods). (D) The gene content at the identified color pattern locus. Fifty-six SNPs with the strongest association signal delimit a candidate color pattern locus region of ∼170 kb (yellow boxes) that extends from the first coding exon of pannier (pnr) to the 5′ upstream gene GATAe. Red and blue lines show conserved sequence blocks in forward or reverse direction, respectively, detected by [13]. The first intron of pannier contains the footprint of an ∼50 kb inversion (shaded boxes). Two alternative splice variants of pnr are produced (named 1 and 2). See also Figures S1, S3, and S4.]
Targeting of GATAe or eGFP had no effect on pigmentation, in either the Red-nSpots or the Black-4Spots form, suggesting that GATAe does not play any role in elytral pigmentation (Figures S2A and S2B). In contrast, knockdown of pannier dramatically reduced the formation of black pigment in all the different forms, resulting in adults with almost homogeneous red elytra (Figure 2). Dark pigment formation is strongly reduced not only in the elytra, but also in the head and the rest of the body. Use of a different, non-overlapping dsRNA fragment of pannier (Figure S3) produced similar results, ruling out RNAi off-target effects (Figures S2C and S2D). These results show that pannier is necessary for the formation of black pigment in H. axyridis adults. Furthermore, combined with our genome-wide association study, our data indicate that pannier is the main gene responsible for color pattern polymorphism in H. axyridis and that different pannier alleles determine the color pattern in the different forms.
To understand how pannier contributes to the formation of different color patterns, we compared its coding sequences between the Red-nSpots and Black-4Spots forms. We did not find any non-synonymous mutation, thus ruling out changes in Pannier protein composition (Figure S3A). We next hypothesized that pannier might have evolved divergent expression patterns during the development of the elytra, resulting in different color pattern forms. We therefore compared pannier expression levels by qRT-PCR in late-developing pupal elytra between the Red-nSpots and Black-4Spots forms. We found that pannier is expressed at a higher level in the elytra of the Black-4Spots form compared to the Red-nSpots form (Figure 3A). In order to determine how this difference was reflected in the spatial expression pattern of Pannier, we compared the relative pannier expression levels in different parts of a Black-4Spots elytron. We found that pannier is expressed at a higher level in a presumptive black area, in the middle of the elytron, compared to the presumptive red areas (Figure 3B). In order to map these differences onto spatial expression patterns, we stained late pupal elytra with an antibody raised against H. axyridis Pannier. We found that the spatial distribution of Pannier on the elytra differs between color pattern forms (Figures 3C-3F). Strikingly, in all forms, areas with the strongest Pannier expression levels prefigure the adult elytral pattern of melanic elements. This tight spatial correlation, coupled with our genomic association study and the essential role of pannier in governing melanic patterns, provides strong evidence that cis-regulatory changes at the pannier locus drive divergent pannier expression patterns and, in turn, the polymorphic melanic patterns of H. axyridis. In addition to allelic variation, color pattern diversity in H. axyridis is shaped by the dominance relationships among color form alleles [11]. Indeed, similarly to other species (e.g., [17]), heterozygous individuals resulting from the cross of distinct homozygous H. axyridis forms produce black pigmentation in any part of the elytra that is black in either parental form (Figure 4) [11]. Our results explain this phenomenon, known as mosaic dominance, at the molecular level. Since elytral Pannier expression patterns mimic adult black pigmentation patterns, and since each pannier allele carries its own cis-regulatory determinants to drive a specific expression pattern, the expression pattern of pannier in heterozygotes reflects the sum of the individual form patterns (Figure 4). In other words, the mosaic expression of pannier during development, driven by the heterozygous alleles, produces a mosaic pigmentation pattern in adults. This phenomenon compounds the effect of pannier allelic variation, increasing the complexity of color pattern polymorphism in H. axyridis. We suggest that this explanation can account for other cases of mosaic dominance, providing a simple and general mechanism to expand the pigmentation pattern repertoire of any species. Finally, in order to precisely compare the sequences of the pannier alleles between the Red-nSpots and another form, we sequenced the Black-4Spots form de novo. We chose the Black-4Spots form because it seemed the most divergent when compared to the Red-nSpots form, based on genome-wide association results (Figure S1) and careful inspection of the read coverage of the pools representative of the color forms (Figure S4).
We generated the Black-4Spots draft assembly (HaxB4) from Illumina sequencing reads (Table S1) and supplemented it by targeted bacterial artificial chromosome (BAC) sequencing to derive a high-quality 2.87 Mb sequence (%N = 0.96) spanning pannier and adjacent genomic regions. Strikingly, we found that the Red-nSpots and Black-4Spots sequences of the 5′ non-coding DNA and the first intron of pannier align poorly, in contrast to adjacent regions (Figures S3B and S3C). Furthermore, we detected the footprint of an ∼50-kb-long inversion within the first intron of pannier (Figure 1D, Figure S3C). This comparison indicates that the sequences of the non-coding DNA of pannier have diverged extensively between these forms. Altogether, our results conclusively show that pannier plays a key role in the specification and the diversification of the main color pattern forms in H. axyridis. pannier has never been reported to play a role in pigmentation in insects. It has therefore been co-opted for this function in the lineage leading to H. axyridis, presumably through the evolution of new regulatory connections with downstream effector genes directly involved in black pigment production [18]. This contrasts with other insect groups, including butterflies and fruit flies, in which different regulatory genes have been co-opted to generate wing color patterns [1, 2, 18, 19]. Furthermore, we show that polymorphic color patterns in H. axyridis arise from differential regulation of pannier spatial expression. The divergent non-coding sequences we have identified in the H. axyridis color pattern forms are quite large (∼170 kb) and may host multiple discrete cis-regulatory elements. We propose that these cis-regulatory elements produce different pannier expression patterns and, ultimately, discrete melanic patterns, by interpreting differently a trans-regulatory landscape that is common to all H. axyridis color pattern forms. This model is reminiscent of the mechanisms underlying thoracic bristle or wing pigmentation pattern diversity in Drosophila species [18, 20, 21] and those that have been proposed to explain wing color pattern evolution in butterflies [1]. We have demonstrated that the cis-regulatory region of pannier is highly divergent between the Red-nSpots and Black-4Spots alleles and that it includes an ∼50 kb inversion. Furthermore, our data provide evidence of large-scale sequence variation among all four alleles of the main color pattern forms in natural populations (Figure S4). These results are in agreement with a recent, independent study [22]. We hypothesize that the numerous, rare color pattern forms that have been described in H. axyridis [11] are also determined by pannier cis-regulatory variation and that they might result from rare mutational events, including rare recombination between the alleles of the main forms. The striking sequence divergence among the pannier alleles of the main color forms brings into question their evolutionary origin. Possibilities include ancient mutational events or even cross-species introgression events [10, 23]. Thorough characterization of the pannier genomic region from different color pattern forms, both within H. axyridis and across coccinellid species (especially those harboring color pattern polymorphisms [7]), will illuminate the evolutionary origin of the genomic determinants of color pattern forms in H. axyridis and other ladybird species.
Finally, if sequence divergence helps preserve distinct pannier alleles by reducing recombination among them, selective mechanisms are also suspected to maintain different color forms in natural populations of H. axyridis and to affect their frequencies. Both local adaptation to climatic factors and seasonal variations in temperature have been suggested to affect color form proportions in space and time, possibly mediated by mate choice [10, 12, 24]. The identification of the genomic basis of color pattern polymorphism will help to better characterize the evolutionary mechanisms that shape the striking color pattern diversity in natural populations of H. axyridis and to reveal potential pleiotropic effects of pannier alleles on traits involved in survival and reproduction [24, 25].

STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:

AUTHOR CONTRIBUTIONS
M.G. conceived the project, designed the study, carried out bioinformatics and statistical treatments for the genome-wide association studies and the identification of HaxR autosomal contigs, performed or supervised bioinformatic analyses and BAC contig construction, and wrote the manuscript. J.Y. carried out larval RNAi, cDNA sequencing, qPCR, immunohistochemistry, and imaging studies; processed various bioinformatics treatments; and wrote the manuscript. J.F. helped design the study and contributed to obtaining and maintaining various H. axyridis populations in the lab. A.L. processed molecular work for the de novo assembly, genome-wide association studies, and BAC library PCR screening. A.A. contributed to obtaining and maintaining various H. axyridis populations in the lab. B.F. helped design the study and obtained funding. B.G. analyzed the quality of the de novo genome assemblies and annotated the coding genes. J.L. designed the MinION study and processed associated bioinformatics treatments leading to the HaxR de novo assembly. E.L. processed bioinformatics treatments for the HaxB4 de novo assembly. H.P. and D.S. produced pool-sequencing (pool-seq) and individual next-generation sequencing (NGS) data. C.L.R., C.D., and M.M. helped design the MinION study, produced the MinION data, and processed upstream bioinformatics treatments. H.B. developed and helped screen the Black-4Spots BAC library. K.G. helped design the study and produced some NGS data to construct the HaxB4 de novo assembly. L.L.H. provided H. axyridis individuals from Japan and contributed to drafting the manuscript. L.S.Z. provided H. axyridis individuals from China. H.V. helped design the study, obtained funding, and produced RNA-seq data and genomic resources for the HaxB4 de novo assembly. B.P. and A.E. designed and directed the project, obtained funding, interpreted results, and wrote the manuscript. All authors commented on the manuscript.

DECLARATION OF INTEREST
The authors declare no competing interests.

CONTACT FOR REAGENT AND RESOURCE SHARING
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Benjamin Prud'homme (benjamin.prudhomme@univ-amu.fr).
EXPERIMENTAL MODELS AND SUBJECT DETAILS
Harmonia axyridis individuals were collected in the wild from eight geographic locations (Jilin-China, Changchun-China, Kyoto-Japan, Novosibirsk-Russia, Bourgogne-France, Georgia-USA, Washington-USA, and the BIOTOP biocontrol population, France) to carry out genome-wide scans for association with the proportion of individuals of a given color pattern form in pool samples. Three Red-nSpots males and three Red-nSpots females originating from a wild H. axyridis population from Mississippi (USA) were individually sequenced to identify the autosomal contigs of the HaxR assembly. We carried out four generations of full-sib crossings to obtain a Black-4Spots inbred line (origin: BIOTOP biocontrol population, France) in order to provide biological material for the de novo sequencing of the Black-4Spots allele and to build a Black-4Spots BAC library. Four H. axyridis strains homozygous for the color forms Red-nSpots, Black-2Spots (origin: Gard, France), Black-4Spots (origin: Gard, France), and Black-nSpots (origin: Oldrich Nedved's laboratory, Czech Republic) were produced and maintained in the laboratory to provide biological material for larval RNAi, qPCR, and immunohistochemistry experiments. Laboratory rearing conditions remained constant (24 °C, 60% RH; L:D 14:10), and individuals were fed ad libitum with irradiated eggs of Ephestia kuehniella.

METHOD DETAILS
Description and naming of the four color pattern forms that predominate in frequency in natural Harmonia axyridis populations
(i) Form conspicua (hereafter named "Black-2Spots" for clarity) has a black background color of elytra with two red spots on each elytron, the top one larger than the bottom one; (ii) f. spectabilis (hereafter "Black-4Spots") has a black background of elytra with one large red spot in the top-center of each elytron; (iii) f. axyridis (hereafter "Black-nSpots") has a black background color of elytra with many red spots; and (iv) f. succinea (hereafter "Red-nSpots") has a red background color of elytra with the number of black spots ranging from 0 to 19. See Figure 1A of the main text for illustrations.

De novo assembly of the Harmonia axyridis genome from individuals of the color pattern form Red-nSpots
Four Oxford Nanopore Technologies (ONT) libraries were prepared using the Ligation Sequencing Kit 1D (SQK-LSK108, ONT), according to the manufacturer's protocol 1D Genomic DNA by ligation (SQK-LSK108). Briefly, 60 µg of genomic DNA was extracted using the Genomic-tip 500/G kit (QIAGEN) from a pool of thorax tissue belonging to 12 Red-nSpots males from a lab-reared population founded by Red-nSpots individuals originating from the biocontrol population BIOTOP (France). The DNA sample was divided into four aliquots and sheared into 25 kb (n = 3) or 30 kb (n = 1) fragments using the Megaruptor system (Diagenode). One of the 25 kb sheared DNA samples was additionally size-selected prior to library preparation using the BluePippin system (Sage Science, Beverly, USA) to remove fragments smaller than 10 kb. A DNA repair step (NEBNext FFPE Repair Mix M6630) as well as an end-repair and dA-tailing step (NEBNext End repair/dA-tailing Module E7546) were then performed on 0.34 pmol of the sheared DNA sample, followed by ligation of sequencing adapters. Then 0.07 pmol of library was loaded onto an R9.5 flow cell. See Table S1 for additional HaxR statistics.

Identification of HaxR autosomal contigs using a female-to-male read mapping coverage ratio
Barcoded DNA PE libraries with an insert size of ca.
550 bp were prepared using the Illumina TruSeq Nano DNA Library Preparation Kit following the manufacturer's protocols, using six DNA samples extracted from three Red-nSpots males and three Red-nSpots females originating from a wild H. axyridis population from Mississippi (USA). Libraries were then validated on a DNA1000 chip on a Bioanalyzer (Agilent) to determine size, and quantified by qPCR using the Kapa library quantification kit to determine concentration. The cluster generation process was performed on cBot (Illumina) using the Paired-End Clustering kit (Illumina). Each individual library was further paired-end sequenced on a HiSeq 2500 or 2000 (Illumina) using the Sequence by Synthesis technique (providing 2x125 or 2x100 bp reads, respectively), with base calling achieved by the RTA software (Illumina). After removal of sequencing adapters, reads were mapped onto the HaxR assembly using default options of the mem program from the bwa 0.7.12 software package [33]. Read alignments with a mapping quality Phred-score < 20 and PCR duplicates were further removed using the view (option -q 20) and rmdup programs from the samtools 1.3.1 software [34], respectively. Read coverage at each contig position for each individual was then computed jointly using the default options of the samtools 1.3.1 depth program. To limit redundancy, only one count every 100 successive positions was retained for further analysis, and highly covered positions (> 99.9th percentile of individual coverage) were discarded. The estimated individual median overall coverage ranged from 6 to 21 (see Data S1 for details). To identify autosomal contigs, we used the ratio r of the relative (average) read coverage of contigs between all females and all males (weighted by the corresponding overall genome coverage), expected to equal 1 for autosomal contigs and 2 for X-linked contigs [48]. Contigs smaller than 100 kb were discarded from further analyses because they displayed a high dispersion of their coverages (Data S2), together with 12 of the remaining contigs that had extreme values of r (r < 0.5 or r > 2.5). We further fitted the r distribution of the 492 remaining contigs (398 Mb in total) as a Gaussian mixture model with two classes of unknown means and the same unknown variance. The latter parameters were estimated using an Expectation-Maximization algorithm as implemented in the mixtools R package [32]. As expected, the estimated means of the two classes, m1 = 0.96 and m2 = 1.90, were only slightly lower than those expected for autosomal and X-linked sequences (see Data S2 for further details), allowing the different contigs to be classified. We could therefore classify with high confidence (p-value < 0.01) 457 contigs as autosomal (377.5 Mb in total) and 18 contigs as X-linked (16.85 Mb in total).

Genome-wide scan for association with the proportion of Red-nSpots individuals using Pool-Seq data on 14 population samples
Barcoded DNA PE libraries with an insert size of ca. 450 bp were prepared using either the Illumina TruSeq DNA sample prep kit (n = 2) or the Nextera DNA Library Preparation kit (n = 12) following the manufacturer's protocols, using 14 DNA pools (each pool including the head, or the leg for some pools, from n = 40 to n = 100 individuals) collected in eight populations representative of the world-wide genetic diversity [49] and the four main color pattern forms of the species (Table S2).
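Returning to the contig classification described in the previous subsection: the equal-variance two-component mixture fit can be sketched in Python. This is a minimal illustrative sketch, not the authors' code (they used the mixtools R package); scikit-learn's GaussianMixture with a tied covariance stands in for the EM fit, and the 0.99 posterior cut-off is our stand-in for the reported p-value < 0.01 classification confidence.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_contigs(r, posterior_cut=0.99):
    """Classify contigs as autosomal or X-linked from the female:male
    relative coverage ratio r (expected ~1 for autosomes, ~2 for the X).
    r should already be filtered (contigs >= 100 kb, 0.5 < r < 2.5).

    A two-component Gaussian mixture with a shared ('tied') variance is
    fitted, mirroring the equal-variance EM fit described in the text."""
    r = np.asarray(r, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, covariance_type="tied",
                          means_init=[[1.0], [2.0]], random_state=0).fit(r)
    post = gmm.predict_proba(r)              # posterior class probabilities
    auto = int(np.argmin(gmm.means_))        # component with mean near 1
    autosomal = post[:, auto] > posterior_cut
    x_linked = post[:, 1 - auto] > posterior_cut
    return autosomal, x_linked               # ambiguous contigs match neither
```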
Illumina sequencing, processing, and mapping of reads to the HaxR assembly were performed as described above for the individual data (see Data S1 for further details). The 14 Pool-Seq BAM files were processed using the mpileup program from the samtools v1.3.1 software [34] with default options plus -d 5000 and -q 20. Variant calling was then performed on the resulting mpileup file using VarScan mpileup2cns v2.3.4 [35] (options --min-coverage 50 --min-avg-qual 20 --min-var-freq 0.001 --variants --output-vcf 1). The resulting VCF file was processed with the vcf2pooldata function from the R package poolfstats v0.1 [36], retaining only bi-allelic SNPs covered by > 4 reads, below the 99.9th overall coverage percentile in each pool, and with an overall MAF > 0.01 (computed from read counts). In total, 18,425,210 SNPs mapping to the 457 autosomal contigs were used for genome-wide association analysis, with a median coverage ranging from 18 to 45X per pool (Data S1). Genome-wide scans for association with the proportion of individuals of a given color pattern form in each pool were performed using the program BayPass 2.1 [15]. Capitalizing on the large number of available SNPs, we sub-sampled by taking one SNP every 200 SNPs along the genome, dividing the full dataset into 200 sub-datasets (each one including ca. 92,500 SNPs). These sub-datasets were further analyzed in parallel under the BayPass core model using default options for the Markov Chain Monte Carlo (MCMC) algorithm (except -npilot 15 -pilotlength 500 -burnin 2500). Three independent runs (using the option -seed) were performed for each dataset. The estimated model hyper-parameters were highly consistent across runs and datasets. Support for association of each SNP with the corresponding prevalence covariate was then evaluated using the median Bayes factor (BF) computed over the three independent runs. BFs were reported in deciban units (db), with 20 db corresponding to 100:1 odds, 30 db to 1000:1 odds, and so on [50].

De novo sequencing of the Black-4Spots color pattern allele
Starting from Black-4Spots individuals from the low-diversity BIOTOP biocontrol population (France), we carried out four generations of full-sib crossings to produce a Black-4Spots inbred line, hereafter referred to as B4sIL. This aimed at improving subsequent assembly steps by reducing the overall genetic variability. Four DNA PE libraries with insert sizes of ca. 250 bp (n = 2), 400 bp (n = 1), and 600 bp (n = 1) and two DNA Mate Pair (MP) libraries with insert sizes of ca. 3 kb and 8 kb were constructed from B4sIL DNA (3-4 individuals per library) using standard Illumina kits, together with two Long Jumping Distance libraries (Eurofins MWG Operon) with insert sizes of ca. 3 kb and 8 kb. All these libraries were sequenced on a HiSeq 2500 sequencer with either 2x100 bp or 2x150 bp reads (Data S1). Raw reads were filtered for bacterial and human sequence contaminants using deconseq [37] and trimmed using Trimmomatic v0.22 [38]. Genome assembly was then performed using AllPaths-LG [39] with default options except Haploidify = True, to account for residual polymorphism in the sequenced individuals. This led to a first assembly consisting of 6,883 scaffolds (N50 = 921 kb) totaling 378 Mb (%N = 14.8). To further improve the assembly, we generated long reads using the Pacific Biosciences RS II platform.
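As a small worked example of the BayPass conventions just described: a Bayes factor in decibans is simply 10 * log10(BF), so 20 db corresponds to 100:1 odds and 30 db to 1000:1. The sketch below (illustrative names, not BayPass code) also reproduces the interleaved one-SNP-in-200 sub-sampling used to parallelize the scans.

```python
import numpy as np

def bf_to_deciban(bf):
    """Bayes factor -> decibans: db = 10 * log10(BF).
    20 db = 100:1 odds, 30 db = 1000:1 odds, and so on."""
    return 10.0 * np.log10(bf)

def interleaved_subsets(n_snps, n_subsets=200):
    """Split SNP indices into interleaved sub-datasets (one SNP every
    n_subsets SNPs along the genome), so that each sub-dataset can be
    analyzed by BayPass independently and in parallel."""
    return [np.arange(k, n_snps, n_subsets) for k in range(n_subsets)]

subsets = interleaved_subsets(18_425_210)   # the study's 18,425,210 SNPs
print(len(subsets), len(subsets[0]))        # 200 sub-datasets, ~92,000 SNPs each
```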
To that end, seven SMRTbell libraries were prepared using size-fractionated (shearing size of 25 kb and size-selection cut-off of 10 kb) high-molecular-weight DNA prepared from B4sIL individuals and loaded into SMRT Cell 8Pac v3 for sequencing on the Pacific Biosciences RS II system with P6C4 chemistry by Treecode (Malaysia). The seven resulting sequence movie files were processed and analyzed using the Pacific Biosciences SMRT Analysis Server v2.3.0 with default settings. After the filtering step, a total of 422,222 reads (N50 = 21,521 bp) were used to carry out gap filling and scaffolding using PBJelly v1.3.1 [40]. The final Black-4Spots assembly, referred to as HaxB4, consisted of 6,586 scaffolds (N50 = 978.4 kb) totaling 393 Mb (%N = 5.84). Both the genome-wide association studies conducted as described above but with HaxB4 as a reference, and sequence alignment of the utg676 contig of the HaxR assembly using MUMmer [41] (and BLAST+ [42]) tools, allowed unambiguous identification of a 5.96 Mb HaxB4 scaffold that included the color pattern locus and adjacent genomic regions. Because the HaxB4 assembly remained less contiguous and accurate than the HaxR assembly (see Table S1 for a comparison), we performed a finishing step relying on a newly developed BAC (Bacterial Artificial Chromosome) physical map covering the Black-4Spots allele of the color pattern locus. To that end, a BAC library of 13,824 BACs (141 ± 41 kb insert size after sizing of 48 BACs) with ca. five genome equivalents of coverage was constructed in the vector pIndigoBAC-5 using high-molecular-weight DNA extracted from 300 B4sIL larvae (of developmental stage L1), as described in [51]. The BAC library, deposited at the CNRGV (INRA, Toulouse, France; https://cnrgv.toulouse.inra.fr/Library/Asian-ladybird), was organized in two-dimensional pools for PCR screening. A total of 98 PCR primer pairs designed against the HaxB4 assembly or newly generated BAC-end sequences were screened on the library. This allowed defining a Minimum Tiling Path of 14 BACs covering 1.9 Mb for local correction of scaffold mis-assembly. Shotgun sequencing was carried out using the Pacific Biosciences RS II system and an Illumina MiSeq sequencer for 3 and 10 BACs, respectively [51]. Finally, we manually edited the HaxB4 assembly using the BAC sequences to derive a high-quality 2.87 Mb sequence (%N = 0.96) including the candidate region of the Black-4Spots allele of the color pattern locus as well as adjacent regions. Alignment of the latter sequence to the HaxR assembly (i.e., the Red-nSpots allele) using nucmer from the MUMmer package [41] allowed the scaffolding of the five neighboring HaxR contigs represented in Figure 1C.

Larval RNAi
We synthesized double-stranded RNAs (dsRNA) with T7 polymerase as described previously [16]. DNA fragments for the transcription were amplified by PCR using primers containing T7 polymerase promoter sequences at their 5′ ends (see Data S1 for primer sequences). We used cDNA from the Black-4Spots or Red-nSpots forms. Sense and antisense transcripts were simultaneously synthesized using the RiboMax Express RNAi System (Promega), annealed, treated with RQ1 DNase (Promega), and precipitated with ethanol. The quality of the dsRNA was examined by agarose gel electrophoresis, its concentration was roughly measured with an ND-1000 spectrophotometer (NanoDrop Technologies), and 2 mg/mL in nuclease-free water was used for injection.
Larvae were anesthetized on a CO2 pad just before pupation, and 400-600 nL of dsRNA was injected into the hemolymph using a Nanoject (Drummond Scientific).

cDNA sequencing
Fragments of pannier were amplified separately by PCR from cDNA of elytra of the Black-4Spots or Red-nSpots forms. Total RNA was extracted using TRI reagent (ThermoFisher), followed by DNase I treatment. cDNA was generated using the First Strand cDNA Synthesis Kit (New England BioLabs). The resulting PCR fragments were approximately 1.8 kbp in length for both forms when using the primer set (Ha_pnr-F1: CGGTACGAGATAAGCGAATAAGG; Ha_pnr-R1: TTACCATTTACAAATATATTTACATGGTTGTTG). Each PCR product was inserted into the cloning vector pGEM-T Easy (Promega) for Sanger sequencing.

Quantitative PCR (qPCR)
Total RNA was extracted from whole elytra of homozygous Black-4Spots (n = 7) or Red-nSpots (n = 6) individuals, or dissected elytra of homozygous Black-4Spots individuals (n = 5), at the late pupal stage (96 hr after pupation) with TRI reagent (Invitrogen). RNA samples were reverse transcribed using the First Strand cDNA Synthesis Kit (New England BioLabs). We omitted the DNase I treatment because all pairs of forward and reverse primers for qPCR were designed in different exons of each gene, which are separated by long introns. Furthermore, for accurate comparison, we confirmed the absence of nucleotide substitutions in the primer sequences in the different color pattern forms. qPCR and data analysis were performed on a StepOne Real-Time PCR System (Applied Biosystems) with Power SYBR Green Master Mix (ThermoFisher). The data were normalized using eukaryotic initiation factor 4A (eIF4A) and 5A (eIF5A), and the statistical significance of expression differences was established using a two-tailed t-test. All primer sets and the R2 values of the standard curves are listed in Data S1.

Immunohistochemistry
An antibody against H. axyridis Pannier (Ha_Pannier) was raised by GenScript, using as antigen the first 384 amino acids of the protein. To test that this antibody recognizes Ha_Pannier, we ectopically expressed Ha_Pannier using the Gal4/UAS system in Drosophila melanogaster. Specifically, we stained engrailed-Gal4, UAS-GFP; UAS-Ha_Pannier larval imaginal disks with anti-Ha_Pannier and anti-GFP using standard procedures. We observed co-localization of the signal in the posterior compartment of the disk (data not shown), as expected, showing that the Ha_Pannier antibody recognizes Ha_Pannier in vivo. Late pupal H. axyridis elytra are covered with a cuticle layer that is impenetrable to antibodies. Therefore, before staining, we split each elytron into two halves, separating the dorsal and ventral halves. For this we followed the protocol that has been developed for Drosophila wings [18], with some modifications. Elytra were dissected from pupae at a late stage (around 96 hr after pupation) in PBS and fixed in 4% paraformaldehyde (5-10 min at room temperature). The edges of the elytra were trimmed off with a razor blade before transferring the elytra onto a piece of adhesive tape (Tesa). Another piece of adhesive tape was positioned on top of the immobilized elytra, and then gently removed to separate the two faces of the elytra.
The two pieces of tape with split elytra (one the dorsal, the other the ventral side of the elytra) were fixed in 4% paraformaldehyde again (1-5 min at room temperature) and stained (overnight at 4 °C) with the anti-Ha_Pannier antibody at 1:70 dilution in 1% bovine serum albumin (BSA), followed by visualization with Alexa-dye-conjugated secondary antibodies (ThermoFisher) at 1:100 dilution in 1% BSA (1 hr at room temperature). Cell nuclei were stained with DAPI. The pieces of tape with stained elytra were mounted on microscope slides with VECTASHIELD (Vector).

Imaging
Anti-Pannier stainings were imaged under an LSM510 confocal microscope (Zeiss) with identical settings (e.g., objective lens, pinhole size, laser power, number of stacks) for all samples. All raw confocal images were processed identically in ImageJ 1.51 [47], and then enhanced separately with Adobe Photoshop. Mean intensities of the anti-Pannier signal were measured in rectangular sections using the Plot Profile command of ImageJ 1.51. Adult H. axyridis (2 days post-eclosion) were imaged on a Leica Z6Apo macroscope equipped with a ProgRes C5 CCD camera (Jenoptik). Several images were taken at different Z positions and stacked together using HeliconFocus. Images were enhanced using Adobe Photoshop.

Genomic sequence divergence and gene structure at the color pattern locus
Genomic sequences of the Red-nSpots (utg676) and Black-4Spots (HaxB4) contigs including pannier were visualized with a dot plot using GenomeMatcher [46]. Conserved sequence blocks were further detected and visualized with the Artemis Comparison Tool (ACT) [13] for Figure 1D. To identify reliable orthologous positions between the two contigs, we first extracted long homologous blocks using blast2seq under high-stringency parameters (blastn, e-value < 0.01, alignment length ≥ 2 kbp) and plotted them on the dot plot. The linear approximation was y = x + 357764 (R2 = 0.973; y axis: utg676, x axis: HaxB4). There was a blank region (with no plotted hits) in the middle of this linear approximation, which corresponds to a highly diverged region (202 kb and 234 kb in utg676 and HaxB4, respectively). We subsequently checked shorter homology blocks (≥ 1000 bp) around the breakpoints, toward the center of the blank region, to determine the borders more precisely. We considered two or more consecutive conserved blocks (≥ 1000 bp) within a 20 × 20 kb sliding window along the approximation line. Thus, we defined two breakpoints for the boundary between continuously conserved and divergent regions, the latter spanning 173,272 bp on utg676 (554,399-727,671), in line with the ca. 170 kb region delimited by our genome-wide association study, and 209,085 bp on HaxB4 (913,592-1,112,677). To identify gene structures around the divergent genomic region (173 kb on utg676), we mapped RNA-seq reads from PRJEB13023 [26] (100 bp paired-end, adult and larval Harmonia transcripts), deposited in the Sequence Read Archive (SRA), to repeat-masked genome contigs using TopHat 2.1.0 [43], followed by assembling transcripts with Cufflinks v2.2.1 [44] using default parameters on the NIG supercomputer at ROIS National Institute of Genetics. The resulting genes were named after sequence homology to the protein databases of Drosophila melanogaster (dmel-all-translation-r6.07) and Tribolium castaneum (GCF_000002335.3_Tcas5.2). For the Black-4Spots sequence, since the number of mapped reads on exon 1 of pannier isoform 1 was low, we validated this gene structure using additional RNA-seq reads (H.V., unpublished data).
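The collinearity fit behind the reported approximation y = x + 357764 (R2 = 0.973) can be reproduced from matched homologous-block coordinates with a short sketch; the arrays and function name here are illustrative, not from the study's scripts.

```python
import numpy as np

def fit_collinearity(x_haxb4, y_utg676):
    """Ordinary least-squares line through homologous block positions on the
    two assemblies (x: HaxB4 coordinate, y: utg676 coordinate), returning
    slope, intercept, and R^2. For truly collinear contigs the slope should
    be close to 1, and the intercept then gives the coordinate offset."""
    x = np.asarray(x_haxb4, dtype=float)
    y = np.asarray(y_utg676, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return slope, intercept, r2
```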
The mapped reads were further inspected by eye in the Integrative Genomics Viewer (IGV) [45] to determine the gene structures.

Assessing large-scale sequence divergence among the four color form alleles of Harmonia axyridis based on read mapping coverage of Pool-Seq data on both the HaxR and HaxB4 assemblies
Comparing the de novo sequences surrounding pannier for the Red-nSpots and Black-4Spots alleles highlighted large-scale divergence in the upstream region covering ca. 170 kb in the HaxR and HaxB4 assemblies (Figure 1D, Figure S3). This explained the clustering of SNPs harboring strong weights of evidence in favor of association with the proportion of Red-nSpots or Black-4Spots individuals in each sequenced pool (Figure S1). Interestingly, a similar clustering of strongly associated SNPs in the same genomic region was also observed in the genome scans for association with the Black-2Spots or Black-nSpots forms (Figure S1), suggesting extended sequence divergence in the upstream region of pannier for the Black-2Spots and Black-nSpots alleles too. To further assess the level of sequence divergence between the alleles of the four main color pattern forms, and especially for the Black-2Spots and Black-nSpots alleles that were not sequenced de novo, we examined the read mapping coverage of the Pool-Seq data representative of these forms. The rationale of this approach was that extended divergence in the sequences of an allele represented at high frequency in a given pool, relative to the reference assembly onto which reads are mapped, is expected to translate into a local decrease in read coverage. We thus considered the sequence data available for: (i) the four pools (CH2-R, BRG-R, WAS-R, and BIO-R) including Red-nSpots individuals only (i.e., with a Red-nSpots allele frequency equal to 1); (ii) the three pools (CH2-B4, BIO-B4, and BRG-B4) consisting of Black-4Spots individuals only (i.e., with an expected Black-4Spots allele frequency ≥ 0.5, due to the possible presence of Red-nSpots alleles with hidden expression in heterozygous Black-4Spots/Red-nSpots individuals); (iii) the CH2-B2 pool consisting of Black-2Spots individuals only (i.e., with an expected Black-2Spots allele frequency ≥ 0.5, due to the possible presence of Red-nSpots or Black-4Spots alleles with hidden expression in heterozygous Black-2Spots/Red-nSpots or Black-2Spots/Black-4Spots individuals); and (iv) the NOV-Bn pool consisting of Black-nSpots individuals only (i.e., with an expected Black-nSpots allele frequency close to one due to the fixation of the Black-nSpots form in the corresponding wild population) (Figure S4). The five other remaining DNA pools that were used for the genome-wide association study were not considered here, because they either consisted of a mix of individuals from two melanic color pattern forms (e.g., the CH2-B2 pool) or displayed a genome-wide median coverage ≤ 25 (Data S1). Based on the mpileup file combining the mapping results of the Pool-Seq data onto the Red-nSpots assembly HaxR, we computed read coverages (using default options of the samtools 1.3.1 depth program) at each position of the utg676 contig covering the Red-nSpots allele for the above nine DNA pools of interest. After discarding positions covered by < 10 reads over all the pools, or with a within-pool coverage above the 95th percentile in at least one pool, the utg676 contig was divided into consecutive windows of 10,000 positions with an overlap of 5,000 positions.
Let c_{i,p} represent the window coverage (computed as the average coverage over the 10,000 window positions) for window i in pool p; m_p the mean genome-wide coverage (computed over all HaxR autosomal contigs, excluding utg676 and over-covered positions) in pool p; and s_{i,p} = c_{i,p}/m_p the standardized window coverage for window i in pool p. To identify regions with lowered read mapping efficiency due to high sequence divergence with respect to the Red-nSpots allele (HaxR assembly), we computed a relative (standardized) window coverage as rc_i
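A minimal sketch of the window statistics defined above follows; the final relative coverage rc is truncated in the source text, so the sketch stops at the standardized coverage s_{i,p}. Names are illustrative.

```python
import numpy as np

def standardized_window_coverage(depth, genome_mean, win=10_000, step=5_000):
    """Compute s_{i,p} = c_{i,p} / m_p for one pool along one contig.

    depth       : per-position read depth on the contig (positions already
                  filtered as described in the text)
    genome_mean : m_p, the pool's mean genome-wide coverage (utg676 and
                  over-covered positions excluded)
    win, step   : 10 kb windows with 5 kb overlap, as in the text
    Values near 1 indicate alleles matching the HaxR reference; lowered
    values flag diverged regions where reads fail to map."""
    depth = np.asarray(depth, dtype=float)
    starts = np.arange(0, len(depth) - win + 1, step)
    c = np.array([depth[s:s + win].mean() for s in starts])   # c_{i,p}
    return c / genome_mean                                    # s_{i,p}
```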
2018-08-23T18:51:56.375Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "18f16ead4411949d9bda8c86cabb2ca643f36364", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S0960982218310686/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8412f9149368445c58cb6eaf42521ad130203848", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
117138253
pes2o/s2orc
v3-fos-license
5-axes modular CNC machining center

The paper presents the development of a 5-axes CNC machining center. The main goal of the machine was to provide students with a practical setup for training in advanced CAM techniques. The mechanical structure of the machine was built in a modular way by a specialized company, which also implemented the CNC controller. The authors of this paper developed the geometric and kinematic model of the CNC machining center and the post-processor, in order to use the machine in a CAM environment.

Introduction
CNC machine-tools represent the backbone of today's machine-building industry. Consequently, using and programming CNC machine-tools are important subjects within the curriculum in technical universities. While 3-axes CNC machine-tools are widely used in industry, we are currently witnessing the introduction of 5-axes CNC machine-tools in more and more industry sectors. In order to offer students state-of-the-art training with regard to CNC machine-tools, universities have to equip their laboratories with both 3-axes and 5-axes machine-tools. While industrial 5-axes CNC machine-tools may be a rational choice where complex and highly accurate parts are required, for teaching purposes this kind of equipment may be too expensive. So, the problem tackled by this research work was to develop a 5-axes machining center with technological capabilities similar to industrial state-of-the-art solutions, at an affordable level of development costs, and suitable for teaching future engineers how to use and program 5-axes CNC machine-tools.

Open architecture CNC machine-tools
Industrial CNC machine-tools are supplied as closed-architecture systems. Their structure, kinematics, and CNC controllers are designed to be used solely for their basic purpose, without any possibility to change them. In practice, these systems have no re-configurability and cannot be adapted to perform machining processes other than the ones they were designed for. Re-configurability is seen here as the ability to add or remove modules from the machine, to equip it with supplementary degrees of freedom (axes). Their CNC controllers also have a closed architecture which cannot be modified. In contrast to closed architectures, research on open-architecture CNC systems has been reported in the literature [1][2]. Among the best-known approaches to developing such systems are OSACA (Europe), OMAC (USA), and OMEC (Japan) [3]. These systems are based on building a CNC system from modular components and standardized interfaces. Other approaches to developing open-architecture CNC systems rely on the STEP-NC concept, which aims to provide a machine-independent programming language [4].

The structure of the 5-axes CNC machining center
The workspace of CNC machine-tools is defined as a Cartesian one, with three translational axes (X, Y, and Z) and three rotational axes (A, B, C), as presented in Figure 1. A large share of the CNC machine-tools on the market can only perform translational movements on the X, Y, and Z axes and are called 3-axes CNC machine-tools. In order to machine complex parts, sometimes one or even two supplementary rotational movements in combination with translation along the X, Y, and Z axes are required. In order to perform such motions, 5-axes CNC machine-tools are used. A simplified diagram of a 5-axes CNC machine-tool, with X, Y, Z translational axes and A, C rotational axes, is presented in Figure 2.
In order to develop the 5-axes modular machining center, a requirements list regarding the machine's characteristics was built. The list is presented below:
- the kinematics of the machine have to allow simultaneous movements on 5 axes;
- the machine has to be built in a modular way, in order to allow its use both as a 3-axes and a 5-axes machine-tool;
- the machine has to be programmed using G-code (formalized as ISO 6983);
- the control loops on each axis have to be closed-loop ones (using servomotors for actuation and encoders for feedback; no stepping motors were allowed);
- the overall development costs of the system must not exceed 25,000 Euro.

Mechanical structure
The machine structure was designed modularly, by combining the following modules:
A. Custom-built modules:
- mechanical frame; fixed table; movable portal unit for X-axis; movable slide for Y-axis; movable slide with Z-axis.
B. Special units from the ISEL company:
- rotary tilting unit DSH-S combined with rotary unit RDH-S;
- spindle motor unit iSA 1500 (with manual tool exchange).
The overall view of the 5-axes CNC machining center is presented in Figure 3. The custom-built modules also include in their structure specific ISEL components such as linear guideways and linear slides. The general assembly of the machining center was made by the company General Numeric, from Brașov, Romania, a company specialized in manufacturing CNC profiling machines (oxy-gas and/or plasma cutting) and CNC routers. In Figure 4b, 1 represents the counter bearing, 2 the swivel unit, and 3 the rotary indexing table RDH-S.

The CNC controller
In order to provide open-architecture capabilities to the machining center, the LinuxCNC software system was used as CNC controller. It can drive milling machines, lathes, 3D printers, laser cutters, plasma cutters, robot arms, hexapods, and more. LinuxCNC is also compatible with many popular machine-control hardware interfaces and provides the user with several graphical user interfaces [6]. The integration of LinuxCNC as controller of the 5-axes machining center was also made by the General Numeric company. The GUI used for this machine-tool is presented in Figure 5.

The geometric and kinematic model
5-axes machining operations raise a series of problems which do not appear during normal 3-axes machining. While during 3-axes machining only reciprocal movements between tool and workpiece occur and have to be taken into consideration for collision avoidance, things are very different for 5-axes machining. The number of movements is higher, because of the supplementary rotational movements, and collisions may occur not only between tool and workpiece but also between tool, workpiece, and machine slides and platters, and between the machine slides and platters themselves. The simulation interface from LinuxCNC is only able to represent the toolpaths, without taking into consideration the workpiece and machine geometry and the potential collisions. Moreover, most CAM software packages do not take into consideration the geometry and kinematics of the machine when simulating the machining process. To address this problem, the authors of this work have developed a geometric and kinematic model of the 5-axes machining center, by performing the following steps [7]:
- building a 3D model of the machine-tool; this step was performed by a joint modelling and import process: while the custom-built modules of the machine were modeled, the 3D models of the specialized ISEL modules were imported from the files provided by the manufacturer.
Figure 6 shows the 3D model of the machining center, developed in CATIA.
- separating each kinematic axis of the machine from the 3D model and saving it as an individual module; during these first two steps, some alterations of the geometry of the components are allowed in order to reduce the size of the files. The user has to keep in mind that only the components which may lead to collisions are important, while inner components, or components which cannot interfere with other components, may be simplified or even removed. For example, from Figure 6 it can be noticed that the height of the machine has been reduced, because the machine supports cannot collide with any moving parts.
- exporting the axis modules as IGES (.igs) files and re-importing them into a CAM software package;
- using a specialized module of the CAM software to build the kinematic model of the machine; during this final step, the user has to define the hierarchies between the axes (for example, axis X supports both axes Y and Z, and axis Y supports axis Z) and the kinematic dependencies between the axes (for example, axis X cannot be moved without also moving axes Y and Z). Also, the stroke length (and angular limits) for each axis has to be defined; a minimal code sketch of such an axis hierarchy is given after the conclusion below.
After building the geometric and kinematic model, the CAM software was able to realistically simulate the machining processes unfolded on the machine. Figure 7 presents a screenshot from a simulation of a 3+2 axes indexed milling process (the rotational movements involved in the process are indexing movements, performed outside the milling process and used only for positioning the part).

Conclusion
During this research program, a modular 5-axes machining center was developed and implemented. The machine was built by a specialized industrial company according to a requirements list made by the authors of this work. In order to be able to use the machine in a CAM environment, its geometric and kinematic model was developed. By using the geometric and kinematic model, the CAM program is able to generate the complex toolpaths required for 5-axes machining operations, by taking into consideration the real kinematic capabilities of the machine (hierarchies and dependencies between machine axes). Also, the simulation process is able to detect not only tool-workpiece collisions, but also collisions between the mobile elements of the machine.
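The axis hierarchies and dependencies described in the modelling steps above can be captured in a few lines of code. The sketch below is illustrative only: the class, the parent-child propagation, and the stroke limits are placeholders, not the CAM package's actual kinematic-model API or the real machine strokes.

```python
from dataclasses import dataclass, field

@dataclass
class Axis:
    """One kinematic axis: travel limits plus the axes it carries.
    Moving a parent axis displaces every child axis with it, which models
    the dependency 'X cannot be moved without also moving Y and Z'."""
    name: str
    limits: tuple                 # (min, max): stroke in mm or angle in degrees
    children: list = field(default_factory=list)
    position: float = 0.0
    frame_offset: float = 0.0     # displacement inherited from parent axes

    def move_to(self, target: float) -> None:
        lo, hi = self.limits
        if not lo <= target <= hi:
            raise ValueError(f"{self.name}: {target} outside limits {self.limits}")
        delta = target - self.position
        self.position = target
        for child in self.children:        # propagate to carried axes
            child.shift_frame(delta)

    def shift_frame(self, delta: float) -> None:
        self.frame_offset += delta
        for child in self.children:
            child.shift_frame(delta)

# Hierarchy from the text: X supports Y and Z, Y supports Z; the A/C rotary
# tilting unit sits on the fixed table. Limit values are placeholders.
z = Axis("Z", (0, 200))
y = Axis("Y", (0, 400), children=[z])
x = Axis("X", (0, 600), children=[y])
a = Axis("A", (-100, 100))   # tilting axis, degrees (placeholder range)
c = Axis("C", (0, 360))      # rotary axis, degrees (placeholder range)

x.move_to(150)   # moving X shifts the frames of Y and Z as well
```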
2019-04-16T13:21:57.391Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "1822cd7298d2a0298233f22870be4c328066d0c0", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/26/matecconf_imane2017_06004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cc4ee406fdac992dfd5be4156f07571907c745f9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
13304105
pes2o/s2orc
v3-fos-license
Evaluation of a New IFN-γ Release Assay for Rapid Diagnosis of Active Tuberculosis in a High-Incidence Setting

Blood-based interferon-gamma (IFN-γ) release assays (IGRAs) have been proven to be useful in the diagnosis of Mycobacterium tuberculosis (Mtb) infection. However, IGRAs have not been recommended for clinical practice in most low-income settings, due to cost limitations and the shortage of available clinical data. The established T-SPOT.TB assay, containing the Mtb-specific antigens ESAT-6 and CFP10, is widely used for immunodiagnosis of Mtb infection, but its high cost is one of the factors restricting its clinical application in developing countries. More recently, a cost-saving IGRA assay, TS-SPOT, was approved in China. This new assay contains an additional antigen, Rv3615c. Rv3615c contains broadly recognized CD4+ and CD8+ epitopes, and T-cell responses to Rv3615c are as specific for Mtb infection as the responses to ESAT-6 and CFP10 in both Mtb-infected humans and M. bovis-infected cattle. Therefore, we assessed the likely effect of including Rv3615c as a stimulus besides ESAT-6 and CFP10 in an IGRA assay and evaluated the performance of TS-SPOT for the diagnosis of Mtb infection and active TB compared with T-SPOT.TB. We tested 155 active TB patients, 90 non-TB lung disease patients, and 55 healthy individuals. The results showed an improved positive rate for the diagnosis of active TB and Mtb infection, which could be attributable to the inclusion of Rv3615c in the mixture of stimulatory antigens. The diagnostic efficiency of the TS-SPOT assay for active TB was as follows: sensitivity 80.00%, specificity 83.45%, positive predictive value (PPV) 83.78%, negative predictive value (NPV) 83.45%, positive likelihood ratio (LR+) 4.83, and negative likelihood ratio (LR−) 0.24. The results were similar to those of T-SPOT.TB, with excellent agreement (κ = 0.91, 95% CI: 0.85-0.95) observed between the two assays. The sensitivity of the TS-SPOT assay varied for patients with different forms of active TB, with the highest sensitivity for patients with culture-positive pulmonary TB (92.16%) and the lowest for those with tuberculous meningitis (50.00%). Taken together, the current evidence indicates that this new TS-SPOT assay is a useful adjunct to the current tests for rapid diagnosis of active TB and Mtb infection in low-income and high-incidence settings, owing to its cost-effectiveness and high quality.

INTRODUCTION
Tuberculosis (TB), the disease caused by Mycobacterium tuberculosis (Mtb), still remains a major global public health concern. According to the updated WHO global TB report, approximately one-third of the world's population is infected asymptomatically with Mtb, and about 5-10% of these people develop active TB. In 2015, there were an estimated 10.4 million new TB cases and 1.4 million TB deaths worldwide, and an additional 0.4 million deaths resulting from TB disease among people co-infected with HIV (WHO, 2016). Although the number of TB deaths dropped by 22% during the period 2000-2015, TB remained one of the top 10 causes of death worldwide in 2015 (WHO, 2016). The results of the fifth Chinese national TB epidemiological survey in 2010 showed that the TB prevalence was 469/100,000 among the population over 15 years old, and 4.99 million active TB cases were estimated across the country (The Office of the Fifth National TB Epidemiological Survey, 2012).
In China, the diagnosis of TB is usually based on a combination of regular doctor inquiry, clinical presentation, radiological and pathological changes, and bacteriological findings of acid-fast bacilli (AFB). However, smear microscopy for AFB has low sensitivity, as only 44% of all new adult cases and 15-20% of childhood cases are identified by the presence of AFB in sputum smears (Pai and O'Brien, 2008). The gold standard for the definite diagnosis of TB is the detection of Mtb by bacterial culture, which usually takes >1 month to provide a diagnostic end-point and delays treatment initiation (Richeldi, 2006). Moreover, diagnosis and treatment decisions may be difficult in cases with clinical suspicion of TB and negative AFB sputum smears. Being rapid and easy to apply, the tuberculin skin test (TST) has been used as an immunodiagnostic tool to support the physician's decision process for decades, but it suffers from poor specificity due to cross-reaction with non-tuberculous mycobacteria (NTM) or Bacillus Calmette-Guérin (BCG) vaccination (Richeldi, 2006), especially in developing countries with high TB prevalence. In recent years, blood-based in vitro interferon (IFN)-γ release assays (IGRAs) have been developed as alternatives to TST for the rapid immunodiagnosis of Mtb infection. Both TST and IGRAs detect the presence of persistent Mtb-specific T-cell responses and represent indirect markers of past or present infection (Young et al., 2009). IGRAs measure IFN-γ secretion after in vitro stimulation of whole blood or peripheral blood mononuclear cells (PBMCs) with the antigens ESAT-6 and CFP10. These antigens are encoded in the region of difference-1 (RD1), a portion of the Mtb genome absent from all BCG strains and most NTM species (Rangaka et al., 2012). Currently, two commercial systems are available: the QuantiFERON-TB Gold in-tube assay (QFT-GIT; Cellestis, Carnegie, Australia), which also includes a third antigen, TB7.7, and measures IFN-γ using an ELISA method; and the T-SPOT.TB assay (Oxford Immunotec, Abingdon, UK), which quantitates IFN-γ-producing cells with the enzyme-linked immunospot (ELISPOT) technique. Compared to TST, IGRAs are as sensitive and more specific for detecting latent tuberculosis infection (LTBI) and have better correlation with the gradient of Mtb exposure (Ewer et al., 2003; Hill et al., 2005; Diel et al., 2006). However, neither IGRAs performed on blood nor TST appear to be able to distinguish between individuals with LTBI and active TB (ATB; Kang et al., 2007). Although these tests were primarily developed for the diagnosis of latent TB, clinicians have also been searching for improved diagnostic tools and have explored IGRAs for the immunodiagnosis of active TB (Kang et al., 2007; Nishimura et al., 2008; Winqvist et al., 2009). In 2010, the US Centers for Disease Control and Prevention (CDC) updated their guidelines for testing for TB infection, and concluded that IGRAs may be used instead of TST in all situations in which the CDC recommends TST as an aid to the diagnosis of Mtb infection (Mazurek et al., 2010). Recently, IGRAs have also been evaluated for the diagnosis of active TB directly on extrasanguinous fluids from sites of infection, including bronchoalveolar lavage fluid (BALF; Nishimura et al., 2008; Winqvist et al., 2009), pleural effusion (PE; Metcalfe et al., 2010), and cerebrospinal fluid (CSF; Thomas et al., 2008).
Nevertheless, in most developing countries, including China, the clinical use of IGRAs is not recommended because of insufficient evidence of their performance in high TB burden settings (Metcalfe et al., 2011). Additionally, the high cost of QFT-GIT and T-SPOT.TB is one of the factors restricting their clinical application in developing countries (Steffen et al., 2013). More recently, a domestic IGRA named TS-SPOT (Tongsheng Biotech, Beijing, China) was licensed by the China Food and Drug Administration (CFDA). This test includes a third Mtb-specific antigen, Rv3615c, as a stimulus alongside ESAT-6 and CFP10 and costs nearly half as much as the two widely used IGRA systems mentioned above. Antigen Rv3615c (Esx-1 substrate protein C, EspC), encoded outside RD1, is similar in size and sequence homology to ESAT-6 and CFP10 (MacGurn et al., 2005) and has high specificity, conferring strong potential for T-cell-based immunodiagnosis (Sidders et al., 2008; Millington et al., 2011) and vaccine development (Kong et al., 2014; Teng et al., 2015). Rv3615c contains broadly recognized CD4+ and CD8+ epitopes, and T-cell responses to Rv3615c are as specific for Mtb infection as the responses to ESAT-6 and CFP10 in both Mtb-infected humans (Millington et al., 2011) and M. bovis-infected cattle (Sidders et al., 2008).

In this study, we aimed to determine whether inclusion of Rv3615c as an additional stimulus in an IGRA improved diagnostic efficiency for Mtb infection compared with T-SPOT.TB in populations at various levels of risk in China. We also assessed whether this newly licensed TS-SPOT assay could be used for the diagnosis of active TB. An inexpensive, high-quality assay would facilitate the widespread clinical use of IGRAs, leading to the accumulation of more clinical data and clarification of their clinical value for ATB diagnosis in developing countries with high TB burden.

Clinical Trial Design and Participants

In this prospective clinical study, a total of 307 participants were recruited at Shanghai Public Health Clinical Center (SPHCC, Shanghai, China) from January to December 2015. Characteristics of all participants are shown in Table 1. Study protocols were approved by the Institutional Review Board of SPHCC, and written informed consent was obtained from all participants. All participants were HIV-negative and had received BCG vaccination in early childhood or during adolescence. For the suspected TB cases, the diagnosis of active tuberculosis (ATB) was made on the basis of all clinical, radiological, microbiological, and pathological information collected after recruitment and the response to anti-TB therapy for at least 3 months. ATB subjects were further classified into pulmonary tuberculosis (PTB) and extra-pulmonary tuberculosis (EPTB) subgroups. Recruited patients were classified as non-TB, and therefore excluded from ATB, when they had an established alternative diagnosis, such as lung cancer, pneumonia, chronic bronchitis, or bronchiectasis, all of which are easily confused with TB. For comparison, 55 healthy individuals with normal chest radiographs, no known history of TB infection, and no symptoms of active TB were also recruited. Ultimately, 155 ATB patients (118 PTB and 37 EPTB) and 145 non-active TB (NATB) controls (90 non-TB pulmonary disease patients and 55 healthy individuals) were tested with both the TS-SPOT and T-SPOT.TB assays (Figure 1).
Ex vivo IFN-γ ELISPOT Assay

Fresh blood samples were collected from participants before treatment or within 7 days of the start of treatment, and PBMCs were isolated from whole blood by centrifugation over a Ficoll density gradient (TBD Science, Tianjin, China) in a bio-safety level 3 (BSL-3) laboratory. Cells were re-suspended in AIM-V medium (Gibco, ThermoFisher, USA) and divided into two aliquots. A total of 2.5 × 10^5 cells/well were seeded in 96-well plates precoated with anti-IFN-γ capture monoclonal antibody, and ex vivo ELISPOT assays were performed according to the manufacturer's instructions. Briefly, cells were incubated for 18-20 h at 37 °C, 5% CO2 with the different Mtb-specific antigens (one peptide pool comprising ESAT-6, CFP10, and Rv3615c peptides for TS-SPOT; two separate peptide pools comprising ESAT-6 or CFP10 peptides for T-SPOT.TB) to stimulate IFN-γ secretion by effector T cells. PBMCs in medium alone or stimulated with phytohemagglutinin (PHA) at 2.5 µg/ml served as negative and positive controls, respectively. Biotinylated anti-IFN-γ detection monoclonal antibody was then added for 4 h, followed by streptavidin-enzyme conjugate for 1 h. After a washing step, the chromogenic substrate was added, and individual spots were counted with an automated ELISPOT reader (Champspot III; Sage Creation Science, Beijing, China).

The result of TS-SPOT was considered positive if the Panel Test (containing the ESAT-6/CFP10/Rv3615c peptide pool) showed at least six spot-forming cells (SFCs) more than the negative control when the negative control had ≤5 SFCs, or if the number of spots in the Panel Test was at least double the number in the negative control when the negative control had >5 SFCs. For the Oxford T-SPOT.TB assay, the result was positive if either of the two panels, or both, showed at least six SFCs more than the negative control when the negative control had ≤5 SFCs, or if the number of spots in either panel was at least double the number in the negative control when the negative control had >5 SFCs.

Radiographic Image Examination and Analysis

Chest radiographs of PTB patients were reviewed by two experienced, board-certified radiologists who were blinded to the TS-SPOT results. Images were assessed for the presence and distribution of parenchymal abnormalities consistent with infiltrates and/or cavities to determine the extent and severity of disease. Pulmonary disease was defined as severe when lesions characterized by lobe infiltrate, pleural effusions, and cavities involved two or more lobes in one or both lungs (Geng et al., 2005).

Detection of Mtb DNA in Sputum by Real-Time PCR

Mtb DNA detection was routinely performed on aliquots of NALC-NaOH-treated sputum specimens by amplifying a 254-bp fragment of the Mtb IS6110 gene using a real-time PCR kit (Daan Biotech, Guangzhou, China) according to the manufacturer's instructions. The sequences of the primers and probe used were 5′-CGTGAGGGCATCGAGGTGGC-3′, 5′-GCGTAGGCGTCGGTGACAAA-3′, and 5′-TGCTACCCACAGCCGGTTAGG-3′, respectively. The result was expressed as log10 copies of Mtb DNA per ml of sputum.
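The ELISPOT positivity criteria described above are a simple deterministic rule, so they can be expressed compactly in code. Below is a minimal sketch in Python; the function names are ours, and the logic restates only the thresholds given in the text.

```python
def panel_positive(test_sfc: int, control_sfc: int) -> bool:
    """ELISPOT panel positivity rule as described in the text:
    - control <= 5 SFCs: positive if the panel exceeds the control by >= 6 SFCs
    - control > 5 SFCs: positive if the panel count is at least double the control
    """
    if control_sfc <= 5:
        return test_sfc - control_sfc >= 6
    return test_sfc >= 2 * control_sfc


def ts_spot_positive(pool_sfc: int, control_sfc: int) -> bool:
    # TS-SPOT stimulates with a single pooled panel (ESAT-6/CFP10/Rv3615c).
    return panel_positive(pool_sfc, control_sfc)


def t_spot_tb_positive(panel_a_sfc: int, panel_b_sfc: int, control_sfc: int) -> bool:
    # T-SPOT.TB uses two separate panels (ESAT-6 pool and CFP10 pool);
    # the assay is positive if either panel meets the rule.
    return panel_positive(panel_a_sfc, control_sfc) or panel_positive(panel_b_sfc, control_sfc)
```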
Statistical Analysis

Sensitivity, specificity, PPV, NPV, LR+, LR−, and diagnostic efficiency were calculated using SPSS statistical software (SPSS Inc., Chicago, USA). Positive rates in different groups were compared using the χ2 test. P < 0.05 was considered statistically significant, with 95% confidence intervals reported. Agreement between tests was assessed by estimating Cohen's κ coefficient, where κ ≥ 0.75 indicates excellent agreement, 0.75 > κ ≥ 0.4 indicates fair to good agreement, and κ < 0.4 indicates poor agreement (Mantegani et al., 2006).

Baseline Characteristics of the Study Population

Among the 307 enrolled participants, 7 were excluded from the final analysis because of unqualified sampling; thus, a total of 300 participants were successfully evaluated with both the TS-SPOT and T-SPOT.TB assays. The sample population included 118 pulmonary tuberculosis (PTB) cases, 37 extra-pulmonary tuberculosis (EPTB) cases, 55 healthy controls (HC), and 90 non-TB pulmonary disease patients (Non-TB; Figure 1). Within the ATB group, patients with both EPTB and PTB were classified into the PTB group. The non-TB pulmonary disease patients and healthy individuals served as negative controls. The baseline characteristics of the enrolled subjects are summarized in Table 1. The mean ages of the patients with PTB, EPTB, Non-TB, and healthy controls were 44.29 ± 20.29, 30.24 ± 20.11, 42.62 ± 22.54, and 28.47 ± 6.76 years, respectively. Male patients made up a high proportion of the ATB group. All participants had been BCG vaccinated. Diabetes was the most common underlying comorbidity in patients with TB, followed by liver disease, autoimmune diseases, and chronic renal disease (Table 1). For the 118 PTB cases, excluding those from whom a sputum sample could not be collected and omitting samples contaminated during culture, 103 sputum smear and 103 bacterial culture results were available for comparison as the "gold standard".

Analysis of IGRA Results Using Clinical Diagnosis as Reference

Of the 155 clinically diagnosed ATB patients (76.13% PTB and 23.87% EPTB), 124 and 119 cases were detected as positive by TS-SPOT and T-SPOT.TB, respectively, yielding diagnostic sensitivities of 80.00% (124/155) and 76.77% (119/155). Of the 145 controls (90 non-TB pulmonary disease patients and 55 healthy individuals), 24 and 21 cases were detected as positive by TS-SPOT and T-SPOT.TB, respectively, yielding diagnostic specificities of 83.45% (121/145) and 85.52% (124/145) (Table 2). In comparison, the positive rate was 36.89% (38/103) for smear microscopy and 49.51% (51/103) for bacterial culture. There was a significant difference in diagnostic sensitivity between the bacteriological methods and the two IGRA assays (p < 0.01).

Comparison of the Performance of TS-SPOT and T-SPOT.TB for ATB Diagnosis

When using non-TB pulmonary disease patients and healthy individuals as negative controls, the sensitivity and specificity were 80.00% and 83.45%, respectively, for TS-SPOT, and 76.77% and 85.52% for T-SPOT.TB. There was no statistically significant difference between these assays in either sensitivity or specificity (P > 0.05, chi-square test). The PPVs of the TS-SPOT and T-SPOT.TB assays were 83.78% and 85.00%, respectively; the NPVs were 79.61% and 77.50%; the LR+ values were 4.83 and 5.30; the LR− values were 0.24 and 0.27; and the diagnostic efficiencies for ATB diagnosis were 81.67% and 81.00%, respectively (Table 2). Statistical analysis showed no significant difference between these two IGRA assays.
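For reference, the diagnostic indices and the agreement statistic used in this study can all be computed from 2×2 counts alone. The Python sketch below (our own helper names) reproduces the reported TS-SPOT figures from the counts given above (124 true positives, 31 false negatives, 24 false positives, 121 true negatives) and the assay-agreement counts reported in the next section (137 both positive, 11 TS-SPOT-only, 3 T-SPOT.TB-only, 149 both negative).

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard indices from a 2x2 diagnostic table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
        "efficiency": (tp + tn) / (tp + fn + fp + tn),  # overall accuracy
    }


def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for agreement between two binary assays A and B."""
    n = both_pos + a_only + b_only + both_neg
    p_obs = (both_pos + both_neg) / n
    a_pos, b_pos = both_pos + a_only, both_pos + b_only
    p_exp = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)


print(diagnostic_metrics(tp=124, fn=31, fp=24, tn=121))  # TS-SPOT vs. clinical diagnosis
print(round(cohens_kappa(137, 11, 3, 149), 2))           # ~0.91, matching the text
```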
Agreement between the TS-SPOT and T-SPOT.TB Assays

For further comparison of TS-SPOT and T-SPOT.TB, the agreement and response correlation between the two assays were analyzed with SPSS 11.0 across all 300 subjects. Concordance between the assays was measured using the Kappa index, and the correlation between TS-SPOT and T-SPOT.TB responses was analyzed non-parametrically by Spearman's correlation. Among the 300 subjects, TS-SPOT and T-SPOT.TB were both positive in 137 and both negative in 149, resulting in excellent agreement (κ = 0.91, 95% CI: 0.85-0.95) between the two assays (Table 3). Among the 155 active tuberculosis patients, 117 subjects were positive by both assays and 9 had discordant results: 7 were TS-SPOT+/T-SPOT.TB− and 2 were TS-SPOT−/T-SPOT.TB+ (Figure 2A). Among the 145 controls, 20 were positive by both assays, suggesting that they might be LTBI individuals. Of the 145 control subjects, 5 had discordant results: 4 were TS-SPOT+/T-SPOT.TB− and 1 was TS-SPOT−/T-SPOT.TB+ (Figure 2B). These data indicate excellent concordance between the two IGRA assays. The sensitivity of the TS-SPOT assay was slightly higher than that of T-SPOT.TB for the diagnosis of Mtb infection, including active TB and latent TB infection, although the difference did not reach statistical significance for either group.

Performance of the TS-SPOT Assay for PTB Diagnosis Using Bacteriological Tests as the "Gold Standard"

When the results of the bacteriological tests were used as the "gold standard" to further evaluate the performance of the TS-SPOT assay for diagnosing PTB, the SFCs in the bacteria-positive cases were significantly higher than those in the bacteria-negative cases (Figure 3). The sensitivity was 86.84% (33/38) in smear-positive cases, 92.16% (47/51) in culture-positive cases, and 94.28% (33/35) in cases positive by both smear and culture. These rates were significantly higher than in the total ATB cases (80% sensitivity). Additionally, the TS-SPOT assay detected 81.54% (53/65) of smear-negative cases, 73.07% (38/52) of culture-negative cases, and 74.46% (35/47) of cases negative by both smear and culture (Table 4), which was still significantly higher than the overall positive rates of the bacteriological tests.

Clinical Characteristics Associated with False-Negative TS-SPOT Results in AFB-Positive PTB Patients

The TS-SPOT assay gave false-negative results for 7 patients with smear- or culture-positive TB, including 2 patients with both smear- and culture-positive TB. Compared with AFB-positive patients with positive IFN-γ TS-SPOT responses, those with false-negative TS-SPOT results were older (P < 0.01), had fewer bacterial DNA copies in sputum specimens as measured by real-time PCR (P < 0.01), and had proportionally more severe disease as defined by image examination, although this difference was not significant (Table 5).

IFN-γ Responses in Relation to the Clinical Manifestation of TB

It has been suggested that different forms of TB may elicit different immune response profiles and may therefore affect the sensitivity of IGRAs (Nishimura et al., 2008). To address this question, 37 EPTB patients with various forms of disease, including TB meningitis (TBM), lymph node TB (LNTB), TB peritonitis (TBP), TB pleurisy (TBPL), bone TB (BTB), intestinal TB (ITB), and renal TB (RTB; Figure 3C), were recruited, and the IFN-γ responses of patients with different forms of TB were compared.
In PBMCs from ATB patients, the mean magnitude of the IFN-γ response to the mixture of Mtb-specific antigens (ESAT-6, CFP10, and Rv3615c) was significantly higher than that in PBMCs from NATB controls (Figure 3A). Patients with extra-pulmonary TB had lower IFN-γ responses than those with PTB (Figure 3B), with the highest sensitivity observed for patients with culture-positive pulmonary TB (47/51, 92.16%) and the lowest for those with tuberculous meningitis (4/8, 50.00%; Table 4). However, there were no significant differences among the various EPTB forms (Figure 3D). Accordingly, the positive rate of the TS-SPOT assay in EPTB (24/37, 64.86%) was also lower than that in PTB (100/118, 84.75%).

DISCUSSION

IGRAs are in vitro immunologic diagnostic tests for identifying Mtb infection. Currently, two systems, QFT-GIT and T-SPOT.TB, are commercially available, and their use in clinical practice is increasingly widespread. Although IGRAs were initially conceived to support the diagnosis of latent infection, an increasing body of evidence has been published on their use in the detection of active TB (Kang et al., 2007; Nishimura et al., 2008; Thomas et al., 2008; Winqvist et al., 2009; Mazurek et al., 2010; Metcalfe et al., 2010). An increasing number of guidelines now include recommendations for the use of IGRAs in the differential diagnosis of active TB in low-risk TB settings (Denkinger et al., 2011). China is a developing country with high TB prevalence and widespread BCG vaccination, and the TST has been shown to have poor specificity owing to cross-reaction with BCG vaccination (Richeldi, 2006). However, while IGRAs have been widely used to detect Mtb infection with high specificity and sensitivity, the high risk of Mtb infection and the high cost of these assays limit their use in clinical practice in China, especially in low-income rural areas.

For the first time, a newly licensed PBMC-based IGRA named TS-SPOT was used for the diagnosis of active TB, and its performance in detecting Mtb infection was compared with that of T-SPOT.TB as a control. The results showed that the sensitivity, specificity, PPV, NPV, LR+, LR−, and diagnostic efficiency of this new assay for ATB diagnosis were 80.00%, 83.45%, 83.78%, 83.45%, 4.83, 0.24, and 81.67%, respectively, all similar to those of the imported T-SPOT.TB assay, which had values of 76.77%, 85.52%, 85.00%, 85.51%, 5.30, 0.27, and 81.00%, respectively (Table 2). Consequently, there was nearly complete concordance, with an excellent Kappa value of 0.91, between these two PBMC-based IGRAs, TS-SPOT and T-SPOT.TB.

In direct comparison of the two IGRA assays, TS-SPOT detected a higher number of ATB patients than T-SPOT.TB (124 vs. 119; Table 2), indicating a higher sensitivity of the TS-SPOT assay (80.00% vs. 76.77%) for the diagnosis of active tuberculosis. The Venn diagrams showed that most of the positive ATB cases (117/126) had consistent results between the two assays. The finding of 7 ATB cases with TS-SPOT+/T-SPOT.TB− results further indicated that the TS-SPOT assay may have a higher sensitivity for the diagnosis of active TB than T-SPOT.TB (Figure 2A), although the difference was not statistically significant. This may be due to the inclusion of the third Mtb-specific antigen Rv3615c as a stimulus alongside ESAT-6 and CFP10 in the TS-SPOT assay; the additional positive cases may have had unique Rv3615c-specific responses.
Similarly, TS-SPOT also showed a higher positive rate than T-SPOT.TB in the NATB controls (24 vs. 21; Table 2), with 4 additional cases showing TS-SPOT+/T-SPOT.TB− results (Figure 2B). This resulted in a lower specificity (83.45% vs. 85.52%) for the diagnosis of active tuberculosis when using non-TB patients with pulmonary diseases and healthy individuals as the control population (Table 2). Given the high prevalence of Mtb infection in China, it is plausible that the positive responses of control individuals were due to LTBI. However, evaluating the accuracy of IGRAs in diagnosing LTBI remains a problem, since there is no "gold standard" for such diagnoses. In fact, when analyzing the background information, we noted that all four additional positive subjects (TS-SPOT+/T-SPOT.TB−) were respiratory physicians, thoracic surgeons, or ATB contacts who had no symptoms but were at high risk of tuberculosis infection. Accordingly, the seemingly lower specificity for the diagnosis of active TB possibly reflected a higher sensitivity for the detection of latent TB infection. However, there were also 2 ATB and 1 NATB cases with TS-SPOT−/T-SPOT.TB+ results, which remain unexplained.

When its performance for the diagnosis of pulmonary TB was further analyzed using bacteriological tests as the "gold standard", the TS-SPOT assay showed up to 90% (80/89) agreement in the smear-/culture-positive cases and a 77.78% (91/117) positive rate in the smear-/culture-negative cases, which was significantly higher than the 36.89% (38/103) positive rate of sputum smear or the 49.51% (51/103) rate of bacterial culture (Table 4). Admittedly, negative results were seen in 7 patients with sputum- and/or culture-positive TB. Nevertheless, those with false-negative TS-SPOT results were either older or showed fewer bacterial DNA copies than AFB- and TS-SPOT-positive patients (Table 5). This was also reported previously in studies using an in-house IFN-γ ELISPOT assay (Chen et al., 2009). Thus, these findings indicate that the TS-SPOT assay could be a useful tool for the rapid diagnosis of active tuberculosis, given the low sensitivity of sputum smear and the time-consuming nature of Mtb culture.

It should be noted that the sensitivity of IGRAs may vary among different forms of tuberculosis. Consistent with a recent study reporting a lower sensitivity of the T-SPOT.TB assay for the diagnosis of extra-pulmonary TB compared with PTB (Wang et al., 2015), we also observed a lower magnitude of IFN-γ responses in patients with extra-pulmonary TB than in those with PTB (Figure 3A), which resulted in a higher positive rate in PTB cases (84.75%) than in EPTB cases (64.86%; Table 2). Our results also showed that the magnitude of IFN-γ responses did not differ significantly between EPTB and non-TB controls (Figure 3A) or among the various EPTB forms (Figure 3D). A similar observation was made in a study of Vietnamese patients, which argued against the use of the blood IFN-γ ELISPOT assay for the diagnosis of tuberculous meningitis (Simmons et al., 2006). The low IFN-γ responses seen in patients with severe TB reflect the complex roles of IFN-γ in immunity and in the pathogenesis of TB, and a comprehensive understanding of IFN-γ responses should lead to better clinical indications, applications, and utilization of IFN-γ as a biomarker (Andersen et al., 2007).
Although relatively low IFN-γ responses were seen in patients with EPTB compared with PTB cases, the positive rate of TS-SPOT in the EPTB cases was still high, up to 64.86% (Table 2). This was significantly higher than the sensitivity (8/37, 21.62%) of the bacteriological tests based on body fluid samples (Figure 1). Thus, the TS-SPOT assay may be a useful adjunct to current tests for the rapid diagnosis of extra-pulmonary TB. The sensitivity might be substantially improved by using body fluid samples from EPTB cases instead of, or in addition to, blood-derived PBMCs; Wang et al. reported that T-SPOT.TB showed high sensitivity for the diagnosis of tuberculous pleurisy and peritonitis (82.35% and 80%, respectively) and high specificity (75% and 100%, respectively) when pleural effusion and ascites fluid were used as samples (Wang et al., 2015).

Taken together, the evidence from the current study shows that TS-SPOT, a recently licensed IFN-γ release assay in China, has high sensitivity and specificity for the diagnosis of active TB, with essentially excellent agreement with the widely available T-SPOT.TB assay. Additionally, it may aid the clinical detection and diagnosis of Mtb infection in patients with smear- and/or culture-negative TB and extra-pulmonary TB. Owing to its cost-effectiveness and high quality, this assay may be a very useful tool for TB control, especially in low-income and high-incidence settings.
Noise Removing in Medical Images by using Image Fusion Method

Image processing has also made great progress in medical applications. Medical practitioners widely use digital images during disease diagnosis. State-of-the-art medical equipment produces images of different organs, which are used at different stages of disease; such medical images include X-rays, CT scans, MRI, and ultrasound images. These images are produced by high-frequency waves and contain some noise, which is generated by the scattering of the high frequencies. This noise is called speckle. Speckle degrades image quality and hence the information the image carries, and even a small error may mislead the practitioner during disease diagnosis. Hence, speckle removal plays a vital role in medical imaging. This paper gives a detailed study of PCA image fusion for noise removal.

INTRODUCTION

Medical imaging is the visual representation of internal body structures, tissues, and organs. Nowadays, image processing plays a vital role in medical practice. MRI, CT scans, ultrasound images, and X-rays are medical images used for disease detection; they help the practitioner treat the disease. Ultrasound (US) uses high-frequency sound waves to characterize different tissues; properties such as compression, reflection, and impedance are used to identify and characterize them. Due to random fluctuations of the back-scattered waves, a noise known as speckle is generated in the ultrasound image. Speckle decreases contrast resolution, which makes it difficult to detect small or low-contrast structures during diagnosis. The main advantage of reducing speckle is to provide the radiologist with a better view of the ultrasound image by reducing the noise without destroying important features. This paper gives an algorithm for removing noise from digital medical images and compares the noisy input image with the denoised output image generated by the image fusion technique.

Ultrasound consists of sound waves at frequencies above the audible range of humans (>20,000 Hz). When an ultrasound image is taken, high-frequency pulses are sent into the tissues using a probe. The echoes of these pulses, shaped by the different reflection properties of the tissues, are recorded and displayed as an image. These sound waves scatter from internal organs, tissues, and obstacles, and this scattering produces an unwanted signal in the image. There are two types of scattering: diffuse scattering and coherent scattering. Diffuse scattering arises from scatterers with random phase, whereas scattering that is in phase with the ultrasound beam is coherent scattering and causes dark spots in the image. These dark spots, together with bright and out-of-phase scatterers, are the unwanted signals in an image called speckle. Speckle degrades image quality and can cause loss of information within the image, which can make it much harder for the radiologist to detect disease. Speckle reduction yields a denoised image from which disease can be diagnosed more easily.
The main advantage of reducing speckle is to give the radiologist a noise-free image with a view good enough to detect even small issues in the body.

II. MATERIALS AND METHODS

US Images: Ultrasound images are used in medical applications to detect swellings, pain, tumors, and other health issues of the internal organs of the human body. One of the most important uses of ultrasound imaging is monitoring the progress of a fetus in a pregnant woman: it shows the development of the skin, heart, brain, liver, and lungs, records the heartbeat, and can even detect abnormalities in the fetus.

PCA is a method that produces a fused image without losing information. PCA simplifies a multidimensional dataset to lower dimensions for analysis, visualization, or data compression; it represents the data in a new coordinate system whose basis vectors follow the modes of greatest variance in the data.

The figure above shows the proposed system. The input image is loaded in the first stage of processing. Preprocessing is then carried out by histogram equalization, which transforms low-contrast areas into higher-contrast areas. Speckle noise is multiplicative in nature, so it is important to convert the multiplicative noise into additive noise using a log transformation. Neglecting the additive Gaussian noise, the model is:

f(a, b) = f0(a, b) · ηm(a, b) + ηa(a, b)

where f(a, b) is the noisy image, f0(a, b) is the noise-free image, and ηa(a, b) and ηm(a, b) are the additive and multiplicative noise, respectively. Since the additive noise has much lower values than the multiplicative noise, ignoring the additive term gives the speckle-only model:

f(a, b) = f0(a, b) · ηm(a, b)

The image fusion method is applied to the outputs of the filters. The PCA technique is used for image fusion, which has the advantage of reducing complexity when grouping images. Because noise reduction removes the largest variations, small variations in the background are ignored automatically. The PCA algorithm is applied to the outputs of filter 1 and filter 2, and then to the outputs of filter 3 and filter 4, giving two fused images. The filters used are the median filter, Lee filter, Butterworth filter, and Bayes filter, described after the code sketch below.
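To make the fusion step concrete, here is a minimal Python sketch of log-domain filtering followed by PCA-weighted fusion of two filter outputs. The specific filters used (SciPy's median and uniform filters) are stand-ins chosen for availability, not the exact implementations used in this paper.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter


def pca_fuse(img1, img2):
    """Fuse two source images using PCA weights: the components of the
    principal eigenvector of their joint covariance matrix, normalized
    to sum to 1, weight the two inputs."""
    cov = np.cov(np.stack([img1.ravel(), img2.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # principal component
    w = v / v.sum()                             # fusion weights
    return w[0] * img1 + w[1] * img2


def despeckle(noisy, eps=1e-6):
    """Log-transform (multiplicative speckle -> additive noise), filter with
    two different filters, fuse with PCA, then invert the log transform."""
    log_img = np.log(noisy.astype(float) + eps)
    f1 = median_filter(log_img, size=3)
    f2 = uniform_filter(log_img, size=3)
    fused = pca_fuse(f1, f2)
    return np.exp(fused) - eps
```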
The Lee filter is better at edge preservation than the other filters. It operates on a variance basis: when the variance is low it performs strong smoothing, but when the variance is high it smooths little. It is adaptive in nature, preserving detail in both low- and high-contrast images, and reduces speckle noise by applying a spatial filter to each pixel. The mathematical model for the Lee filter is:

Img(i, j) = Im + W · (Cp − Im)

where Im is the local mean, Cp is the center pixel value, and W is the weighting coefficient.

The median filter replaces the center pixel value with the median of all the pixels in its neighborhood. It is a nonlinear filter, and this property makes it useful for reducing impulse noise in an image. Bayes filters are used for filtering and smoothing the image. Another family of smoothing/sharpening filters is the Butterworth filters; an advantage of the Butterworth filter is that the sharpness of the filter can be controlled through its order. A Butterworth low-pass filter of order n and cutoff frequency D0 is defined (in the standard form) as:

H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))

where D(u, v) is the distance from the origin of the frequency plane.

The PCA algorithm is then applied again to the two fused outputs, which gives the denoised image. After that, the inverse log transform is performed to obtain the final denoised output, with the speckle reduced compared to the original image.

III. RESULTS AND DISCUSSION

To check the quality and performance of the proposed work, several comparison parameters are calculated between the input ultrasound image and the output denoised image: signal-to-noise ratio, mean square error, structural similarity, and peak signal-to-noise ratio.

A. Signal-to-noise ratio (SNR): The SNR indicates the amount of noise in an image. A high SNR means the image is of good quality, with low noise and more information; a low SNR indicates a noisy image. It is expressed mathematically as SNR (dB) = 10 log10(Psignal / Pnoise).

B. Mean square error (MSE): The MSE measures the noise in an image and should be low. A lower MSE indicates a better noise-free image, whereas a higher MSE indicates noise in the image and a filter that is not smoothing or reducing noise effectively.

C. Structural similarity index measure (SSIM): The SSIM gives the structural similarity between two images. It always lies between 0 and 1; a higher SSIM indicates better quality or similarity.

D. Peak signal-to-noise ratio (PSNR): The PSNR is defined as the ratio of the maximum possible power of a signal to the power of the corrupting noise, calculated as PSNR = 10 log10(MAX^2 / MSE), where MAX is the maximum possible pixel value.

Result analysis of the US images: the tables above indicate that all parameters of the output images fulfil the given conditions with respect to the input images.

IV. CONCLUSION

The main aim of the proposed work is to detect and reduce the noise contained in an ultrasound image. This paper introduces a PCA image fusion algorithm for reducing speckle noise. Comparison parameters such as MSE, SNR, SSIM, and PSNR confirm that the output image has reduced noise compared to the input image. Hence, we can conclude that the proposed algorithm is capable of reducing noise in an ultrasound image and gives the diagnostician a better-quality image. It enhances the quality of the ultrasound image, recovers useful information that speckle had obscured, and provides the radiologist with a better view of the ultrasound image, without destroying important features, for diagnosing disease.
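As a brief supplement to Section III, the evaluation metrics used above can be sketched in Python as follows (SSIM via scikit-image, assuming 8-bit images; the helper names are ours):

```python
import numpy as np
from skimage.metrics import structural_similarity


def mse(ref, out):
    """Mean square error between reference and output images."""
    return np.mean((ref.astype(float) - out.astype(float)) ** 2)


def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(ref, out)
    return float("inf") if m == 0 else 10 * np.log10(max_val**2 / m)


def snr_db(ref, out):
    """Signal-to-noise ratio in dB, treating (ref - out) as the noise."""
    noise = ref.astype(float) - out.astype(float)
    return 10 * np.log10(np.sum(ref.astype(float) ** 2) / np.sum(noise**2))


def ssim(ref, out):
    """Structural similarity index in [0, 1] for 8-bit images."""
    return structural_similarity(ref, out, data_range=255)
```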
Long Noncoding RNAs MALAT1 and HOTTIP Act as Serum Biomarkers for Hepatocellular Carcinoma

Background: Circulating tumor markers with satisfactory sensitivity and specificity play crucial roles in cancer diagnosis and therapy. This prospective study aimed to evaluate the potential of circulating lncRNAs as biomarkers for hepatocellular carcinoma (HCC).

Methods: A total of 74 patients with HCC and 94 healthy controls were enrolled. The expression levels of candidate genes in serum were detected by qRT-PCR. Receiver operating characteristic (ROC) curve analysis and logistic regression were employed to investigate the diagnostic capacity of lncRNAs. The analysis of 3-year overall survival (OS) was conducted using the Kaplan-Meier method and log-rank test.

Results: Of the 9 candidate genes, 6 lncRNAs could be stably detected in serum. The expression levels of circulating MALAT1 and HOTTIP in HCC patients were significantly higher than those in controls (P < 0.001). ROC analysis showed that MALAT1 and HOTTIP were more effective than alpha-fetoprotein (AFP) (P < 0.010) in the diagnosis of HCC, with AUCs of 0.896 and 0.899, respectively. Additionally, a panel consisting of MALAT1, HOTTIP, and AFP was constructed to obtain an AUC of 0.968 with a sensitivity of 87.8% and specificity of 94.7% in HCC diagnosis. Moreover, the upregulation of MALAT1 was not only related to multiple tumor lesions, HCV infection, AST level, and AFP level, but also suggested shorter OS. A high expression level of HOTTIP was associated with metastasis.

Conclusion: Serum MALAT1 and HOTTIP play indicative roles as non-invasive biomarkers for HCC.

Introduction

Hepatocellular carcinoma (HCC) is one of the most fatal malignant tumors, ranking fifth in incidence and third in tumor-related deaths worldwide. The annual new cases of and deaths due to HCC in China account for approximately 51% of the world total.1 Because primary hepatic cancer progresses insidiously and typically lacks early clinical symptoms, most patients have lost the opportunity for radical treatment by the time they seek medical consultation, resulting in a 5-year survival rate of less than 20%.2 Therefore, improving the screening and diagnosis of HCC is particularly important for improving patient prognosis. Alpha-fetoprotein (AFP), a serum tumor marker for HCC, has been widely used clinically. However, it remains challenging to diagnose HCC at an early stage with AFP, whose sensitivity is limited to 65%, and to less than 40% for preclinical prediction.3 PIVKA-II, an immature form of prothrombin, has recently been proposed as an effective serum HCC marker. However, the level of PIVKA-II can be influenced by various factors, such as vitamin K deficiency, coagulation disorders, and liver diseases.4 Therefore, finding novel biomarkers with higher specificity and sensitivity is necessary for HCC screening and diagnosis.

Long noncoding RNAs (lncRNAs) are a group of nucleic acid sequences with lengths of more than 200 nucleotides and no protein-coding ability.5 Accumulating evidence indicates that lncRNAs are involved in tumor occurrence and progression.6-11 In addition, lncRNAs have been confirmed to have satisfactory stability in plasma.12 Hence, finding appropriate lncRNAs in the blood circulation to serve as markers represents a promising clinical research direction.
Although circulating lncRNAs have been reported as biomarkers for HCC, different researchers have yielded inconsistent results. Therefore, more experiments will help promote progress in this field. We conducted the present study to identify appropriate lncRNAs to serve as circulating HCC markers. Furthermore, their diagnostic efficacy and predictive value as noninvasive markers for HCC were also evaluated.

Patients

A total of 168 participants were enrolled in this study, namely, 74 consecutive patients diagnosed with HCC at our institution from October 2018 to August 2019, and 94 healthy controls. All patients in the HCC group were confirmed by pathological examination or clinical diagnosis according to the World Health Organization (WHO) criteria, and patients who had received chemotherapy or radiotherapy were excluded. Data were collected prospectively on all patients from the medical records and personal interviews, including demographics, tumor features, laboratory data, and follow-up. Tumor features included number, size, and vascular invasion. Laboratory data included levels of albumin (ALB), alanine transaminase (ALT), aspartate transaminase (AST), total bilirubin (TBIL), prothrombin time (PT), hepatitis B surface antigen (HBsAg), anti-hepatitis C virus (anti-HCV), and AFP. Patients in the HCC group received ablation or chemoembolization treatment, chosen according to the Barcelona Clinic Liver Cancer (BCLC) approach. The follow-up time in the HCC group ranged from 4 to 36 months (median 16 months). The research protocol was approved by the Ethics Committee of The First Hospital of China Medical University (2018-215-2). Written informed consent was obtained from every participant, and all patient details have been de-identified. The reporting of this study conforms to the REMARK guidelines.13

Plasma Samples

Whole-blood samples obtained with ethylenediaminetetraacetic acid (EDTA) anticoagulation were centrifuged at 3000 r/min for 10 min at 4°C to separate the blood cells. The supernatants were collected and centrifuged at 12,000 r/min for 10 min at 4°C to completely remove cellular components. Plasma was then collected and stored at -80°C for further use.

RNA Extraction

Total RNA was extracted from 400 μL of cell-free plasma using a mirVana PARIS Kit (Ambion, Austin, TX, USA) and eluted with 100 μL of preheated (95°C) elution solution according to the manufacturer's protocol, as described previously.14 RNA samples were stored at -80°C until further analysis.
Quantitative Real-Time PCR (qRT-PCR)

Nine candidate lncRNAs were selected on the basis of previous reports.[16][17][18][19][20][21] Reverse transcription reactions were carried out on 100 μL of total RNA using the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa, Shiga, Japan). The quality of the RNA samples was assessed with a NanoDrop 2000 spectrophotometer (Thermo, CA, USA), and the 260/280 nm absorbance ratio was limited to 1.8-2.0. Real-time PCR was then performed using SYBR Premix EX Taq II (TaKaRa, Shiga, Japan). GAPDH was evaluated as a housekeeping gene for the qPCRs. The primers used are listed in Table 1. All reactions were carried out on a LightCycler 480 Real-Time PCR System (Roche, Basel, Switzerland) according to the manufacturer's instructions. A melt-curve analysis was carried out at the end of each reaction to verify specificity. The 2^-ΔΔCt method was used to evaluate the relative expression levels of the candidate genes in HCC blood samples in comparison with healthy controls. Each sample was run in triplicate. Owing to the deficiency of normal samples, normal liver tissue samples in the GTEx database (https://xenabrowser.net/datapages/) were used. In addition, the clinical and prognostic information was obtained from the same website.

Statistical Analysis

Continuous variables are expressed as means ± SDs, and categorical variables are expressed as numbers. Differences were compared using the t test for continuous variables and the chi-square test or Fisher's exact test for categorical variables. If lncRNAs were found to be significant, receiver operating characteristic (ROC) curve analysis was conducted to obtain the cutoff value, sensitivity, and specificity. Combined ROC analysis was conducted on the basis of a logistic regression model. Pearson's correlation analysis was used to reveal the correlation between serum lncRNA levels and biochemical parameters. Three-year overall survival (OS) was calculated using the Kaplan-Meier method, and differences were compared using the log-rank test. The sample size justification for the validation set was performed using an online sample size calculator (https://www.trialdesign.org/) with an alpha value of 0.05 and power of 90%. The expression levels from the TCGA and GTEx datasets were compared between the two groups with Wilcoxon's test using R software (https://www.r-project.org/). Statistical analysis was performed using SPSS Statistics 26.0 (IBM SPSS, Shanghai, China) and GraphPad Prism 9.0 (GraphPad Software, LLC., Boston, MA, USA). A two-sided P value of <0.050 was considered statistically significant.

Patient Characteristics

A total of 168 participants were enrolled in this study, namely, 74 patients in the HCC group and 94 healthy controls. Table 2 summarizes the clinical baseline characteristics of all participants. There were no significant differences in age, sex, smoking, alcoholism, ALB level, TBIL level, or PT between the two groups at baseline. Compared with the control group, the AST and AFP levels in the HCC group were significantly increased (P < 0.050), and the ALT level was slightly higher. Of the HCC group patients, 54.05% had a single intrahepatic lesion, the others had multiple masses, and 13.51% had portal vein invasion. The proportions of subjects with BCLC stage 0, A, B, C, and D HCC were 2.70%, 36.49%, 24.32%, 36.49%, and 0%, respectively.
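The 2^-ΔΔCt calculation described in the qRT-PCR methods above is a straightforward formula; a minimal Python sketch (with hypothetical Ct values, for illustration only) is:

```python
def rel_expression_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(GAPDH) within each group;
    ddCt = dCt(sample) - dCt(control); relative expression = 2^-ddCt."""
    ddct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-ddct)


# Hypothetical Ct values, for illustration only:
print(rel_expression_ddct(24.1, 18.0, 27.3, 18.2))  # 2^-(6.1 - 9.1) = 8.0
```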
Screening of lncRNAs Related to HCC in the Training Set

After all serum samples were numbered, 20 pairs of samples from the HCC group and the control group were randomly selected and assigned to the training set, and the remaining samples after randomization formed the validation set. The characteristics of the training and validation sets are listed in Table 3. The results showed that 6 lncRNAs (HOTAIR, MVIH, MALAT1, H19, HOTTIP, and HEIH) could be stably detected in the plasma of HCC patients and healthy controls; the other 3 lncRNAs were excluded because of their low detection levels (Figure 1). In addition, compared with the control group, the serum expression levels of MALAT1 and HOTTIP were significantly upregulated in HCC patients (P < 0.050). There was no significant difference in the expression levels of the remaining 4 lncRNAs (HOTAIR, MVIH, HEIH, and H19) between the HCC group and the control group (Table 4, Figure 2).

Confirmation of the Selected lncRNAs in the Validation Set

The sample size justification for the validation set was based on the results from the training set. The recommended minimum sample sizes for MALAT1 and HOTTIP were 73 and 69, respectively. The sample size of the validation set in this study was therefore considered sufficient to reflect the differences in the selected genes. To validate the accuracy and specificity of MALAT1 and HOTTIP as HCC biomarkers, we next examined the expression levels of the two selected lncRNAs in the validation set composed of the 128 remaining samples. As shown in Figure 3, the relative expression levels of MALAT1 and HOTTIP in the plasma of HCC patients were significantly higher than those in healthy controls, consistent with the results in the training set. Finally, we merged the total samples for further analysis.
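The sample size justification above (alpha = 0.05, power = 90%) was done with an online calculator; an equivalent two-sample power calculation can be sketched with statsmodels. The effect size below is hypothetical, chosen only to illustrate the call, not the value the authors derived from the training set.

```python
from statsmodels.stats.power import TTestIndPower

# Per-group n for a two-sided two-sample t test; effect_size is hypothetical.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.55, alpha=0.05, power=0.90, ratio=1.0, alternative="two-sided"
)
print(round(n_per_group))  # roughly 70 per group for this assumed effect size
```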
Relationship Between Plasma lncRNA Expression Levels and Clinical Variables

To further explore the value of MALAT1 and HOTTIP overexpression in HCC patients, the correlation between lncRNA expression levels and clinical variables was analyzed. Patients were categorized into high- and low-expression groups according to the median serum MALAT1 and HOTTIP levels, respectively. Clinical data were classified into positive and negative groups based on different criteria. As shown in Table 5, the expression level of MALAT1 in plasma was positively correlated with multiple tumor lesions (P = 0.005) and HCV infection (P = 0.032). Patients with high serum HOTTIP expression levels showed increased metastasis formation (P = 0.042). Metastatic status was defined as either local lymph node infiltration or distant organ involvement. There was no obvious relationship between the expression levels of the two lncRNAs and other clinical characteristics, such as sex, age, tumor size, vascular invasion, HBV infection, and Child-Pugh score. In addition, the relative expression level of MALAT1 was positively correlated with the levels of AFP (r = 0.301, P = 0.019, Figure 4A) and AST (r = 0.312, P = 0.015, Figure 4D) in the 74 HCC patients. There was a slight positive correlation between the levels of ALT and MALAT1, but it was not significant (r = 0.239, P = 0.064, Figure 4C). No association between serum HOTTIP levels and biochemical variables was observed (Figure 5). Notably, the expression levels of the measured lncRNAs may be affected by confounding factors such as liver disease severity; we observed significantly higher AST values in the HCC group than in controls and a positive correlation between MALAT1 expression levels and AST. However, Child-Pugh scores, which represent liver function classes, were not found to correlate with MALAT1 or HOTTIP expression levels.

Diagnostic Value of MALAT1 and HOTTIP in HCC Detection

ROC curve analysis was conducted to evaluate the efficacy of the serum lncRNAs MALAT1 and HOTTIP as diagnostic indicators for HCC detection. As shown in Figure 6, when HCC patients were tested against healthy controls, the AUC for MALAT1 was 0.896 (95% CI: 0.848-0.945), with a sensitivity of 75.7% and specificity of 92.6% at the cutoff value of 0.0449. The AUC of plasma HOTTIP for distinguishing HCC patients from healthy controls was 0.899 (95% CI: 0.854-0.944) at the cutoff value of 0.0191, with optimal sensitivity and specificity values of 94.6% and 73.4%, respectively. AFP, as a commonly used biomarker for HCC screening, was also evaluated as the control. The AUC of AFP was 0.763 (95% CI: 0.685-0.842), with a sensitivity and specificity of 63.5% and 88.3%, respectively. The comparison of diagnostic efficacy revealed that MALAT1 and HOTTIP were more effective than AFP (P < 0.010). Moreover, the combination of each lncRNA with AFP achieved higher predictive power than AFP alone; the merged AUC was 0.924 (95% CI: 0.881-0.967) for MALAT1 with AFP.

Prognostic Implication of Plasma MALAT1 and HOTTIP Expression for Overall Survival

OS was defined as the time from diagnosis until death or the last follow-up, which largely represents prognosis. To investigate the predictive value of serum MALAT1 and HOTTIP, a 3-year OS analysis was conducted on the 74 HCC patients using Kaplan-Meier analysis and log-rank tests. High- and low-expression groups were divided according to the median expression levels of the lncRNAs in plasma. Patients with HCC in the high serum MALAT1 expression group had significantly shorter OS (Figure 7A, P = 0.002). No significant difference in prognosis between patients with high and low expression levels of HOTTIP was observed (Figure 7B, P = 0.284).
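A sketch of the OS comparison described above, using the lifelines package; the DataFrame column names ("os_months", "death") are hypothetical, and groups are split at the median expression level as in the text.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test


def os_by_median_expression(df: pd.DataFrame, gene: str):
    """Kaplan-Meier curves and log-rank P for high vs. low serum expression."""
    high = df[gene] >= df[gene].median()
    km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
    km_high.fit(df.loc[high, "os_months"], df.loc[high, "death"], label=f"{gene} high")
    km_low.fit(df.loc[~high, "os_months"], df.loc[~high, "death"], label=f"{gene} low")
    result = logrank_test(
        df.loc[high, "os_months"], df.loc[~high, "os_months"],
        event_observed_A=df.loc[high, "death"], event_observed_B=df.loc[~high, "death"],
    )
    return km_high, km_low, result.p_value
```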
Validation of the Predictive Role of MALAT1 and HOTTIP in Hepatocellular Carcinoma

To further explore the diagnostic and prognostic effects of MALAT1 and HOTTIP, the mRNA expression of tumor tissues from patients with hepatocellular carcinoma in TCGA (n = 152) and normal liver tissues in the GTEx dataset (n = 152) were collected as external validation. The expression levels of MALAT1 and HOTTIP were significantly higher in hepatocellular carcinoma tissues than in normal liver tissues (Figure 8A-B). Consistently, patients with hepatocellular carcinoma exhibiting high expression of MALAT1 had significantly shorter OS, and patients exhibiting high expression of HOTTIP also showed a trend toward poor prognosis, which further supports the reliability of these two lncRNAs as screening and prognostic predictors (Figure 8C-D).

Discussion

HCC is a common, life-threatening clinical disease with high morbidity and mortality.22 Currently, AFP and PIVKA-II are biomarkers for HCC screening and diagnosis. However, their sensitivity and specificity are not ideal in clinical applications, particularly for early-stage HCC; as a result, only 20% of HCC patients receive curative treatment through surgical resection, ablation, or liver transplantation.3 Therefore, it is important to find promising biomarkers for HCC to improve patient prognosis. Several lncRNAs have been reported to be highly expressed in HCC tissue, and some of them can be detected in serum.23 However, the results from different studies are not entirely consistent.24-27 Li et al reported the relative expression levels of 8 HCC-associated lncRNAs, noting a high expression level of HULC and a low expression level of UCA1 in HCC patients, but no significance for MALAT1 in HCC patients in contrast to controls.28 Luo et al also reported the expression level of H19, finding no difference between HCC patients and controls.29 Therefore, identifying appropriate lncRNAs and employing them to screen and diagnose HCC still needs further investigation. The present study revealed that the lncRNAs MALAT1 and HOTTIP were significantly overexpressed in the serum of HCC patients, and both of them showed a higher diagnostic value than AFP. Additionally, the combination of MALAT1, HOTTIP, and AFP demonstrated the best predictive power. At present, a detection kit based on a combination of 7 microRNAs has already been used for the early diagnosis of HCC.30 MALAT1 and HOTTIP, which are also noncoding RNAs, are expected to constitute an lncRNA panel for HCC diagnosis and prognosis in the future.

This study compared serum lncRNA expression between patients with HCC and healthy controls, resulting in the identification of two lncRNAs as candidate biomarkers for HCC diagnosis. As a common clinical biomarker in HCC diagnosis, AFP was used as the control. The AUC of AFP in our study was similar to those in previous studies.29,31,32
Importantly, both MALAT1 and HOTTIP showed increased discriminatory power for distinguishing HCC patients from non-HCC individuals, with AUCs of 0.896 and 0.899, respectively (MALAT1 vs AFP, P = 0.002; HOTTIP vs AFP, P = 0.003). These results indicate that MALAT1 and HOTTIP are more effective than AFP, with satisfactory potential as novel biomarkers for HCC diagnosis. Moreover, we further explored the combined application of MALAT1, HOTTIP, and AFP to improve the diagnostic efficacy for HCC. The panel combining MALAT1, HOTTIP, and AFP achieved the best discrimination power (AUC: 0.968, 95% CI: 0.945-0.991) for HCC. Therefore, combining serum lncRNAs, including MALAT1 and HOTTIP, with AFP is of great significance for improving the identification of HCC. It is important to point out that, although the AUC of the panel is larger than that of the combination of MALAT1 and HOTTIP, there is no significant difference between the two formulas (panel vs MALAT1 and HOTTIP, P = 0.142). Analyzing the regression coefficients of the three markers in the panel formula (Y = -3.6 + 22.275 × MALAT1 + 13.941 × HOTTIP + 0.022 × AFP; a code sketch applying this formula appears below), the coefficient of AFP was lower than those of the two lncRNAs by a large margin, indicating its limited role in the diagnosis of HCC. The regression coefficient is a parameter that represents the magnitude of the influence of the independent variable X on the dependent variable Y. Thus, despite the addition of the marker AFP, there was no statistical difference between the panel and the combination of MALAT1 and HOTTIP.

MALAT1, also known as metastasis-associated lung adenocarcinoma transcript 1, has a length of more than 8000 nt, is located on chromosome 11q13, and is highly conserved across multiple species.33 Tripathi et al showed that MALAT1 can affect the phosphorylation of serine/arginine (SR) proteins at the cellular level, thereby regulating the splicing of pre-mRNA and playing an important role in a variety of tumor biological behaviors.34 Here, we revealed that serum MALAT1 was significantly upregulated in HCC patients, consistent with previous studies.14,25 Moreover, we investigated the correlation between circulating MALAT1 and the clinicopathological features of HCC patients. A high level of MALAT1 expression in plasma was positively correlated with multiple tumors and hepatitis C virus infection. Moreover, a significant association between serum MALAT1 and AFP as well as AST was observed in patients with HCC. MALAT1, one of the first lncRNAs identified, has been reported to act as an important metastasis-relevant oncogene.35 Meta-analyses and multiomics analyses have demonstrated the upregulation of MALAT1 in HCC tissues and indicated that the overexpression of MALAT1 is significantly related to tumor number and AFP.25,36,37 However, there are few studies on the expression of MALAT1 in the plasma of patients with HCC, and the conclusions are inconsistent. Konishi et al confirmed that plasma MALAT1 levels were progressively and significantly elevated in hepatic disease as well as in HCC patients and were associated with liver damage but not with AFP. The researchers concluded that plasma MALAT1 might be derived not only from HCC tissues but also from hepatocytes damaged by hepatitis virus infection, steatosis, and other hepatic diseases.14 Toraih et al reported that serum MALAT1 overexpression in HCV-related HCC had an AUC of 0.79 for distinguishing cancer patients from healthy controls. Correlation analysis showed a positive correlation of MALAT1 with TBIL and AST.25
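The reported panel formula can be applied directly to new measurements; the sketch below uses the coefficients quoted above. The input scales (relative serum expression for the lncRNAs and AFP in ng/mL) are our assumption about how the model was fit.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def panel_score(malat1, hottip, afp):
    """Logistic-regression panel from the text:
    Y = -3.6 + 22.275*MALAT1 + 13.941*HOTTIP + 0.022*AFP"""
    return (-3.6 + 22.275 * np.asarray(malat1)
            + 13.941 * np.asarray(hottip) + 0.022 * np.asarray(afp))


# With labels y (1 = HCC, 0 = control), the panel AUC is:
# auc = roc_auc_score(y, panel_score(malat1, hottip, afp))
```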
Huang et al demonstrated a significantly higher level of MALAT1 in HCC patients than in healthy controls, whereas no association between serum MALAT1 levels and clinicopathological characteristics was found.24 Our study not only demonstrated the upregulation of serum MALAT1 in HCC patients but also confirmed the association between serum MALAT1 and clinicopathological features, including tumor number, HCV infection, AST level, and AFP level, highlighting the clinical significance of MALAT1 as a biomarker. MALAT1 can enhance cellular proliferation and inhibit apoptosis by modulating the PI3K/AKT and JAK/STAT signaling pathways, upregulating the LTBP3 gene, and inhibiting caspase-3/7 activity.25 A putative mechanism for MALAT1 upregulation could be the transcriptional activation of MALAT1 via the transcription factor specificity protein Sp1/3 in HCC cells. Zhou et al reported that MALAT1 could promote HCC metastasis through peripheral vascular infiltration by inhibiting the level of miRNA-613, which may explain the significant association of MALAT1 with multiple tumor lesions and elevated AFP reported in several studies.11,36 Hepatitis C virus infection not only causes liver cell damage and elevated transaminase levels but is also related to the occurrence of HCC. Toraih et al reported that the serum MALAT1 profile was positively correlated with hepatic failure scores and confirmed the oncogenic role of MALAT1 in HCV-induced HCC in a subsequent meta-analysis.25 Assal et al reported that MALAT1 could target the CD155/TIGIT and PD-1/PD-L1 axes, facilitating HCV-related HCC development by evading immune surveillance.38 Shao et al indicated that MALAT1 is most likely involved in regulating HCV-related HCC processes by acting as a ceRNA in the hsa-miR-193a-3p/BUB1 axis, which was confirmed in further in vitro experiments.39 Furthermore, the overall survival analysis with the Kaplan-Meier method showed that patients with HCC in the high serum MALAT1 expression group had significantly shorter OS. Although high expression levels of MALAT1 in HCC tissues have been shown to be associated with poor prognosis, to the best of our knowledge, the current study is the first to report the potential of serum MALAT1 for predicting the prognosis of patients with HCC.

HOTTIP (HOXA transcript at the distal tip) is a long noncoding RNA located at the 5′ end of the HOXA gene cluster. HOTTIP can directly control the expression of HOXA genes by interacting with the WDR5/MLL complex and is associated with metastasis formation and poor patient survival in HCC.40 Aberrant HOTTIP expression has also been reported in other malignancies.[41][42][43][44] However, the serum expression of HOTTIP in HCC has rarely been reported. In the present study, we found that the serum expression of HOTTIP in HCC patients was significantly higher than that in healthy controls and had a better diagnostic efficacy than AFP. Clinicopathological feature analysis showed that abnormal expression of HOTTIP was positively associated with distant metastasis in HCC patients, consistent with previous studies.20,40
Tsang et al concluded that HOTTIP could be a novel oncogenic lncRNA that is negatively regulated by miR-125b and might cis-regulate the expression of its neighboring genes residing in the HOXA cluster in HCC, thereby contributing to hepatocarcinogenesis. In addition, knockdown of HOTTIP inhibited the migratory ability of HCC cells and significantly abrogated lung metastasis in an orthotopic implantation model in nude mice. This may be a potential mechanism for the association between HOTTIP expression and clinical variables.20 A high expression level of HOTTIP in HCC tissues has also been reported as a candidate biomarker for predicting poor prognosis in HCC patients.40,45 However, we did not observe any associations between serum HOTTIP and biochemical parameters or overall survival. Possible reasons may be the lower expression level of HOTTIP in serum than in HCC tissues and the short follow-up period of the present study.

There are several limitations to the current study. First, the experiment has a relatively limited sample size and is a single-center study. The sample consists of consecutively collected cases rather than being based on an a priori sample size calculation; however, we conducted a sample size justification in the validation set using the results of the training set. On the other hand, HCC is a heterogeneous disease, and the expression level of the same gene may vary in different populations. Thus, large-scale, multicenter studies are recommended to evaluate the diagnostic and prognostic potential of lncRNAs in further investigations. Second, apart from AFP, PIVKA-II is also a common clinical biomarker for HCC; comparison of the diagnostic value of lncRNAs with PIVKA-II warrants further research. Third, no HCC tissue or adjacent noncancerous liver tissue samples were obtained. A comprehensive transcriptome analysis of tissue and serum from HCC patients showed a correlation in ncRNAs between serum and tissue samples, suggesting that noncoding RNAs are exported from tumors.46 Although the majority of studies have reported that MALAT1 11,34,47-49 and HOTTIP 20,40,45,50,51 are highly expressed in HCC tissues, with validation in public databases, the expression levels of the target genes in tissues were not verified in the present study. In addition, since the GAPDH expression level is usually not affected under experimental or physiological conditions,52 GAPDH is widely used as an internal control in numerous similar studies.53 However, it is worth noting that there is no "one-size-fits-all" gene for the normalization of gene expression data; therefore, some scholars have proposed that gene expression should be normalized to several housekeeping genes in parallel.54 Finally, we focused on lncRNAs acting as novel biomarkers of HCC in this study, and in vivo and in vitro experiments are recommended to explore the functional roles of MALAT1 and HOTTIP in HCC development in the future.
Conclusion

In summary, we identified that the serum lncRNAs MALAT1 and HOTTIP were significantly highly expressed in patients with HCC and exhibited a better diagnostic value for HCC than the traditional biomarker AFP. High levels of lncRNA expression were associated with clinical features and poor prognosis of HCC patients, showing potential as noninvasive biomarkers for the diagnosis and prognosis of HCC. Additionally, the combination of MALAT1, HOTTIP, and AFP could provide better diagnostic accuracy, with MALAT1 and HOTTIP playing the major role in the diagnostic panel.

Figure 1. Heatmap of lncRNA expression levels in plasma samples from 20 HCC patients and 20 healthy controls in the training set.

Figure 2. Relative expression levels of 6 stably detected lncRNAs in serum. The expression levels of MALAT1 (C) and HOTTIP (E) were significantly upregulated in HCC patients compared with healthy controls. There was no significant difference in the expression levels of the other 4 lncRNAs (A, B, D and F). Data on the relative expression of serum lncRNAs are presented after log10 transformation. *** indicates P < 0.001, **** indicates P < 0.0001.

Figure 3. Comparison of the selected lncRNA expression levels in the validation set. The serum expression levels of MALAT1 (A) and HOTTIP (B) in the HCC group were significantly higher than those in the control group. Data on the relative expression levels of serum lncRNAs are presented after log10 transformation. **** indicates P < 0.0001.

Figure 4. Correlation between serum MALAT1 and biochemical parameters in HCC patients. The expression level of MALAT1 in plasma was positively correlated with the levels of AFP (A) and AST (D). No significant relationships were found between MALAT1 and other biochemical variables (B, C, E and F).

Whether lncRNAs can be employed to screen and diagnose HCC still needs further investigation. The present study revealed that the lncRNAs MALAT1 and HOTTIP were significantly over-expressed in the serum of HCC patients, and both of them showed a higher diagnostic value than AFP. Additionally, the combination of MALAT1, HOTTIP, and AFP demonstrated the best predictive power. At present, a detection kit based on a combination of 7 microRNAs has already been used for the early diagnosis of HCC.30 MALAT1 and HOTTIP, which are also non-coding RNAs, are expected to constitute a lncRNA panel for HCC diagnosis and prognosis in the future.

Figure 7. Cumulative survival rate according to different serum lncRNA expression levels in 74 HCC patients. (A) High serum MALAT1 expression levels result in shorter patient survival. (B) No statistically significant difference was found between the high and low serum HOTTIP expression groups.

Figure 8. Validation of the predictive role of MALAT1 and HOTTIP in TCGA-LIHC cohorts. (A, B) Boxplots show the expression difference of MALAT1 and HOTTIP between HCC and adjacent normal liver tissues. (C) High expression of MALAT1 is correlated with shorter OS in HCC patients. (D) Patients with high HOTTIP expression tend to have a shorter OS.

Table 1. Primers of 9 Candidate lncRNAs and Housekeeping Gene.

Table 2. Baseline Characteristics of Patients in the HCC Group and the Control Group.

Table 3. Baseline Characteristics of the Training and Validation Sets.

Table 4. The 2^−ΔCt Values of 6 Stably Detected lncRNAs in Serum Samples.

Table 5. Clinicopathological Relevance Analysis of MALAT1 and HOTTIP Expression in HCC Patients.
Study on magnetic separation of nanosized ferromagnetic particles

In recent research in medicine and the pharmaceutical sciences, magnetic separation technology using nanosized ferromagnetic particles is essential. For example, in the field of cell engineering, magnetic separation of nanosized ferromagnetic particles is necessary, but separation technology for nanosized particles using magnetic force has not been established. One reason is that the magnetic force acting on the object particles decreases as the particle diameter becomes small, making magnetic separation difficult. In this study, the magnetic force acting on the separation object was enlarged by combining a superconducting magnet with a filter consisting of ferromagnetic particles. As a result of particle trajectory calculations and magnetic separation experiments, it was confirmed that ferromagnetic particles of 15 nm in diameter can be trapped in the magnetic filter under an external magnetic field of 0.5 T. Ferromagnetic particles of 6 nm in diameter, which could not be separated under the same conditions, could be trapped under an external magnetic field of 2.0 T.

Introduction

In recent research in medicine and the pharmaceutical sciences, magnetic separation technology using nanosized ferromagnetic particles has gained importance [1]. In the field of cell engineering, nanosized ferromagnetic particles of several nanometers to 20 nm in diameter are required, and magnetic separation technology for nanosized particles using magnetic force must be established [2][3][4][5]. However, the magnetic force acting on an object ferromagnetic particle decreases as the particle diameter becomes small, making magnetic separation difficult [6]. In this study, to enlarge the magnetic force acting on the object ferromagnetic particles, we designed a magnetic filter consisting of densely packed ferromagnetic particles of 0.3 mm in diameter in a glass tube of 4 mm inner diameter. First, a two-dimensional finite element model of the magnetic filter was constructed, and the magnetic field analysis and the fluid analysis were performed. Based on these results, the particle trajectories of ferromagnetic particles of φ = 6 nm and φ = 15 nm in the vicinity of the magnetic filter were calculated by solving the dynamic equations of the object ferromagnetic particles in order to examine the possibility of magnetic separation. Next, to verify the simulation results, a magnetic separation experiment with ferromagnetic particles of φ = 6 nm (FePt) [7] and φ = 15 nm (Fe3O4) using the developed filter was conducted, and the possibility of separation in the actual system was studied.

The magnetic field analysis and the fluid analysis of the magnetic filter

The distribution of magnetic field intensity and the flow velocity distribution in the model, calculated using ANSYS® Ver. 10.0 (ANSYS, Inc.), are shown in Figure 1. As shown in Figure 2, a two-dimensional finite element model was constructed based on the maximum distance between two particles in a three-dimensional close-packed structure of spherical particles. The conditions were as follows: the maximum magnetic flux density at the center of the coil was set to 0.5 T or 2.0 T, the particle diameter of the object ferromagnetic particles was set to 6 nm or 15 nm, the fluid was water (viscosity = 1 cP), and the inflow velocity was set to 0.01 m/s. The filter consisted of uniformly arranged ferromagnetic particles of 0.3 mm in diameter (Figure 1).
Calculation of the particle trajectories of the ferromagnetic particles in the vicinity of the magnetic filter

The magnetic force and drag force acting on the object particles were calculated at each node based on the parameters obtained from the ANSYS magnetic field and fluid analyses. The equations of motion of the object particles at a node were solved using those forces, the acceleration was calculated, and the next node was searched. The particle trajectories of the ferromagnetic particles in the vicinity of the magnetic filter were calculated by repeating this operation.

Magnetic separation experiment with nanosized ferromagnetic particles

To verify the simulation results under 0.5 T and 2.0 T external magnetic fields (Sections 2.1 and 2.2), a magnetic separation experiment with actual ferromagnetic particles was conducted. The developed magnetic filter was placed under an external magnetic field of 0.5 T or 2.0 T. Magnetic separation was performed by passing a fluid containing FePt (φ = 6 nm) or Fe3O4 (φ = 15 nm) particles (Table 1) through the magnetic filter. The ferromagnetic particles were dispersed in n-hexane or toluene because nanosized particles are difficult to disperse uniformly in water. To measure the density of the ferromagnetic particles dispersed in the organic solvent before and after separation, the ferromagnetic particles were dissolved in 50% HCl after evaporating the organic solvent and measured by inductively coupled plasma atomic emission spectrometry. The separation efficiency was estimated from the change in the density of the ferromagnetic particles.

The magnetic field analysis and the fluid analysis of the magnetic filter

The magnetic field analysis shows that the magnetic flux density and the magnetic gradient are higher in the oblique direction between particles than in the horizontal direction between particles, under both 0.5 T and 2.0 T external magnetic fields. Moreover, the magnetic flux density increases towards the outside of the glass tube (Figure 3a). The fluid analysis shows that the fluid speed is slowest near the top of the particles and fastest at the center between the particles in the horizontal direction (Figure 3b). These results predict that the accumulation rate of the object particles on the magnetic filter is highest in the obliquely upward area of the magnetic filter facing the direction of the flow.

Calculation of the particle trajectories of the ferromagnetic particles in the vicinity of the magnetic filter

The particle trajectories of the ferromagnetic particles of φ = 6 nm and φ = 15 nm showed that the ferromagnetic particles of φ = 15 nm could be trapped in the magnetic filter under a 0.5 T external magnetic field (Figure 4a). In contrast, the ferromagnetic particles of φ = 6 nm could not be trapped in the magnetic filter under a 0.5 T external magnetic field (Figure 4b). However, when the external magnetic field was set to 2.0 T, the ferromagnetic particles of φ = 6 nm could also be trapped in the magnetic filter (Figure 4c).

Magnetic separation experiment with nanosized ferromagnetic particles

To verify the simulation results discussed above, a magnetic separation experiment under 0.5 T and 2.0 T external magnetic fields was conducted using FePt (φ = 6 nm) and Fe3O4 (φ = 15 nm) particles dispersed in n-hexane or toluene.
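To illustrate the node-by-node trajectory stepping described above, the following is a minimal sketch. The force model is our assumption (a standard magnetophoretic force on a magnetizable sphere plus Stokes drag, integrated with explicit Euler steps), not the paper's exact expressions; in the paper the field gradient and fluid velocity at each node come from the ANSYS analyses, represented here by the hypothetical callables `grad_B2` and `fluid_vel`.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability [H/m]

def particle_trajectory(pos, grad_B2, fluid_vel, d_p, chi, rho_p, mu_f,
                        dt=1e-6, steps=10000):
    """Integrate one particle trajectory with explicit Euler steps.

    grad_B2(pos)   -> gradient of |B|^2 at pos [T^2/m], from the field analysis
    fluid_vel(pos) -> local fluid velocity at pos [m/s], from the fluid analysis
    d_p, chi, rho_p: particle diameter [m], susceptibility, density [kg/m^3]
    mu_f: dynamic viscosity of the fluid [Pa*s]
    """
    vol = np.pi * d_p**3 / 6.0             # particle volume
    mass = rho_p * vol                     # particle mass
    pos = np.array(pos, float)
    vel = np.array(fluid_vel(pos), float)  # start with the local flow velocity
    path = [pos.copy()]
    for _ in range(steps):
        f_mag = vol * chi / (2.0 * MU0) * np.asarray(grad_B2(pos))  # magnetic force
        f_drag = 3.0 * np.pi * mu_f * d_p * (np.asarray(fluid_vel(pos)) - vel)  # Stokes drag
        vel = vel + (f_mag + f_drag) / mass * dt  # acceleration step
        pos = pos + vel * dt                      # advance to the next node
        path.append(pos.copy())
    return np.array(path)
```

In such a scheme, a particle would be counted as trapped when its path reaches a filter particle surface before leaving the computation domain.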
Conclusion

The feasibility of magnetic separation of nanosized ferromagnetic particles was examined through particle trajectory calculations and magnetic separation experiments. As a result, it was confirmed that ferromagnetic particles (φ = 15 nm) can be trapped in a magnetic filter consisting of densely packed ferromagnetic particles of 0.3 mm in diameter in a glass tube of 4 mm inner diameter under a 0.5 T external magnetic field. Ferromagnetic particles (φ = 6 nm), which could not be separated under the same conditions, could be trapped under a 2.0 T external magnetic field. This study thus confirmed that ferromagnetic particles that cannot be trapped under the external magnetic field of a permanent magnet become separable when the external magnetic field is increased using a superconducting magnet.
Deciding on Discipline: The Importance of Parent Demeanor in the Transmission of Discipline Practices

Abstract

Although child abuse is a social problem in the United States, many cases go unreported because there is no consensus as to what disciplinary actions are deemed abusive. Thus, it is paramount to understand the demarcation between physical punishment and physical abuse among parents and their use of certain forms of discipline. This study examines how discipline experienced by adolescent respondents affects their choice of discipline practices in adulthood. A random sample of residents was selected from three South Carolina counties using the 2016 state voter registration list. Respondents were mailed a survey asking questions pertaining to their disciplinary practices and experiences. Analyses were conducted using ordinary least squares regression. Those who experienced abusive discipline as a child were significantly less likely to report that they use the same discipline techniques as their parents. However, adding parenting traits into the model revealed a mediation effect: abusive discipline no longer plays a significant role in how respondents discipline their own children once the perceived demeanor of their parent is taken into consideration. These findings suggest that disciplinary techniques are less important than a parent's attitude when correcting their children's behavior. Implications for the current research, limitations, and directions for future research are discussed.

Introduction

Child abuse is a major social problem in the United States, even though the U.S. is one of the most advanced nations in the Western world. In 2015, Child Protective Services (CPS) received four million referrals for alleged maltreatment of children, and approximately 2.2 million of these referrals (4.1 million children) were screened by CPS and further action was deemed necessary (Child Welfare Information Gateway 2017). In addition, in 2015, approximately 683,000 children were victims of abuse, for a rate of 9.2 per 1000 children. These numbers reveal a 3.8% increase in child abuse since 2011 (U.S. Department of Health and Human Services 2015). The rate of physical abuse specifically is 4.4 per 1000 children, with children between the ages of six and eight having the highest rates of this form of abuse (Sedlak et al. 2010). What is more alarming is that these figures, although high, actually underreport child abuse (Rodriguez and Sutherland 1999). Some cases are not reported because many parents in the U.S. do not see certain discipline techniques involving physical punishment as inappropriate or abusive.

There is a fine line between physical abuse and physical punishment, and their precipitating factors seem to overlap (Straus 1983). As a result, knowing the elements that shape parents' choice of discipline can contribute to understanding the dynamics of abuse. Research has shown that factors such as a parent's own childhood experiences with discipline influence their parenting practices (Kaufman and Zigler 1993). In fact, the intergenerational transmission of violence hypothesis, which states that individuals exposed to abuse are more likely to perpetrate abuse themselves, is established in the literature (Craft and Serovich 2005). Moreover, experiencing harsh and abusive parenting during childhood is a risk factor for parents treating their own children negatively (Pears and Capaldi 2001).
Finally, several studies have shown an intergenerational transmission of parenting attitudes and behaviors. For example, parents who take a harsh discipline approach to childrearing tend to transmit these beliefs and practices to their children (Seay et al. 2016). Research has also revealed several other factors related to child abuse, including educational attainment, gender, and religiosity (Gershoff et al. 1999). Female parents and those with lower levels of education are more likely to use physical discipline than male parents and those with higher educational attainment (Jackson et al. 1999). Gershoff et al. (1999) found that parents with conservative Protestant values are more likely to use corporal punishment compared to parents with other religious values.

Given what we know regarding the intergenerational transmission of violence and the social factors associated with discipline, this exploratory study seeks to examine how childhood experiences influence parental beliefs and disciplinary practices in adulthood for a small sample of respondents in three counties from the Pee Dee Region in Northeast South Carolina. Specifically, this paper evaluates the types of discipline respondents received as children, the perceived demeanor of their parents during discipline, and how these factors influence respondents' perceptions of how they discipline(d) their own children. Demographic characteristics were also examined to explore correlates of parenting beliefs and discipline.

Intergenerational Transmission of Violence, Parental Beliefs, and Discipline

The intergenerational transmission of violence refers to the notion that experiencing abuse or witnessing violence as a child leads to adult perpetration of violence toward spouses and partners and toward one's own children (Widom and Wilson 2015). Social Learning Theory is one major perspective used to explain how patterns of abuse and violence occur across generations. According to this theory, children partially learn behaviors by observing and imitating others, and such behaviors are reinforced through social interactions. Parents provide a highly influential model of behavior, and parents who are violent or aggressive, whether with each other or in their disciplinary choices, provide a social context where violence is normal, appropriate, and used to resolve conflict (Bandura 1973).

Research supports a social learning model of the intergenerational transmission of violence. Baron and Richardson (1994) reveal that exposure to corporal punishment during childhood was related to violent behavior in adulthood. Additionally, Simons and Wurtele (2010) find that children who experience physical punishment tend to view aggression as an appropriate means of resolving conflict. Therefore, parents who resort to violence and corporal or physical punishment to solve problems and conflict may set the stage for their children's behaviors in both childhood and adulthood.

There is a vast literature investigating the intergenerational transmission of violence and patterns of abuse across generations. In fact, research generally supports the notion that parents who experienced abuse during childhood are at an increased risk of perpetuating child abuse (Craft and Serovich 2005; Heyman and Slep 2002; Pears and Capaldi 2001). However, the extent of that risk is not clear in the literature. For instance, Thornberry et al.
(2013) find that parents who were exposed to abuse as children were 2.6 times more likely to abuse their own children than parents who were not exposed to abuse as children. Other research reveals a weak to moderate risk for the intergenerational transmission of abuse (Leve et al. 2015; Thornberry et al. 2012). Many studies are critiqued, however, due to methodological and/or sample limitations. Research conducted with stronger methodology or more representative samples reveals inconsistent support for the intergenerational transmission of abuse/violence hypothesis (Thornberry et al. 2012). Nonetheless, more recent research supports this thesis. In a sample of adolescent mothers, Putnam-Hornstein et al. (2015) find that a history of abuse or maltreatment predicted later abuse and involvement by Child Protective Services. Bartlett et al. (2017) also expose a relationship between experiencing abuse as a child and committing subsequent child abuse after becoming a parent; whether or not a report of abuse or maltreatment was substantiated, the type of abuse, and the perpetrator type influenced this relationship. Specifically, children of adolescent mothers with a history of abuse were 50% more likely to experience maltreatment. Moreover, the risk of child abuse increased by more than 300% when mothers had at least one report of experiencing multiple types of abuse.

In addition to the possibility of violence and abuse being passed down to the next generation, the literature reveals that both supportive and harsh disciplinary practices may also be transmitted from parent to child. Older research shows that children whose parents used discussion to resolve conflict tend to use that same method as adults (Jorgenson 1985; Steinmetz 1977). Moreover, Chen and Kaplan (2001) find that supportive and warm discipline transcends generational lines and that those who experience this type of parenting style as children are more likely to adopt such beliefs and practices with their own children. This intergenerational transmission of positive parenting was also found among mothers who experienced a trusting relationship with their parents and a positive family environment, and who were not exposed to an authoritarian parenting style as children. Finally, Shaffer et al. (2009) collected longitudinal data on a sample of children when they were 10 years old (time 1) and approximately 20 years later (time 2), when the children were grown and had become parents. Results show that overall parenting techniques were transmitted between parent and child, as the individuals in the study tended to employ the same parenting practices they experienced as children.

Parental Personality Traits

Parental personality traits, or demeanor, are related to disciplinary and abuse outcomes. In fact, scholars suggest that parental traits or demeanor play a significant role in how parents interact with their children, the quality of the parent-child relationship (Belsky and Barends 2002; Vondra et al. 2005), and how discipline is administered (Socolar et al. 2007). Several traits have been suggested to influence both harsh discipline and child maltreatment. Difficulty controlling anger is found to be a key characteristic of an abuser (Ammerman 1990; Rodriguez and Green 1997), and research also shows that the level of physical discipline given to a child depended on how angry a parent was with the child's behavior (Peterson et al. 1994).
Findings from a meta-analysis of the risk factors for child abuse reveal a large effect size between parental anger and physical abuse. In addition, the quality of the parent-child relationship is found to be a significant factor for both physical abuse and neglect (Stith et al. 2009). Moreover, research reveals several other negative traits that influence child maltreatment, specifically physical abuse and neglect. These traits include, but are not limited to, low parental involvement, low father warmth, maternal alienation, dissatisfaction, and hostility (Brown et al. 1998).

In addition to traits predicting discipline practices, there is some evidence that parental dispositions may be transmitted to children. Patterson et al. (1989) suggest that children learn to use aggressive behavior toward others as a result of being exposed to an angry and hostile parenting demeanor. Moreover, research by Chen and Kaplan (2001) revealed that children exhibit the same demeanor as their parents when disciplining their own children later in life. Together, these findings suggest support for the intergenerational transmission of parenting traits. It stands to reason, therefore, that if children internalize and demonstrate the same traits as their parents, they may follow in their parents' footsteps regarding their own disciplinary practices. Thus, it is vital that parental personality traits or parental demeanor are examined in the present research. Learning violence or abuse and disciplinary choices from one's own experience as a child are only two pieces of the larger parent-child relationship puzzle.

Other Factors That Influence Parental Beliefs and Discipline Choices

Several other social factors contribute to child maltreatment and influence parental beliefs, along with the type of discipline parents choose to use on their children. These social factors or characteristics include the religiosity, racial background, age, and gender of the parent (Wolfner and Gelles 1993). Religion is a predictor of discipline style, as individuals from various religious affiliations cite the Bible as validation for using corporal punishment (Carey 1994). Indeed, research suggests that parents from a Conservative Protestant religious background are more likely to use physical discipline than parents of other religions (Ellison et al. 1996; Grasmick et al. 1992). Gershoff et al. (1999) find that Conservative Protestant parents spanked their children significantly more than parents of other religions and were less likely to report negative outcomes of spanking, such as an increase in their child's aggression. Hence, the literature consistently shows that conservative religions back the use of harsh, physical discipline of children.

Additionally, the race, age, and gender of the parent influence parental discipline practices. Regalado et al. (2004) reveal that African American parents were twice as likely to use corporal punishment as White parents. Additionally, Barkin et al. (2007) find that African American parents were marginally more likely to spank their children and less likely to use time-outs to correct their children's behavior than White parents. As for age, research has found that adolescent mothers are at an increased risk of committing physical abuse, sexual abuse, and neglect (Brown et al. 1998).
Finally, regarding gender, females have been found to be more likely to commit physical abuse, corporal punishment, or neglect, given that mothers typically have more interactions with their children than fathers (Gelles 1997; Jackson et al. 1999). Given these differences, these demographic predictors are important to consider in the current study.

Expectations for the Current Research

Based on the literature summarized above, the cycle of violence is a key perspective for explaining how harsh parenting and abuse may occur across multiple generations. The literature also suggests that parenting attitudes, beliefs, and discipline practices are transmitted to younger generations. An additional factor that influences this transmission is the traits or demeanor exhibited by parents during discipline. Hence, this study explores the intergenerational transmission of violence and parenting practices by investigating different types of discipline experienced during childhood, the demeanor of parents while administering discipline, and whether these influence how one perceives they discipline their own children years later. To address these issues, the current study poses the following research questions: (1) How does experiencing different discipline techniques impact one's perception of their parent's influence on their own discipline practices? (2) How does a parent's demeanor during discipline impact one's perception of their parent's influence on their own disciplining practices?

Sample

For this exploratory study, a random sample of residents from three counties (Florence, Chesterfield, and Marlboro) in Northeast South Carolina was selected from voter registration lists acquired from the South Carolina State Election Commission. The voter registration lists included the resident's name, address, and select demographics. The total number of residents randomly selected for this study was 850 for Florence County, 450 for Chesterfield County, and 450 for Marlboro County. Given that Florence County is larger in population and contains more registered voters than Chesterfield and Marlboro Counties, a larger sample was selected for Florence. The samples were randomly selected by (1) importing the registration lists into SPSS and (2) using the random sample of cases function (found within the select cases function in SPSS) to select respondents from each county.

Data Collection

The authors designed a 101-item questionnaire booklet that was sent via first class mail to each respondent in the Florence, Chesterfield, and Marlboro County samples. A stamped, addressed return envelope and two copies of the consent form were included with the questionnaire booklet. Respondents were instructed to sign and return one copy of the consent form with the completed questionnaire and retain the other copy for their records. Respondents were given the chance to be entered into a random drawing to win one of eight $25.00 Wal-Mart gift cards and were instructed to check a box on the consent form to indicate whether they wanted to be entered into the drawing. A $1.00 bill was paper-clipped to the questionnaire booklet sent to all respondents in the Chesterfield and Marlboro samples as a further incentive to participate in the study. Due to budget limitations, however, only a subset (approximately 24%) of the Florence sample received a $1.00 token. Approximately two weeks after the initial mailing, a reminder postcard was sent to all non-responders.
In total, 213 questionnaires were completed and returned, yielding an overall response rate of 12.2%. Data were collected from January 2017 to September 2017. Prior to data collection, the study was approved by the Francis Marion University Institutional Review Board. Those who volunteered to participate were not subject to risks beyond what is encountered in daily life. Confidentiality and anonymity were maintained through all phases of the data collection process. All identifying information was removed from the questionnaire, and all results are reported in the aggregate.

Sample Selection

To determine the influence of disciplinary techniques on the transmission of discipline practices, respondents who indicated that they were parents were selected for inclusion in the data analysis. There were 179 parents within the larger sample. Upon removing cases with missing data, 130 respondents remained for analysis.

Dependent Variable

The dependent variable in this study is whether respondents perceive that their discipline techniques were influenced by the discipline they received as a child. Two questions were used in the construction of the dependent variable: "My parents'/guardians' method of discipline influences how I discipline(d) my child/children" and "I discipline(d) my child/children similar to how I was disciplined by my parents/guardian." These questions were measured on a six-point Likert scale (1 = Strongly Disagree to 6 = Strongly Agree). Respondents who did not answer both of these questions were removed from the analysis.

Independent Variables

The primary variables of interest measure the methods of discipline respondents experienced as a child. Respondents were asked how often their parent/guardian: (1) gave them a time-out; (2) took away privileges; (3) used humiliation to correct their behavior; (4) yelled at them; (5) cussed/swore at them; (6) threatened them; (7) hit them with a fist; (8) pushed, grabbed, or shoved them; (9) slapped them with an open hand; and (10) spanked them. These discipline techniques were measured on a five-point Likert scale (1 = Never to 5 = Always). In order to ensure an adequate sample size, mean imputation was used to estimate the answers of those who did not provide responses for three or fewer of the above discipline techniques. If more than three questions were left unanswered by a respondent, mean imputation was not conducted and those individuals were removed from the analysis.

Parent traits, or demeanor, served as another independent variable. Respondents were asked to rate their parent's/guardian's demeanor when administering discipline. Specifically, they were asked whether their parent/guardian was: (1) calm; (2) loving; (3) patient; (4) consistent; (5) angry; (6) disrespectful; and (7) indecisive. These characteristics were measured on a six-point Likert scale (1 = Strongly Disagree to 6 = Strongly Agree). Characteristics that are perceived to be negative traits (angry, disrespectful, and indecisive) were reverse coded so that 1 = Strongly Agree and 6 = Strongly Disagree. For consistency, mean imputation was used to estimate the answers of those who did not provide a response for two or fewer of the above characteristics. Those failing to respond to more than two questions were removed from the analysis.

Control Variables

Several demographic characteristics were controlled, including race, gender, and age of the respondent. Race was dummy coded to 1 = white and 0 = nonwhite, while gender was dummy coded to 1 = female and 0 = male.
Age was measured as the respondent's age on their last birthday. The role of religion was also controlled through two distinct measures: respondents were asked to report their religion (1 = Baptist and 0 = all others) as well as how often they attend religious services a month. To take into account the importance of family, respondents were asked to report how close they are to their family on a five-point Likert scale (1 = Not at all close to 5 = Extremely close). Finally, the primary disciplinarian of the respondent was controlled; mother was used as the reference category, while father and another disciplinarian were included in the model.

Descriptive Statistics and Data Reduction

Table 1 provides the descriptive statistics for the dependent, explanatory, and control variables. On average, respondents somewhat agree that their discipline practices are influenced by how they were disciplined and that they discipline similarly to how their parents disciplined them. Additionally, respondents reported that their parents yelled at them, took away their privileges, or spanked them more frequently than they used other forms of discipline. On average, these techniques were used rarely to sometimes, unlike all other discipline techniques, which were reported to occur never or rarely. Generally, respondents somewhat agreed to agreed that their parent's demeanor during discipline was calm, loving, patient, and consistent. Conversely, they somewhat disagreed to disagreed that their parent's demeanor was angry, and disagreed to strongly disagreed that it was disrespectful or indecisive. Analysis of the control variables reveals that more than two-thirds of respondents were female, nearly three-fourths were white, and, on average, they were 57 years old. Less than half of the participants were Baptist, and respondents reported attending religious services more than five times a month, on average. Additionally, respondents reported being somewhat to extremely close to their family. Finally, the mother was reported as being the disciplinarian by 60% of respondents.

Several measures utilized in the current study are theoretically and statistically associated with each other (correlation matrix available upon request). To ensure similar concepts are analyzed together and multicollinearity is reduced, obliquely rotated principal components factor analyses were conducted separately for the dependent variable and the explanatory variables. By conventional standards, variables with a factor loading score of at least 0.50 are thought to load together. Additionally, factors that produce an eigenvalue of at least 1 can be kept for inclusion in the regression analyses (Land et al. 1990). Table 2 displays the factor loading scores and eigenvalues of the three distinct factors produced by the principal components factor analyses. The first factor includes the two measures of the dependent variable, which load equally within the factor. The second factor measures what would be considered abusive disciplining practices (both physical and emotional), including being hit with a fist, slapped, pushed, yelled at, cussed at, threatened, and humiliated. Reliability analyses support the use of a factor to measure this variable, revealing a Cronbach's alpha of 0.886. Furthermore, the removal of any of these discipline techniques would diminish, rather than improve, the reliability of the factor.
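The following is a minimal sketch of this data-reduction step, assuming the survey responses live in a pandas DataFrame. The column names are hypothetical, and the paper does not specify its software for this step, so the factor_analyzer package is used here as one possible implementation of a principal components extraction with oblique rotation.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def impute_or_drop(items, max_missing):
    """Mean-impute respondents with at most `max_missing` unanswered items;
    respondents with more missing answers are dropped, as described above."""
    keep = items.isna().sum(axis=1) <= max_missing
    kept = items.loc[keep]
    return kept.fillna(kept.mean())

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items DataFrame."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical column names for the seven abusive-discipline items.
abuse_items = ["hit_fist", "pushed_grabbed", "slapped", "yelled",
               "cussed", "threatened", "humiliated"]

# df = pd.read_csv("survey.csv")                      # survey data (assumed)
# X = impute_or_drop(df[abuse_items], max_missing=3)
# fa = FactorAnalyzer(n_factors=1, rotation="oblimin", method="principal")
# fa.fit(X)
# print(fa.loadings_)        # loadings of at least 0.50 indicate a common factor
# print(cronbach_alpha(X))   # the paper reports alpha = 0.886 for this factor
```

On the full item set, factors with eigenvalues of at least 1 would be retained, mirroring the three factors reported in Table 2.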
The remaining discipline techniques (parent spanked, gave a time-out, and took away privileges) did not load into a factor and thus are analyzed individually. Prior research suggests that spanking should load with measures of physical and emotional abuse (Afifi et al. 2017), as it is a harsh discipline technique and, like abuse, lies on a continuum of violence against children (Dussich and Maekoya 2007). However, the sample for the current study is drawn from a southern state, and southerners tend to be more accepting of corporal punishment (Straus and Mathur 1996). Thus, respondents in this study may not consider spanking to be violent and abusive. The final factor measures the parent's demeanor during discipline. All measures of parental demeanor loaded together, including those that were reverse coded for ease of interpretation. Reliability analyses also support the use of this factor as is, with a Cronbach's alpha of 0.913. Removal of any measure would not improve the factor's reliability.

Analytical Method

To determine what may influence a respondent's perception that their own discipline techniques are influenced by or similar to those of their parents, ordinary least squares (OLS) regression analyses were employed. The first model includes only control variables to establish a baseline adjusted R² value. Adjusted R² is reported instead of R² because, unlike R², it takes into account the number of variables in the model and only increases if added variables contribute to the explanatory power of the model (Frost 2013). The measures for each independent variable were then analyzed with the control variables to determine how much, if any, additional variance these measures captured. This method also allowed for the detection of any potential mediating effects. Once complete, the final model including all control and explanatory variables was run.
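As an illustration of this modeling strategy, a minimal sketch follows. The variable names are hypothetical stand-ins for the measures described above, and statsmodels is assumed as the estimation library; the text does not specify the software used for the regressions.

```python
import pandas as pd
import statsmodels.api as sm

def fit_ols(df, outcome, predictors):
    """Fit one OLS model and return the results (coefficients, adjusted R^2)."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[outcome], X, missing="drop").fit()

# Hypothetical column names mirroring the models described above.
controls = ["female", "white", "age", "baptist", "services_per_month",
            "family_closeness", "disc_father", "disc_other"]
explanatory = ["abusive_discipline", "spanked", "timeout",
               "privileges_taken", "parent_demeanor"]

# df = pd.read_csv("survey.csv")                                  # assumed data
# m1 = fit_ols(df, "parental_influence", controls)                # Model 1
# m4 = fit_ols(df, "parental_influence", controls + explanatory)  # Model 4
# print(m1.rsquared_adj, m4.rsquared_adj)  # compare baseline vs. full model
# print(m4.summary())                      # coefficients and significance
```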
Results

Table 3 presents the OLS regression results predicting respondents' perceptions of parental influence on their own discipline practices. Model 1, the baseline model, shows that the control variables explain 12.6% of the variance, with only frequency of attending religious services and perceived closeness to the family attaining significance. Those who attend religious services more frequently and those who report being closer to their family are more likely to perceive that their own discipline practices are similar to or influenced by those of their parents.

In Model 2, the measures of discipline experienced by the respondent were included. Experiencing abusive discipline and being spanked were both negatively and significantly related to respondents' perceptions of their own discipline techniques. Those who experienced either disciplinary technique more frequently were less likely to report that their discipline practices are similar to or were influenced by their parents. Additionally, frequency of attending religious services and perceived closeness to family maintained a positive, significant relationship. Religious affiliation is negatively and marginally associated with reporting that one's discipline practices are similar to or were influenced by one's parents: those who identify as Baptist are less likely to report similarities to or influence by their parents when disciplining their own children. Finally, age is negatively and significantly related to perceiving one's disciplinary techniques as being similar to or influenced by one's parents. As age increases, respondents are less likely to report parental similarities or influence in discipline techniques. This model explains 18% of the variance, an improvement over the baseline model.

Model 3 in Table 3 removes the measures of discipline experienced and replaces them with the factor representing parental demeanor while administering discipline. This model reveals a positive, significant relationship between parental demeanor and perceptions of parental influence on discipline practices. Those who viewed their parents' demeanor positively at the time of the discipline are significantly more likely to report that their own practices are similar to or influenced by those of their parents. Family closeness is the only control variable that maintains significance in this model. The adjusted R² shows that approximately 24% of the variance in the dependent variable is explained by this model.

The final model, Model 4, is the full model and includes all control and explanatory variables. In this model, parent demeanor is the only explanatory variable that maintains significance, though the level of significance and the size of the effect are smaller than in Model 3. With regard to the control variables, family closeness, race, and religious affiliation are significantly related to perceptions of parental influence on discipline practices, though the latter two are only marginally significant. As seen in all prior models, as family closeness increases, perceiving discipline techniques to be similar to or influenced by one's parents increases as well. Whites are significantly more likely to perceive that their own discipline practices are similar to or influenced by those of their parents. Additionally, those who identify as Baptist are less likely to perceive that their own discipline techniques are similar to or influenced by their parents, as was seen in Model 2. The full model explains slightly more than 23% of the variance.

Model 4 also indicates that parent demeanor may have a mediating effect on the relationship between two of the four forms of discipline experienced by respondents and their perceptions of parental influence on their own discipline practices. In this model, abusive discipline and being spanked both lose significance, and their coefficients are smaller in absolute size than they were in Model 2. As part of a post-hoc analysis, mediation was tested using a bootstrapping method (Shrout and Bolger 2002). Using the PROCESS macro in SPSS (Hayes 2019), 5000 bootstrap samples with replacement were drawn. This test revealed that parent demeanor mediates the relationship between abusive discipline techniques and perceived parental influence on discipline practices at a 99% level of confidence. This variable also mediates the effect between being spanked and perceived parental influence on discipline practices at a 99% level of confidence.
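For readers unfamiliar with the procedure, the following sketch shows the logic of such a percentile-bootstrap mediation test. It is a simplified stand-in for the PROCESS macro, not its exact algorithm, and the variable names are hypothetical; mediation is supported when the confidence interval for the indirect effect excludes zero.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect_effect(x, m, y, n_boot=5000, ci=99, seed=0):
    """Percentile-bootstrap test of the indirect effect x -> m -> y.

    Resamples cases with replacement, estimates a (x -> m) and
    b (m -> y, controlling for x) by OLS, and returns a percentile
    confidence interval for the indirect effect a*b.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = sm.OLS(mb, sm.add_constant(xb)).fit().params[1]
        b = sm.OLS(yb, sm.add_constant(np.column_stack([mb, xb]))).fit().params[1]
        effects[i] = a * b
    lo, hi = np.percentile(effects, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return lo, hi

# Example call with hypothetical arrays:
# lo, hi = bootstrap_indirect_effect(abusive_discipline, parent_demeanor,
#                                    parental_influence)
# print(lo, hi)  # a CI excluding zero indicates mediation
```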
Discussion

The intergenerational transmission of violence or abuse, in conjunction with the intergenerational transmission of parenting attitudes and behaviors, provides an explanation for parental choices in administering discipline (Craft and Serovich 2005; Pears and Capaldi 2001). Understanding the factors that influence parents' choice of discipline may shed light on the intricacies of child abuse and harsh parenting, including the impact one's experience with discipline during childhood has on such choices in adulthood. In order to further explore this issue, the current study utilized a sample of respondents from three counties in northeast South Carolina. Specifically, the types of discipline this sample received as children, the importance of parental demeanor during discipline, and their perceptions of how they discipline(d) their own children were examined.

Results revealed an important finding involving parental demeanor, which was statistically significant in the final model. Specifically, respondents who perceived their parents to be calm, loving, patient, and consistent while administering discipline are more likely to perceive their own parenting practices to be similar to their parents'. These findings are consistent with the previous literature on the importance of parental traits and demeanor in the parent-child relationship. Socolar et al. (2007) suggest that the type of discipline is only one factor affecting child outcomes and that research should also focus on how parents administer discipline. The findings from their longitudinal study on the parental discipline of young children reveal that both corporal punishment and a negative parental demeanor increase over time, supporting the notion that physical discipline may occur in conjunction with parents who are stern, angry, rejecting, or verbally aggressive. Indeed, spanking and a negative parental demeanor are correlated (Socolar and Stein 1995), with research showing that anger affects how much physical discipline is given to a child (Peterson et al. 1994). Further, when children experience discipline with a parental demeanor that is warm and supportive, they tend to have a similar demeanor when parenting their own children later (Chen and Kaplan 2001). Thus, it is reasonable to expect that respondents from the current study who perceive their parents as having a positive demeanor while administering discipline would most likely have a good relationship with their parents and, as a result, would perceive that they discipline their own children in the same manner.

Spanking and abusive discipline techniques were statistically significant in Model 2, but unlike parental demeanor, the relationship between both techniques and respondents perceiving their own parenting practices to be influenced by or similar to their parents' was negative. In contrast to the intergenerational transmission of violence and discipline practices, respondents who reported experiencing spanking and abusive discipline practices during childhood report utilizing discipline practices that are dissimilar to their parents'. Respondents in this sample perceive themselves as not following in the footsteps of their abusive parents or parents who used corporal punishment, thus indicating a break in the cycle of violence and abuse across generations for these individuals. Although research generally supports the intergenerational transmission of violence (Bartlett et al. 2017; Pears and Capaldi 2001), other literature yields findings that show either mixed support or no support for this phenomenon. For instance, studies that examine the level of risk for the intergenerational transmission of violence show either weak to moderate support or no empirical evidence (Leve et al. 2015; Thornberry et al. 2012). Hence, the findings from the current study are consistent with the latter literature on this phenomenon.
Moreover, spanking and abusive discipline were no longer significant in the final model, suggesting that, for this particular sample, the type of discipline is not as important as parental demeanor when it comes to whether one decides to parent similarly to previous generations.

While not an initial research question, the analysis revealed that parental demeanor has a mediating effect on the relationship between two measures of disciplinary techniques and the perception of parental influence on the respondent's own discipline practices during adulthood. Abusive discipline and being spanked were significant predictors of parental influence on one's perception of one's parenting practices; however, these discipline techniques lost significance when they were added to the final model with parental demeanor. This suggests it is not the disciplinary techniques experienced as a child that impact how one perceives they discipline their children; rather, it is the parental demeanor during discipline that is key. In other words, parenting practices may be handed down to the next generation, but for these respondents the demeanor of the parent while administering discipline affects that transmission. Thus, a focus on demeanor may foster positive parenting outcomes in the intergenerational transmission of parental beliefs and practices and may help reduce the cycle of violence/abuse across generations.

This study is important in the context of childhood and adult mental health outcomes. Harsh discipline and spanking are linked to aggression among children (Weiss et al. 1992), more depression and externalizing behavior among adolescents (Bender et al. 2007), and an increase in suicide attempts and drug and alcohol use/abuse in adulthood (Afifi et al. 2017). However, this exploratory study reveals the importance of having a calm, loving, patient, and consistent demeanor while administering discipline. Indeed, McLoyd and Smith (2004) conducted a study of maternal emotional support among Caucasian, African American, and Latino children. They find that emotional support moderates the relationship between spanking and behavioral problems, underscoring the importance of parental demeanor in discipline. Though not generalizable, the findings of this study align with prior research, further informing mental health counselors and other family advocates on how to better instruct and work with parents on their demeanor in this context.

Although this study sheds light on the important relationship between the intergenerational transmission of violence/disciplinary practices and parental demeanor, there are several limitations that must be addressed. The results of this study are based on a small sample of residents from one region in South Carolina. The study also yielded a poor response rate despite mailing multiple reminders and utilizing tokens as an incentive to complete the questionnaire. As a result, the current study may suffer from non-response bias (Hager et al. 2003). Given the small sample size and poor response rate, these findings are not generalizable to the population from which the sample was drawn and must be treated as an exploratory analysis. Future research should explore these research questions using larger, representative samples that are generalizable to the U.S. population. Such studies will increase knowledge as to whether the patterns found in this study apply beyond a small, non-generalizable sample.
Additionally, the dependent variable is based on respondents' perceptions of discipline practices. Due to survey limitations, data on the actual discipline techniques used by respondents were not available. Furthermore, the data are retrospective, as the items on the questionnaire ask respondents to reflect on the discipline they received as a child and their parents' perceived demeanor during discipline. Memory and recall of events during childhood may be faulty, which can reduce both validity and reliability; however, research shows that retrospective data, including data that involve recalling abusive discipline that occurred during childhood, are valid and reliable (Hardt et al. 2010; Widom and Morris 1997).

Moreover, this study did not focus on the influence of socioeconomic status (SES) or the occupation of the respondent or the respondent's parents. Research consistently suggests that parental beliefs regarding discipline vary depending on SES and parental occupation (Gunnoe and Mariner 1997; Hoff et al. 2002). Furthermore, some research reveals that low SES increases the risk of child abuse and neglect (Brown et al. 1998; Whipple and Webster-Stratton 1991). Future analyses need to incorporate the SES and occupation of both respondents and their parents. This will enable researchers not only to understand how these aspects influence discipline practices, but also to examine whether intergenerational social mobility (i.e., a child having a higher SES or occupational prestige than his or her parents) plays a role in the intergenerational transmission of discipline practices. Finally, the sensitive nature of this study may yield responses influenced by social desirability, a bias that results from respondents who may feel threatened by a survey question or who try to avoid embarrassing answers (Fisher 1993). Although the survey was anonymous, respondents may not be forthcoming in answering questions about harsh physical discipline or abuse.

The current study contributes to the literature recognizing the influence of parental demeanor on the intergenerational transmission of violence and parenting practices. However, additional research is needed to fully understand discipline practices across generations and establish causality. Longitudinal data from larger, representative samples would fill this void. Researchers could gain insight into why some individuals perpetuate harmful behaviors and further understand how parental demeanor mediates the transmission of disciplinary practices in families. Such knowledge is vital for reducing the incidence of child abuse in future generations.
A Data-Centric Augmentation Approach for Disturbed Sensor Image Segmentation

In the context of sensor-based data analysis, the compensation of image artifacts is a challenge. When the structures of interest are not clearly visible in an image, algorithms that can cope with artifacts are crucial for obtaining the desired information. Thereby, the high variation of artifacts, the combination of different types of artifacts, and their similarity to signals of interest are specific issues that have to be considered in the analysis. Despite the high generalization capability of deep learning-based approaches, their recent success was driven by the availability of large amounts of labeled data. Therefore, the provision of comprehensive labeled image data with different characteristics of image artifacts is of importance. At the same time, applying deep neural networks to problems with low availability of labeled data remains a challenge. This work presents a data-centric augmentation approach based on generative adversarial networks that augments the existing labeled data with synthetic artifacts generated from data not present in the training set. In our experiments, this augmentation leads to a more robust generalization in segmentation. Our method does not need additional labeling and does not lead to additional memory or time consumption during inference. Further, we find it to be more effective than comparable augmentations based on procedurally generated artifacts and the direct use of real artifacts. Building upon the improved segmentation results, we observe that our approach leads to improvements of 22% in the F1-score for an evaluated detection problem. Having achieved these results with an example sensor, we expect increased robustness against artifacts in future applications.

Introduction

A key goal of image analysis is to automatically extract information contained in an image using a suitable algorithm [1]. The devices used for image acquisition are usually based on either charge-coupled device (CCD) sensors [2] or complementary metal-oxide-semiconductor (CMOS) sensors [3]. Although the specific properties of recording techniques differ, all types induce artifacts caused by the process of capturing images [4]. We refer to all image signal components that are not intended to be part of an image as artifacts. These artifacts impede automatic or human evaluation of recorded images, especially when they are similar to signals of interest, which can cause them to be falsely recognized as such. Artifacts should compromise the analysis of images as little as possible. Therefore, methods to reduce the influence of artifacts on an image are of particular interest [5]. The effects causing artifacts are called disturbances. These include, for example, instabilities of the recording devices and other connected electronics, environmental influence, or flaws in the preprocessing software.

Table 1. Overview of common artifact types in sensor images, their properties, sources, and examples for algorithmic reduction methods. Correlated artifacts are also called structured noise, and uncorrelated artifacts are called unstructured. Temporally changing artifacts can vary in each frame.
Uncorrelated (random noise):
- Shot noise [4,6] (source: environment); readout noise [6] (source: electronics); thermal noise [11] (source: environment, electronics); salt and pepper noise [7] (source: electronics); random telegraph noise [4] (source: electronics). Example reduction methods: classic filters (e.g., median filter) [7], bilateral filtering [8], neural networks [9], wavelet/Fourier filtering [10].

Correlated (structured):
- Temporal contrast/brightness inconsistencies [12] (source: electronics, environment, software). Example reduction methods: homomorphic filtering [13], stabilization algorithms [14], temporal filtering [12], neural networks [15].
- Line, stripe, wave, and ring artifacts [16,17] (source: electronics, environment, optics). Example reduction methods: wavelet/Fourier filtering [10], spatial filtering [16], neural networks [18].
- Compression artifacts [19] (source: software). Example reduction methods: bilateral filtering [8], fuzzy filtering [20], neural networks [19,21-23].
- Projective distortions [24] (source: optics). Example reduction methods: model-based calculations [25], neural networks [26,27].
- Out-of-focus effects [28,29] (source: optics). Example reduction methods: morphological filtering [30], neural networks [31,32].
- Fixed pattern noise [33,34] (source: electronics, environment, optics). Example reduction methods: reference imaging [33], neural networks [35].
- Aliasing [36] (source: software). Example reduction methods: anti-aliasing algorithms [36], neural networks [37].
- Rolling shutter effects [38] (source: electronics). Example reduction methods: neural networks [39].

Artifacts are visually recognizable in a variety of shapes and intensities. Table 1 shows common artifact types occurring in sensor images, their sources, and example algorithmic methods that can be used to reduce these artifacts. The set of example artifacts can be divided into correlated and uncorrelated signals. Uncorrelated artifacts, also called random noise, are characterized by the absence of clear, detectable structures. Often, they originate from the sensor instruments themselves due to electronic instabilities or environmental influence [4,6,11]. Artifacts that show recognizable structures in the temporal dimension, the spatial dimension, or both are referred to as correlated. In distinction to random noise, these are also called structured noise [40,41]. In terms of their temporal behavior, most of the correlated and the uncorrelated artifacts are temporally changing, making them difficult to detect and reduce. Beyond these differences between artifact types, it is worth noting that, in practice, a signal does not contain only a single type of artifact but combinations of them.

Image-related tasks like classification, segmentation, and object detection are increasingly solved using deep learning [42-44]. This holds, in particular, for the field of sensor imaging. Examples include astronomical imaging [45], autonomous driving [46], fluorescence microscopy [47], X-ray [48], magnetic resonance (MR) [49], computed tomography (CT) [50], and histological imaging [51]. While access to an arbitrarily large amount of data could be used to form all possible combinations of signals of interest and artifact signals during training, a common problem is the limited availability of data, particularly in medical imaging tasks [52]. It is caused by high time and material costs for recording examples and intensified by data privacy restrictions that create further hurdles for data collection [52]. Additionally, the annotation of images can be a time-consuming task requiring expert review [52]. For deep learning methods in sensor image analysis, it is therefore particularly desirable to develop approaches that deal with very limited data availability during the training stage.
As an example of a sensor affected by different disturbances, Section 3 describes the Plasmon-Assisted Microscopy of Nano-Objects (PAMONO) sensor, which has been the subject of several research questions [53-56] and served as a starting point for the research presented in this paper. It is affected by disturbances during image acquisition, resulting in varying artifact characteristics, some of which are shown in Figure 1. Therefore, it offers a well-suited data basis to evaluate methods for increased robustness against artifacts.

Motivated by the observation above, we propose a data-centric approach that aims at increasing the robustness of learning methods against image artifacts. We use the term data-centric to describe that only the training data is modified to maximize the performance of a learning procedure, while the existing model does not change. There is no deceleration or change in memory requirements during inference, as only the learned weights are adjusted. We present an approach based on generative adversarial networks (GANs) [57], which overlays images with realistic but synthetically generated artifacts during the training of a segmentation network. The GAN is trained with real images containing only artifacts and learns to generate an arbitrary number of new artifact images. We do not need additional annotations for our approach. As an example for our method, we evaluate our GAN approach on PAMONO sensor data. We find that the effect of artifacts on a segmentation task is reduced significantly. We also show that the GAN approach is superior to alternative, non-learning approaches in the evaluated segmentation task. For comparison, we employ a procedural generation of combined wave artifacts based on qualitative observations and the direct use of real artifact images from recorded datasets.

The structure of this paper is as follows. Section 2 reviews related methods for reducing artifacts in image signals and popular methods for generating synthetic images. Section 3 details the PAMONO sensor and its recorded data as the basis for evaluating the presented approaches. Section 4.1 describes our approach for an overlay composed of realistic but synthetic artifact patterns utilizing the StyleGAN2-ADA [58] architecture. For direct comparison, Sections 4.2 and 4.3 present methods for overlaying training images with real artifacts and the procedural generation of combined waves, respectively. We present the integration of our approach into experiments and the considered metrics in Section 5. The results are compared and discussed in Section 6. In the end, we give suggestions for future work in Section 7.

State of the Art

For the task of artifact reduction, examples of methods related to specific types of artifacts can be found in Table 1. It includes traditional as well as machine learning approaches. An overview focusing particularly on deep learning-based methods for image artifact removal is provided by Tian et al. [9]. It covers a wide range of approaches and structures them based on their methodological similarities. There are various traditional approaches such as Gaussian, median, and bilateral filters [7,8], homomorphic filtering [13], methods based on physical models [25], morphological filters [30], and Fourier- and wavelet-based filtering [10]. An early application of convolutional networks for image denoising was published by Jain and Seung [59].
The proposed strategy introduced a specific artifact removal network that outputs a clean image with reduced artifacts [59]. Since this learning strategy demonstrated its potential to reduce various artifacts, further work has followed this approach [60-62]. Disadvantages of these methods include additional computational costs, additional memory requirements, and, in some cases, the need for clean images without artifacts.

A different approach improves the robustness of an existing model against artifacts using augmentation methods [63]. The related methods are applied to an existing model by modifying or expanding the training data during the optimization process. Since these methods only change data but not architectures, we refer to them as data-centric. This characteristic has the advantage that the methods can be applied during training and do not require the modification of an existing algorithm. However, various existing augmentation methods have drawbacks that make them undesirable: they focus on uncorrelated noise [63], assume perfect artifacts [15], or rely on hand-crafted definitions for creating correlated artifacts [64]. In addition, reference images are rarely exploited. Reference images can be acquired without objects of interest and therefore contain only background and artifacts. They contain valuable information, especially for tasks with low data availability. We developed our approach to address these shortcomings. We exploit reference images and use both correlated and uncorrelated artifacts. Cubuk et al. [65] proposed AutoAugment, a method to learn sequences of augmentations from a set of parametrized operations to improve the training process for an underlying network. As our approach is comparable to an augmentation operation within AutoAugment, the methods do not form alternatives but are combinable.

For tasks with low availability of labeled training data, various approaches augment existing data with synthetic images using GANs [66-70]. For example, Frid-Adar et al. [66] use a GAN to synthesize new images for CT scan data of liver lesions. Han et al. [67] follow a similar objective by generating synthetic brain MR images. Sandfort et al. [68] employ a CycleGAN [71] to expand a dataset of CT scans with synthetic images. Hee et al. [69] use a conditional GAN to generate brain metastases at desired locations in synthetic MR images. The mentioned methods do not use reference images but only images containing signals of interest. In contrast, our approach also uses reference images to take advantage of this information. Recent developments show that GANs, as the state of the art for image synthesis, can be trained even with very limited amounts of data [58]. Driven by these findings, we make use of a StyleGAN2-ADA network [58] to generate realistic artifacts, which we use for the augmentation of existing training data.

PAMONO Sensor Image Streams

The following explanations characterize the images recorded with the Plasmon-Assisted Microscopy of Nano-Objects (PAMONO) sensor [53]. Since each recording of the device shows different types of dominant artifacts, this data serves as the basis for our evaluation. The PAMONO sensor employs the effect of surface plasmon resonance (SPR) [72] to make individual nanoparticles visible as bright spots in preprocessed images. These spots become more difficult to detect with an increasing quantity or intensity of artifacts in the images.
This functionality enables the use of the sensor as a rapid test for the presence of viruses and virus-like particles (VLPs) and for counting nanoparticles in a sample [73]. The sensor visualizes particles of interest using a gold foil with an antibody coating on one side. The foil is attached to a flow cell containing a liquid sample, while the opposite side reflects a laser beam directed towards it. When specific particles in a sample attach to the antibody coating, the reflective properties of the gold foil change at this region, and the particles become visible in the reflected signal. This setup provides indirect imaging for the downstream detection of nano-sized objects. Further explanations of the technical aspects and application scenarios, such as detecting viruses, can be found in the literature [53-56].

While a high degree of reliability is essential for detecting nanoparticles, recording with the PAMONO sensor is prone to disturbances originating from its high sensitivity to changes on the nanometer scale, temperature dependence, sensitivity to external impacts, and contaminations of the analyzed samples [74]. This results in random noise originating from the electronics and the environment, wave and line artifacts resulting from air bubbles and dirt particles in a sample, and significant global and local brightness differences due to environmental changes or the preprocessing. In addition, local damage to the coated gold can introduce line artifacts and fixed pattern noise. Therefore, an applied segmentation approach must cope with different types of artifacts. Figure 1 shows example images gathered with the PAMONO sensor containing different characteristics of artifacts. The intensities and occurring types can change for each experiment and also during one recording. Since tests with particles involve high material costs, the availability of the related images is low. In contrast, reference images showing only background and artifacts can be provided more efficiently. This property and the occurrence of various artifacts make the data acquired with the PAMONO sensor a well-suited example for evaluating our approach.

Methods

To increase the robustness against artifacts in the analysis of sensor images, we formally introduce our method. We assume an image $I_{D_j,t} \in [0, 1]^{X_{D_j} \times Y_{D_j}}$ at a discrete timestep $t$, originating from a data stream $D_j$ from the set of all image streams $\mathcal{D}$, to be composed of different signals in an additive signal model

$$I_{D_j,t} = P_{D_j,t} + B_{D_j,t} + C_{D_j,t} + U_{D_j,t}.$$

The signal consists of a particle signal $P_{D_j,t}$, a background $B_{D_j,t}$, which is constant for all positions $(x, y)$ within a single image, a correlated artifact signal $C_{D_j,t}$, and uncorrelated artifacts $U_{D_j,t}$. Both artifact components can contain values outside of $[0, 1]$. For this work, we use images $I_{D_j,t}$ which are already preprocessed with a sliding window method presented in previous work [56]. This preprocessing enhances the visibility of particle signals using temporal information for each image pixel, followed by a dynamic contrast enhancement. Figure 1 shows example images $I_{D_j,t}$ for different datasets $D_j$ and timesteps $t$ where $C_{D_j,t}$ predominates, with different artifact characteristics in each image. The goal here is to highlight all image positions containing a particle. Therefore, we want to find a function $f$ that realizes a semantic segmentation [75], i.e., learns a mapping from images $I_{D_j,t}$ onto a binary segmentation map.
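To make the additive signal model concrete, the following minimal sketch composes a synthetic frame from the four components. The particle blob, wave pattern, and noise statistics used here are illustrative assumptions, not the sensor's actual signal characteristics.

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y = 128, 128

# Background B: constant intensity across the whole frame.
B = np.full((Y, X), 0.5)

# Particle signal P: a single bright Gaussian blob (placeholder).
yy, xx = np.mgrid[0:Y, 0:X]
P = 0.3 * np.exp(-(((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 3.0 ** 2)))

# Correlated artifact C: a low-frequency wave pattern (placeholder).
C = 0.1 * np.sin(xx / 9.0 + yy / 17.0)

# Uncorrelated artifact U: zero-centered random noise.
U = rng.normal(0.0, 0.02, size=(Y, X))

# Additive model; C and U may leave [0, 1], so the composed image is clipped.
I = np.clip(P + B + C + U, 0.0, 1.0)
```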
In order for $f$ to achieve good results on a multitude of different datasets $D_j$, a broad set of artifacts has to be handled. Our approach expands a low-artifact data basis by augmenting the training data with additional artifacts. We make use of datasets $F_k \in \mathcal{F}$ without particles of interest, so that a contained image can be written as

$$I_{F_k,t} = B_{F_k,t} + C_{F_k,t} + U_{F_k,t}. \quad (4)$$

Such images can be created without the need for test objects and serve as a basis for learning realistic characteristics of artifact patterns. Having identified that wave-like artifacts are a factor that can heavily disturb detection methods, we also developed a method to generate wave-like artifacts directly, to prepare the trained network towards being robust against possible correlated artifacts. This method serves as a basis for comparison to the presented GAN-based approach.

Artifact Overlays Based on Synthetic Artifacts

From an abstract perspective, we overlay an image containing object signals of interest with a composite synthetic noise signal to optimize a segmentation model. Figure 2 shows an overview of this procedure. The upper part of the system shows the learning of artifact characteristics from images without object signals. Tiles are extracted from a recorded image and used for training a GAN. The GAN learns to generate new tiles, which are then combined into an artifact image. The lower part shows the overlay of a recording with a composition of generated artifact tiles. In detail, we augment each training image $I_{D_j,t}$ with structured artifacts $C^{(overlay)}$ and uncorrelated artifacts $U^{(overlay)}$. We combine both types into a single artifact signal

$$S^{(overlay)} = C^{(overlay)} + U^{(overlay)}$$

and use it to create an augmented image

$$I^{(aug)}_{D_j,t} = I_{D_j,t} + S^{(overlay)}. \quad (6)$$

In order to extract artifact signals from an image, we solve the assumed signal model of Equation (4) for the artifact components $C_{F_k,t} + U_{F_k,t}$. Since we are only interested in the contained artifact signals, we use images without particle signals. Therefore, the only remaining unknown signal is the constant background signal. We assume that the artifact and noise signals are zero-centered. Consequently, we approximate the background as the mean intensity value of the full image. The artifact signal can thus be formulated as

$$S^{(overlay)}_{F_k,t} = I_{F_k,t} - B_{F_k,t}$$

for further use as an overlay. With these artifacts, the original images from a dataset $D_j$ can be augmented according to Equation (6); a minimal code sketch of this step follows below.

Despite the reduced costs of producing images without particles for real artifact tiles, the available images are still limited. In order to have access to an unlimited stream of new and distinct artifacts, we propose the synthetic generation of new images $I_{F_k,t}$. With this, we can provide an arbitrary number of synthetic but realistic-looking artifact patterns. Generative adversarial networks (GANs) are currently the state-of-the-art method for image synthesis [58]. GANs use a generator model $G$ to mimic the distribution of a set of real images, optimized with feedback from a discriminator model $D$. The discriminator is optimized to distinguish between real and synthetic images. As input for training the GAN, we use real images from a dataset $F_k$. In this work, we employ StyleGAN2-ADA [58], which is specifically designed for optimization with limited data. After optimizing the generative network, the generator function is used to create an arbitrary number of new artifact images. The generated artifacts can be smaller than the original image $I_{D_j,t}$. In this case, larger artifact images can be composed of multiple smaller ones.
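The extraction and overlay step of Equations (4) and (6) can be sketched in a few lines, assuming reference frames are available as float arrays in [0, 1]. The zero-centered-artifact assumption lets the per-image mean stand in for the constant background; the random stand-in arrays at the bottom are placeholders for real data.

```python
import numpy as np

def extract_artifact_overlay(reference_image: np.ndarray) -> np.ndarray:
    """Solve I = B + C + U for the artifact signal S = C + U.

    Assumes the reference image contains no particle signal and that the
    artifacts are zero-centered, so the background B is approximated by
    the mean intensity of the full image.
    """
    background = reference_image.mean()
    return reference_image - background

def augment(image: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Overlay a training image with an extracted (or generated) artifact."""
    return np.clip(image + overlay, 0.0, 1.0)

# Usage with random stand-ins for a reference frame and a training frame.
rng = np.random.default_rng(1)
reference = np.clip(0.5 + rng.normal(0, 0.05, (128, 128)), 0, 1)
training = np.clip(0.5 + rng.normal(0, 0.01, (128, 128)), 0, 1)
augmented = augment(training, extract_artifact_overlay(reference))
```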
A set of artifact tiles is generated, where each artifact tile $A^{(overlay)}_k$ is extracted from a synthetically generated image $I^{(overlay)}_k$ with side lengths $v$ and $w$. The tiles are then composed into a single artifact of the needed size. For each training image, a new set $A^{(overlays)}$ is dynamically generated.

Real Artifacts as Overlays

For a direct comparison, we apply real artifacts directly to the training images instead of applying synthetic artifacts. To create overlays from recorded data directly, we modify the set of artifacts $A^{(overlays)}$ so that it does not originate from the GAN but from random cutouts of real images. We make use of non-annotated images which do not contain signals of objects of interest but are still affected by artifacts. Unlike in the GAN-based approach, the available data is directly limited by the original set of input images. This allows a meaningful comparison of the effects of learned artifacts with the direct utilization of real artifacts.

Procedurally Generated Artifact Signals

We present another approach for generating artifact patterns, which is based on the procedural generation of artifacts in an attempt to simulate real artifacts in the form of imperfect waves superimposed over an image. In our observations, we found sine waves to be suitable approximations of actually recorded artifacts. These calculations are rule-based and can be varied using random parameter values. Given an image $I$ with side lengths $X$ and $Y$, $n_w$ waves are generated and added to this image for training. For a single sine wave centered around a point $c_w = (c_{w_x}, c_{w_y})$, we determine the amplitude

$$h(x, y, c_w, \sigma, \omega) = \sin(d(x, y, c_w) \cdot \sigma + \omega) \quad (12)$$

at every image position $x \in \{1, \ldots, X\}$, $y \in \{1, \ldots, Y\}$ using a frequency parameter $\sigma$, a phase shift $\omega$, and a distance $d(x, y, c_w)$ of the position to the wave center. We observed that the intensities of waves in an image are often not constant over the entire surface, so a term is included to add a fading effect starting from an independent center point $c_f$, from which the intensity decreases with a rate $\beta \in [0, 1]$. This term is applied to the original wave function $h$ to receive a single fading wave. Finally, all $n_w$ waves are composed and added to the image $I$ to simulate a combination of different vanishing waves, using sets of wave centers $C_w = \{c_{w_1}, \ldots, c_{w_{n_w}}\}$, fade centers $C_f = \{c_{f_1}, \ldots, c_{f_{n_w}}\}$, frequency parameters $S = \{\sigma_1, \ldots, \sigma_{n_w}\}$, phase shifts $W = \{\omega_1, \ldots, \omega_{n_w}\}$, and fade rates $B = \{\beta_1, \ldots, \beta_{n_w}\}$. The influence of the waves in the resulting image is controlled via the wave strength factor $\gamma$. The parameter values for each wave are randomly chosen from a restricted interval. Figure 4 shows examples of randomly generated wave artifacts added to a low-artifact image; a code sketch of this generation follows at the end of this section. The resulting wave artifacts approximate the visual appearance of real artifacts with parameters drawn from a manually defined interval. Although it is possible to find fitting intervals that result in a distribution similar to real artifacts, a procedural generation of artifacts requires the manual definition of the generating function and manual tuning to the artifact characteristics at hand.

Experiments

We evaluate our GAN-based method by applying it to image streams recorded with the PAMONO sensor described in Section 3. Individual image streams show different artifacts, so this data offers a well-suited opportunity to evaluate the approach. The goal is to find a model that solves the segmentation of particles, as formulated in Section 4.
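The following is a minimal sketch of the fading-wave generation described above. Two details are our assumptions where the text leaves them open: $d$ is taken as the Euclidean distance to the wave center, and the fade is modeled as a linear decrease with distance to the fade center, clipped at zero. The parameter intervals are illustrative, not the paper's tuned values.

```python
import numpy as np

def fading_waves(X, Y, n_w, gamma, rng):
    """Compose n_w randomly parametrized, fading sine waves into one overlay.

    Assumptions: d is the Euclidean distance to the wave center c_w, and the
    fade term max(0, 1 - beta * d_f) decreases linearly with the distance to
    an independent fade center c_f. Parameter intervals are illustrative.
    """
    yy, xx = np.mgrid[1:Y + 1, 1:X + 1]
    overlay = np.zeros((Y, X))
    for _ in range(n_w):
        c_w = rng.uniform(0, [X, Y])          # wave center (c_wx, c_wy)
        c_f = rng.uniform(0, [X, Y])          # independent fade center
        sigma = rng.uniform(0.2, 0.8)         # frequency parameter
        omega = rng.uniform(0, 2 * np.pi)     # phase shift
        beta = rng.uniform(0.005, 0.02)       # fade rate in [0, 1]
        d_w = np.hypot(xx - c_w[0], yy - c_w[1])
        d_f = np.hypot(xx - c_f[0], yy - c_f[1])
        wave = np.sin(d_w * sigma + omega)            # Equation (12)
        fade = np.clip(1.0 - beta * d_f, 0.0, 1.0)    # assumed linear fade
        overlay += wave * fade
    return gamma * overlay

# Usage: add four faded waves to a flat image and clip to [0, 1].
rng = np.random.default_rng(2)
image = np.full((256, 256), 0.5)
augmented = np.clip(image + fading_waves(256, 256, n_w=4, gamma=0.05, rng=rng), 0, 1)
```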
Particles should be easily distinguishable from other image parts in the resulting segmentation, so we employ a blob detection based on Difference of Gaussians (DoG) [76] features for particle detection. To focus the evaluation on the augmentations only, we employ a plain 5-layer U-Net [77] with 16 filters in the first layer. We make no changes to this architecture during our experiments and only change the data itself. In this way, we can evaluate the effectiveness of our proposed approach and compare it directly to the other introduced methods. This provides a concrete implementation of the abstract detection network shown in Figure 2. The different approaches are compared to each other based on correctly detected nanoparticles.

We utilize the dice loss [78] in combination with the Adam [79] optimizer to train the U-Net. An initial learning rate of $3 \times 10^{-5}$ is halved after every 15 epochs with no improvement in the dice loss on designated validation datasets. We end the training after 30 epochs with no improvement. For this work, 23 annotated image streams containing particles of interest provide 30,782 images in total. Only one of these datasets, with low intensities of artifacts and well visible particle regions, containing 500 images, is used for training. We employ five datasets as validation data. The remaining datasets are used as test data after the training is completed. Due to the preprocessing, each particle contained in the image streams can be seen not only in one but in several frames. We connect the particle locations on individual images to traces afterward. This means that sufficiently overlapping regions on consecutive frames are combined into one particle, which is especially important for counting particles to determine the viral load in a sample [56].

For measuring run times, an Nvidia Geforce GTX 1080 GPU is used. Random cutouts with side lengths of 128 pixels from 1157 images originating from a single reference image stream are used for training the GAN. About 5 GB of video memory are allocated. Using a batch size of 16, around 38 h are needed for training a StyleGAN2-ADA network consisting of a generator with $23 \times 10^6$ parameters and a discriminator with $24 \times 10^6$ parameters. The training times for the U-Net lie between 90 min with no augmentation and up to 360 min for the GAN-based augmentations. For better comparability, the same dataset used for training the GAN is used for overlaying images with real artifacts.

To also compare the GAN approach to a direct and simple augmentation, we apply a variation of image sizes relative to the sizes of particle regions in the samples. For each training dataset $D_j$, the median surface $s_{D_j,med}$ of annotated particle regions in the dataset is calculated; the overall minimum size $s_{min}$ and the maximum size $s_{max}$ are determined analogously. The median operator is used to determine sizes within a dataset in order to compensate for possible outliers caused by manual annotation. By restricting the random factor $f_{D_j}$ used to scale both sides of an image separately to an interval derived from $s_{min}$, $s_{D_j,med}$, and $s_{max}$ for a dataset $D_j$, the scaled images cover the range of particle sizes seen as plausible based on the available annotations. In each training step, the side lengths $u$ and $v$ of a training image $I_{D_j,t}$ are scaled by a factor $f_{D_j} \in F_{D_j}$ to $u \cdot f_{D_j}$ and $v \cdot f_{D_j}$.
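A minimal sketch of this size augmentation follows. The scale interval $[\sqrt{s_{min}/s_{D_j,med}}, \sqrt{s_{max}/s_{D_j,med}}]$ is our assumed reconstruction where the exact interval was not stated: scaling side lengths by $f$ scales surfaces by $f^2$, so this maps the dataset's median particle surface onto the plausible range of annotated surfaces.

```python
import numpy as np
from skimage.transform import resize

def random_size_augment(image, mask, s_min, s_med, s_max, rng):
    """Scale image and mask by random factors derived from particle surfaces.

    The interval [sqrt(s_min / s_med), sqrt(s_max / s_med)] is an assumption,
    chosen so that the scaled median particle surface stays within the range
    of annotated surfaces. Both side lengths are scaled separately.
    """
    lo, hi = np.sqrt(s_min / s_med), np.sqrt(s_max / s_med)
    f_u, f_v = rng.uniform(lo, hi, size=2)
    u, v = image.shape
    new_shape = (max(1, round(u * f_u)), max(1, round(v * f_v)))
    image_s = resize(image, new_shape, order=1, anti_aliasing=True)
    mask_s = resize(mask.astype(float), new_shape, order=0,
                    anti_aliasing=False) > 0.5  # nearest-neighbor for labels
    return image_s, mask_s
```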
Since this approach presents a simple strategy that has proven useful in combination with more complex approaches in preliminary tests, it is also applied in the case of procedural wave generation, real artifact overlays, and GAN-based overlays.

For each evaluated configuration of augmentations, we consider two measures. The first measure is the F1-score [80] of particle traces, which uses the number of true positives (tp), false positives (fp), and false negatives (fn) to indicate the extent to which the predicted traces and the annotations match. A predicted trace is seen as matching if its bounding box overlaps significantly with the box of an annotated trace. As two overlapping predictions can both be seen as true positives when overlapping with one annotated trace, this measure focuses on the accuracy of particle locations instead of matching trace counts. The second measure is the count exactness [56]

$$e(n_a, n_p) = 1 - \frac{|n_a - n_p|}{\max(n_a, n_p)} \quad (20)$$

which compares the number of predicted traces $n_p$ with the number of annotated traces $n_a$. As the count exactness does not consider where the single traces are located, false positives and false negatives can misleadingly balance each other out. Nevertheless, it is a simple and practice-oriented measure that is especially of interest in real use case scenarios, where an expert can interpret this information based on domain knowledge. In PAMONO sensor data, the determined particle count could be compared to expected concentrations of virus particles related to an infection of interest. We execute each training configuration three times to reduce the effect of outliers. The model with the median F1-score is selected for evaluating all presented metrics.

We compare the proposed GAN-based approach in Table 2 with the alternatives based on F1-scores and count exactness values related to particle traces. The results vary heavily for different datasets depending on the intensities and prevalent types of artifacts in the contained images. Therefore, we also show results for datasets split into different groups of artifacts. A comparison broken down by the qualitative type of dominant artifacts is given in Table 3. We also compare the approaches using the binary distinction between samples containing particles of interest and samples free of them. The exact particle counts and locations are less relevant here. Instead, an effective separation between these two groups is sought, for which a low number of false positives in particle-free samples is essential. Results for samples of this type are reported in Table 4, where the counts of predicted particles per image are compared for models trained with the different approaches. For this purpose, 12 particle-free datasets with 10,384 images in total, showing diverse artifact types and intensities, are analyzed.

Discussion

Aiming at high robustness of a learned segmentation against imaging artifacts, our approach using GANs to generate synthetic artifacts proves to be the most effective. Compared to the version with no augmentation, as shown in Table 2, this approach yields improvements of 22% in the F1-score, 26% in the average count exactness, and even greater improvements in the related minimum values. Table 3 shows that the results improve more with stronger visible artifacts and correlation within these. The GAN approach increases the F1-score by 63% and the average count exactness by 61% for datasets with wave-like artifacts.
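Both measures are straightforward to implement; the sketch below follows Equation (20) directly. The convention of returning 1.0 when both counts are zero (two empty samples agree perfectly) is our assumption, as the formula is undefined there.

```python
def count_exactness(n_a: int, n_p: int) -> float:
    """Count exactness, Equation (20): 1 - |n_a - n_p| / max(n_a, n_p)."""
    if max(n_a, n_p) == 0:
        return 1.0  # assumed convention: two empty samples match perfectly
    return 1.0 - abs(n_a - n_p) / max(n_a, n_p)

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1-score over matched particle traces."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# Usage: 18 annotated traces, 22 predicted, of which 16 match annotations.
print(count_exactness(18, 22))   # 0.818...
print(f1_score(tp=16, fp=6, fn=2))
```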
In the task of searching for particles in particle-free samples, this approach improves the average number of false positive particle traces from 0.87 to 0.02 per image, with the worst-performing dataset having only 0.05 false positive traces per image. When comparing the GAN-based approach with extracting artifacts directly from images, the span between the worst and best values is smaller. The augmentation by superimposing wave artifacts based on a hand-crafted, procedural function is approximately on par with the augmentation with real artifacts when considering average scores. However, the minimum values show a slight improvement, which indicates greater stability of the detection after the appropriate training. The real and the procedurally generated artifacts improve the F1-score by 14% compared to training without augmentations. This shows that the model benefits significantly from augmentation with correlated artifacts.

Viewing the results in Table 2, it is noticeable that the direct augmentation, i.e., the random size augmentation based on the particle sizes present in the training dataset, does not improve the F1-score and the count exactness for datasets containing particles. Compared to the basic version without augmentation, there is even a slight deterioration in the F1-score. If the evaluation is expanded to the datasets not containing particles of interest, the impression is different. Table 4 shows that the average rate of false positives per image can be reduced by 94.5% by just applying direct size augmentations.

All in all, the augmentation by overlaying with artifacts generated by our GAN-based approach achieves the most significant improvements, both in the average and minimum values. The increase of the minimum values can be seen as better robustness against artifacts that do not occur in the training data. At the same time, despite the increased training time, the advantage of not having to define and adjust a function description by hand can be noted. This shows that the GAN-based generation of artifact images for data augmentation can be a worthwhile improvement over classic augmentations in image analysis. This holds especially when the exact artifact patterns can only be described with great effort, for example, when the application environment of the used sensor changes frequently while a lack of training data makes their determination difficult.

Outlook

Since our approach proved capable of increasing the robustness of a spatial learning system against image artifacts, the exploitation of temporal correlations can be investigated. In image data streams, objects of interest and artifact patterns are time-dependent in most cases, so generating time-consistent artifacts could further improve the results for a downstream task. It needs to be considered that, while the complexity of the generation task increases, fewer spatiotemporal training samples can be formed from a set of images. Despite the potential problems, evaluating a generation approach incorporating the temporal dimension can further increase the robustness of a downstream, spatiotemporal image analysis. Our approach demonstrates that it mitigates the effects of artifacts in images of the PAMONO sensor. Further work should evaluate this method for images from other sensors. The approach has the potential to be applied to other sensors with little customization.

Author Contributions: A.R. and K.W.
conducted the investigation, development and design of methodology, analyzed literature and wrote the paper. K.W. curated the data. F.W. supervised the process and reviewed the paper. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Example datasets with samples containing particles of interest and samples without such particles can be found at https://graphics-data.cs.tu-dortmund.de/docs/publications/panomo/ (accessed on 5 October 2021).
The repercussions of digital bullying on social media users

Objective: This study aims to examine the repercussions of digital bullying on social media users, especially among university students in Saudi Arabia. Methods: It adopts a descriptive approach based on a social survey method with a sample of 640 male and female students from selected universities. A questionnaire was used to collect the data and to measure the repercussions of digital bullying on the victims, their families, and society. Results: The findings reveal that most of the respondents agree that digital bullying has negative consequences for all the stakeholders involved. The results also indicate that female students are more aware of the repercussions of digital bullying than male students. Conclusion: The study recommends enhancing public awareness through organizing conferences, seminars, and workshops on the issue of digital bullying, and implementing and enforcing strict laws and penalties to punish the perpetrators and to prevent and reduce the harms of digital bullying.

Introduction

The field of information and communication technology has undergone rapid and significant changes, especially with the advent of the information revolution. This has led to various transformations in the economic, political, and social spheres, as information can be transmitted quickly and accurately across different domains (Kumar and Khare, 2022). It has also enabled a wide range of communication and interaction opportunities through the creation of social networks and online platforms, which have become popular among various segments of society, especially the youth. These platforms have also influenced the cultural and cognitive aspects of individuals and communities.

Digital bullying is a widespread issue that has a negative impact on the social media ecosystem (John et al., 2018). Although the terms are frequently used interchangeably, it is important to remember that cyberbullying and digital bullying refer to different forms of online abuse. Cyberbullying is a type of premeditated and targeted abuse that generally focuses on chosen targets (Raskauskas and Huynh, 2015). Digital bullying, on the other hand, refers to a broader spectrum of unpleasant online activities, including a variety of undesirable behaviors such as harassment, defamation, the distribution of false information, and even impersonation (Chun et al., 2020). This distinction underscores how profound and intricate the issues confronting users of social networking sites are. Understanding the many types of harm experienced by people in the constantly expanding realm of online contacts necessitates exploring the distinctions between cyberbullying and digital bullying. Social networks have many benefits for both individuals and society, as they facilitate the exchange of culture and knowledge, access to information, and the use of educational and professional resources. They also play a role in social interaction, job creation, marketing, and trade, as well as in forming virtual relationships that replace conventional social ties within a virtual social system consisting of individuals, groups, and organizations that share various personal, social, psychological, ideological, religious, and other factors (Abdulrahman, 2013).
However, social networks also have some drawbacks and risks, as they can be misused for destructive, abusive, and violent purposes, such as cybercrime, hacking, terrorism, addiction, rumor-mongering, social isolation, psychological problems, the illusion of virtual communication, privacy violations, and so on (Zaidi and Banai, 2022). One of the negative phenomena that occurs on social media is digital bullying, which is the focus of this study.

Despite the plethora of academic research and literature that has investigated many elements of social networks and the accompanying risks, there is an urgent need for extensive investigations into the specific topic of cyberbullying. Previous research, which has made significant contributions, has explained the larger difficulties faced by social networks (Ioannou et al., 2018; Chan et al., 2021; Ademiluyi et al., 2022). However, in the contemporary research environment, it is critical to pay special attention to the complex dynamics, fundamental causes, and viable responses to cyberbullying.

The investigation of digital bullying focuses mostly on social media users, as these platforms are the primary venues for the incidence of such abuse. By focusing on this specific demographic, it is possible to apply accurate and personalized interventions (Abarna et al., 2023). Given the many dynamics and hazards connected with various online activities, including all internet users in the area of study might reduce the efficacy of such efforts. Digital bullying is a unique and dynamic problem that needs specialist solutions and methods to protect people's well-being, particularly young people who use social media extensively (Lamba et al., 2016; Chen and Luppicini, 2021; Nasla et al., 2021).

The necessity to address challenges peculiar to Saudi Arabia's environment prompted researchers to investigate the phenomenon of cyberbullying in the country. The Kingdom of Saudi Arabia has swiftly embraced information and communication technology (ICT), with both positive and negative consequences (Alzahrani, 2020). Digital bullying has become a major cause of concern, negatively impacting those who use social media platforms in unique and significant ways (Fati, 2021). This study intends to investigate the specific issues experienced by media users in the country in order to fill a research gap and provide insights that may help develop interventions to address the problem of cyberbullying in the Saudi Arabian setting.

The current study aims to provide a more comprehensive and in-depth examination of digital bullying, hence addressing the complexities underlying this prevalent phenomenon. While previous research has established a foundation for understanding the whole range of risks connected with social networks, the current study intends to focus primarily on digital bullying (Jabeen and Treur, 2018). The study's purpose is to make a substantial contribution to the establishment of preventative measures, welfare programmes, and regulations that can effectively reduce the negative impacts of cyberbullying. The objective of this research is to create a more secure and productive environment for everyone who uses the internet.
Study problem

The phenomenon of bullying has traditionally involved verbal, physical, or social forms of aggression, such as name-calling, beating, or isolating someone from social activities. However, with the advancement of information and communication technology and the emergence of cyberspace and various online platforms that form a large-scale virtual community for many people (such as Facebook, Twitter, Instagram, Telegram, and chat rooms), a new type of bullying has arisen: digital bullying. This is the most recent form of bullying, relying on technological means, and thus the confrontation between the bully and the victim has shifted from face-to-face to online in a virtual environment. Since the online space has no geographical boundaries for its participants, it is possible for someone to receive abuse beyond their real community and to experience it in the virtual world. Digital bullying often undermines the dignity of users in a publicly visible way (Patchin and Hinduja, 2017), and other participants can join in the abuse and ridicule by responding to and engaging with the offensive content through negative comments and reposts (Doll et al., 2017).

According to statistics from the United Nations Children's Fund (UNICEF), 90% of young people around the world have experienced some form of bullying, either psychological or physical (Al-Qaddouri and Abdelkader, 2020). Scientific research also suggests that 7 out of 10 young people have been exposed to online abuse at some point, often for psychological and social reasons directed against others with certain characteristics such as race, religion, or special needs (Abu Ghazaleh, 2018). Everyone in society has the right to live with equal rights without discrimination (Beran, 2018), and it is important to encourage young people to express themselves freely without harming others and to promote digital citizenship to contribute to a global society (Cook et al., 2019).

Therefore, digital bullying has attracted great attention from researchers in the educational, psychological, and social fields due to its increase and spread in recent decades, driven by technological progress and young people's growing use of modern technology tools and social media applications. This has resulted in the reproduction of bullying through the online space, a negative phenomenon that entails many adverse impacts on various segments of society, whether psychological, emotional, social, or academic (Pentan and Al-Asmar, 2019). It affects not only the bully and the victim but also extends to the victim's family and society as a whole. Bullying can change the behavior of the individual victim from a normal person to a person with behavioral deviation, adopted as a defense against the bullying shown by others towards him. It also reflects on his personal and social life and affects his interactions and family and social relations, as well as the achievement of the goals of the group and society alike. Therefore, it is important to study it in a scientific manner to reach proposals that aim to address and reduce it (Al-Sayed, 2020). Hence, the problem of this research lies in identifying the impacts of digital bullying on social media users by answering the following questions:

1. What are the forms of digital bullying among social media users?
2. What are the causes of digital bullying from the perspective of social media users?
3. What are the impacts of digital bullying on victims, their families, and society?
4.
Do respondents' attitudes towards digital bullying differ according to gender and monthly income?

The objectives of the study

This study seeks to explore the repercussions of digital bullying on social media users (victims, their families, and society) and to propose solutions that can mitigate the negative impacts on the victim, his or her family, and society.

The importance of the study

• Digital bullying is a pressing issue at the local, regional, and global levels that demands attention due to its effects on all segments of society, especially the youth.
• The study contributes to the knowledge base on digital bullying through social media platforms by enriching the Arab literature with information about digital bullying in the Saudi context.
• This study is important because it focuses on the youth category, which is a crucial stage for developing the social, physical, and psychological skills that are essential for the formation of human personality. Young people are also the main force in building societies, so it is vital to understand and address the challenges they face.
• The findings of the study are beneficial for policymakers to identify the motives and causes of the rise of digital bullying and its adverse effects on victims, their families, and society as a whole, and to devise strategies that can tackle this problem.

Theoretical framework

Digital development has brought rapid advances in technological means, but the progress achieved by information technology in recent decades has also brought multiple risks and challenges, including digital bullying (Halachová, 2014). This has become one of the most serious threats to young people in the digital space and raises many concerns in society (Cheng et al., 2018).

The research is based on a broad theoretical framework that asserts that a range of elements, such as social, psychological, economic, and technological variables, shape digital bullying, particularly in the setting of social media. The issue is driven by the rapid growth and thorough integration of information and communication technology (ICT). This theoretical position contends that the prevalence of cyberbullying is influenced not just by individual characteristics, but also by broader cultural, economic, and technological factors. This research investigates the complicated roots of cyberbullying, the different forms it may take, and the far-reaching consequences it has on the targets, their families, and society as a whole.

Characteristics of digital bullying

Digital bullying differs from other forms of bullying in the degree of danger it poses, because the bully can erase their traces and hide their identity. Bullies often use fake names and identities and therefore cannot be identified, which encourages them to continue digital bullying without fear. Moreover, bullies do not see the direct impact of the harm they cause, which neither deters them from bullying nor makes them feel emotional remorse (Guan et al., 2016). Digital bullying can also occur anywhere, and a bully can reach a vast audience and access and exploit the information of targeted victims (Zubaidi and Tariq, 2020).
Forms of digital bullying

Due to the development and expansion of the use of the Internet in various transactions, there are multiple forms and types of digital bullying, as follows:

• Electronic harassment: repeatedly sending abusive, insulting, or threatening messages via email.
• Defamation: sending false information or spreading rumors about a person with the aim of harming him or her.
• Impersonation: hacking someone's personal account and pretending to be him or her in order to send or post electronic materials that trap or discredit this person.
• Disclosure: sharing someone's secrets, embarrassing information, or pictures and posting them on the internet.
• Deception: tricking someone into revealing secrets, embarrassing information, or pictures in a situation the person would not want others to see, and then posting them on the Internet or sending them to others.
• Harassment and extortion: repeatedly sending nasty and insulting messages through multiple electronic communication channels to create intense fear in the other party.
• Exclusion: deliberately and cruelly excluding someone from an online group (Aboulela, 2017).

Causes of digital bullying

The phenomenon of digital bullying is attributed to a set of causes, which we review as follows:

• Social causes: disruption of social and family relations. Digital bullying may result from deficiencies or defects in the family structure, such as family disintegration problems, which take various forms such as separation, divorce, continuous disputes, the absence of one of the parents from the family, parents' ignorance of socialization methods, harsh treatment, violence within the home, and the inability to control the behavior of children (Amer, 2019). Frequent exposure to physical harm and harsh treatment in the home leads to children's tendency towards deviant and delinquent behavior (Hinduja and Patchin, 2006).
• Psychological causes: psychological factors play a major role in shaping bullying behaviors, as they may result from subjective motives stemming from the bully's personality, which is characterized by aggression and authoritarianism, or a strong physical build that pushes him or her to show strength; irritability, recklessness, and weak religious scruples; the desire to surpass the complexities of technical means; a passion for collecting and seizing information; and envy, jealousy, and the desire to take revenge on others (Abu Ali, 2018).
In addition, behavioral disorders, imbalances in the personal structure of the individual, and psychological, organic, and mental illnesses generally affect the human personality and behavior because of the mental, psychological, or functional disorders that occur in the individual; this means that there is a relationship between normal and deviant behavior and the health and psychological state of the individual (Al-Tarif, 2012).
• Economic causes: economic conditions play a role in the occurrence of bullying, as the bully may feel empowered and in control due to a high economic level. The opposite is also possible: belonging to a poor class and material need may cause a sense of inferiority, frustration, and weakness, so the bully practices bullying to vent his or her feelings (Abdulaziz, 2017).
• Media and technological causes: the scientific, technological, and cognitive revolution has made it easy for children, adolescents, and adults to browse social networking sites, watch conflicts, quarrels, and videos that contain many scenes of violence, watch horror movies, and play widespread electronic games that contain many scenes of violence and abuse, which they then try to imitate in their real lives (Al-Ammar, 2016). This implies that violence is normalized as a way of claiming rights, a claim supported by a previous study (Mohammed, 2012). However, there are also technical motives, such as the desire to show technical superiority and the passion for collecting and accessing information. Moreover, there is a lack of information security awareness programs to prevent digital bullying, and the laws that deter it are either weak or poorly enforced.

The repercussions of digital bullying

Digital bullying is closely related to everything that occurs on social media platforms, whether through their content or through the communication technologies they use. The language of violence has become the dominant mode of communication and interaction among young people with their friends and others, which ultimately undermines the social network and causes various social and psychological problems that affect not only the victim, but also the victim's family and society as a whole. This has an impact on the security system of society (Zaidi and Banai, 2022). We will discuss the repercussions of digital bullying on the bully, the victim, their family, and society as follows:

• The repercussions of digital bullying on the victim: bullying has multiple negative consequences for victims, as it may lead to long-term psychological, emotional, and behavioral problems, such as depression, loneliness, isolation, anxiety, addiction, and self-harm. The victim becomes ostracized and unwanted, and may suffer low academic achievement due to dropping out of school, frequent absenteeism, or escaping from school out of fear or distress. Furthermore, poor social relations and lack of trust in others make victims more susceptible to exploitation and lacking in self-assertion skills. They may also experience several psychosomatic symptoms such as headache and abdominal pain. Some may resort to suicide as a way to escape from their suffering. Repeated bullying has long-term adverse effects on victims that last for years. Victims of bullying in their early years are more prone to depression and low self-esteem compared to their peers who have not been bullied (Penis, 2020). The victim may also adopt aggressive behavior and bullying as a result of their exposure to it. The victim's withdrawal from
social activities in their social environment may increase until they become silent and isolated. They may resort to suicide, as studies have shown that the number of suicide victims is constantly increasing due to bullying (Mohammad, 2020).
• The repercussions of bullying on bullies: bullying is not only an isolating behavior for its perpetrators, but also part of an antisocial pattern that breaks or weakens the rules that govern society. Bullies are willing to engage in unacceptable social behavior such as assaulting other people's property, shoplifting, skipping school, and frequent drug use. The effects of bullying on bullies can be presented as outcomes of their behavior in the following points: denial of education, expulsion from school, drug addiction, aggression and involvement in criminal acts, legal violations, constant conflicts with others, vandalism and dropping out of school, and early sexual deviations (Mohammad, 2019).
• The repercussions of digital bullying on the victim's family: the phenomenon of bullying has many negative impacts on the families of the victims, as they suffer from the consequences of their child being exposed to bullying behavior that affects their health, psychological state, and social relations within their family, among friends, and in society. Sherri (2018) identifies many adverse effects on the victim's family, such as: the parents' feeling of helplessness to remedy or improve the situation; feeling lonely and isolated; being preoccupied with the circumstances their child is going through and neglecting their own health; feeling sad; and a feeling of failure due to their inability to protect their bullied child.

The family has a vital role in preventing digital bullying by teaching children how to defend themselves, following up on and listening to them, supporting them, working to build a strong personality for them, enhancing their self-confidence, dealing wisely and firmly when they are exposed to digital bullying, and striving to achieve the safe use of social media. The family also plays a role in raising children on human values and morals such as tolerance, equality, respect, love, and helping the weak, and in monitoring the different behaviors of children at an early age to find and correct wrong behaviors. The family can also take an active part in combating bullying by volunteering with local community institutions that are interested in combating digital bullying through programs and activities that contribute to raising awareness of the phenomenon, its negative effects, and ways to address it.
• The repercussions of digital bullying on society: digital bullying contributes to the weakening of social relations, the spread of hatred and animosity among members of society, and difficulty in social adaptation in school, work, or the social environment in general. It also leads to an increase in social problems, such as the problems facing security and educational institutions, including academic delay, escaping from school, drug addiction, and deviating from social values, norms, regulations, and laws through delinquency, crime, and suicide, which require allocating a budget to address and prevent these problems (Zaidi and Banai, 2022). The practice of aggression towards public property has clear economic effects as well as social ones. It leads to the waste of public money and delays the development plans that the state pursues to develop society and its facilities in various economic and developmental fields. When these plans meet obstacles that hinder progress, the economic impact becomes evident. The delay of plans, financially and temporally, is followed
by the delay of services that benefit members of society, due to the need to repair the damage caused by aggressive behavior to public facilities such as roads, schools, and entertainment places, which requires harnessing budgets and efforts to address the negative effects on society (Abu Ali, 2018).

Literature review

The repercussions and effects of digital bullying on social media users have been explored by various researchers in different contexts. This section reviews some of the relevant studies and provides a critical commentary on them. Qutb (2022) conducted a social survey to examine the concept of digital bullying among Saudi women at the undergraduate level, using a questionnaire for 788 participants. The study revealed that the main motives and social causes of digital bullying were related to external appearance and the personal content that women shared on social networks. The study suggested that awareness campaigns, legal sanctions, religious values, and self-regulation were important factors in preventing digital bullying. Moirs and Mehrezi (2022) investigated the nature, forms, and impacts of digital bullying, finding that the most common types of bullying were harassment, defamation, identity theft, disclosure of secrets, deception, exclusion, and electronic stalking. The study also reported that digital bullying led to various psychological, emotional, and behavioral problems for the victims, such as depression, loneliness, introversion, anxiety, addiction, self-harm, or suicide. Moreover, digital bullying affected the victims' trust in others, social relationships, participation in social activities, school attendance, academic performance, and aggression levels. The study proposed several strategies to reduce the prevalence of this phenomenon. Mahmoud (2021) explored the emergence of digital bullying and identified the groups most vulnerable to bullying, such as children with disabilities, learning difficulties, introversion, or physical differences. The study indicated that digital bullying had negative effects on the victims, such as spreading rumors, exclusion from the group, and psychological problems like frustration, depression, and psychosomatic symptoms that could lead to suicide. The study also discussed the causes of bullying, such as emotional deprivation, parental neglect, violent imitation, or repression in the environment. The study recommended designing appropriate programs and guiding parents on how to deal with children (Mahmoud, 2021). Ben Dada and Karim (2021) examined the manifestations of digital bullying among university students, identifying five forms of digital bullying among this group: exclusion, sexual harassment, inconvenience and privacy violation, insult and threat, and mockery and distortion. The study emphasized the need for developing preventive programs to reduce this phenomenon because of its serious psychological and social effects on the individual and society. Dalaala and Maghouni (2021) conducted a study to explore electronic traffic through social media and the role of gender and the time spent by the individual on social media in influencing bullying behavior. The study found a statistically significant relationship between the gender variable and the rates of digital bullying, but no statistically significant relationship between the time spent by the individual on social media and the increase in the rate of bullying. The study recommended the importance of
raising the awareness of young people to prevent digital bullying (Dalaala and Maghouni, 2021). Ben Salem (2020) aimed to identify the level of students' awareness of the psychological effects of digital bullying, using a descriptive approach and a questionnaire for 150 undergraduate students. The study reported that digital bullying caused depression, social anxiety, low self-worth, feelings of psychological distress, anger at the aggressor, and serious social problems for the victims. The study suggested proactive strategies to counter bullying. Mohammed (2020) explored the causes and factors that lead to bullying and defined its forms and effects. It was a descriptive study that used a questionnaire for 242 pre-university students. The study revealed that the causes of digital bullying were related to parents not monitoring their children's devices, violent electronic games, violent cartoon films, domestic and community violence, and family dysfunction. The study also indicated that the forms of bullying were multiple: chat rooms, video watching, phone calls, instant messaging, photos, email, impersonation, and exclusion or cyber ostracism. The study also reported that the risks of digital bullying included a sense of fatigue and exhaustion, lack of concentration in studying, feeling upset, and lack of sleep. Zayed (2020) examined the extent to which adolescents were exposed to digital bullying. It was a descriptive study that used a survey methodology and a questionnaire for 300 secondary school students. The study showed that the most common forms of digital bullying that adolescents faced through digital media were the dissemination of personal secrets, opinions, and beliefs; temptation to engage in inappropriate behavior and threats to spread it; threats through digital media; misuse and dissemination of personal photos and videos; sharing an inappropriate video; logging into the personal account and publishing private matters; and receiving inappropriate text messages from strangers.

Al-Baris et al. (2019) investigated the level of digital bullying and exposure to it from the victim's point of view. The study found that digital bullying was one of the most prevalent behaviors in this era, entailing profoundly serious psychological and social problems with negative consequences for the cognitive, social, and emotional development of bullies and victims. Mohammad (2019) conducted a study to identify the reality and forms of digital bullying among students in the secondary stage, using a descriptive approach and a questionnaire for 132 male students and 127 female students. The study revealed that the most prominent forms of bullying were ridicule, defamation, spreading rumors, publishing disturbing images, harassment, insults, repeated abuse, impersonation, identity theft, disclosure of secrets, and electronic stalking. Abu Ali (2018) aimed to identify the social and cultural dimensions of the phenomenon of bullying in secondary schools, using a questionnaire for 250 male secondary school students. The study indicated that the phenomenon of bullying was not limited to males only, but also included females. The study also reported that the reasons for the spread of the phenomenon of bullying were social, economic, cultural, and psychological (Mohammad, 2019).
The literature on digital bullying at various levels shows that most studies focused on the forms, types and causes of digital bullying, while some studies dealt with the repercussions of digital bullying on the victim and the bully. The current study differs from previous studies in addressing the repercussions of digital bullying on the victim's family and on society as a whole. The current study is similar to previous studies in the nature of the methodology used, which is the descriptive approach, and in the tool used to collect data and information, which is the questionnaire. The current study is also similar to some previous studies in applying the questionnaire to a sample of undergraduate students. The current study has benefited from previous studies in forming a comprehensive understanding of digital bullying in general and its repercussions in particular, and in using the results and recommendations of those studies in writing the theoretical framework for the research, determining the methodology, building the study tool, discussing the results, and enriching the study with references, books, studies and scientific journals.

Type of research

The current study is a descriptive and analytical study that aims to identify the repercussions of digital bullying on social media users. Descriptive studies provide information and facts about the reality of the current phenomenon, clarify the relationships between different phenomena, and help predict the future of the phenomenon itself (Pandey, 2014). The study also uses a quantitative approach (the questionnaire) to collect and describe data numerically and present the results, as well as to extract conclusions, generalizations, and new relationships. The study also reviews previous literature and collects and analyzes data to identify the repercussions of digital bullying on social media users, and then performs statistical processing, analysis, and discussion of the results.

Method

The study relies on the social survey methodology, one of the main methods used in descriptive analytical studies, which depends on collecting data on a particular phenomenon and analyzing those data to reach results. The social survey focuses on the study of social problems and phenomena, as it covers all aspects of social life (Al-Tarif, 2019). The study uses the questionnaire to collect data from the sample, as it is one of the most appropriate methods for the nature of this study: it helps describe the phenomenon under study by providing the necessary data, and it enables the researcher to study a small sample of the population and generalize the results to all members of the community concerned with the study (Al-Maaytah, 2011). The sample survey method is one of the most widely used methods in social research because it saves time, effort, and money within the limits of researchers' capabilities, and it yields accurate results (Hassan, 2016).
Study sample

Scientific research in descriptive studies deals with a phenomenon emanating from a large population, and the researcher cannot study that entire population but instead chooses a representative sample of it (El-Beltagi et al., 2012). To determine the representative sample, the Kingdom was divided into five regions: north, south, east, west and center. These regions are treated as strata in the first stage (stratified sampling), and all universities in the same region are treated as clusters (cluster sampling) in the second stage. In the third stage, for each region, a random sample of universities is selected (simple random sampling). In the final stage, a random sample is drawn from each selected cluster, i.e., a simple random sample of students of both sexes at the chosen universities. This ensures a high degree of randomness and representativeness of the selected sample of students across regions and departments (Al-Tarif, 2012); a code sketch of this multi-stage design follows the questionnaire description below. Accordingly, a random sample of male and female students was selected at five universities randomly chosen to represent each region of the Kingdom: Imam Muhammad bin Saud Islamic University, Imam Abdul Rahman bin Faisal University, Umm Al-Qura University, Hail University and Jazan University. The total sample was 640 respondents.

Study tools

The questionnaire was chosen as the main tool for collecting field data, in line with the nature and objectives of the study. It was prepared based on the theoretical framework and previous studies in this field. The questionnaire consists of primary data related to gender and monthly income, plus 42 statements that measure three main axes:
1. Forms of digital bullying among social media users, consisting of 9 statements.
2. Causes of digital bullying from the perspective of social media users, consisting of 4 statements.
3. Repercussions of digital bullying on victims, their families and society, consisting of 29 statements distributed over three areas:
• Repercussions of digital bullying on social media users, consisting of 10 statements.
• Repercussions of digital bullying on the families of victims, consisting of 9 statements.
• Repercussions of digital bullying on society, consisting of 10 statements.
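The multi-stage design described under "Study sample" can be sketched as follows. This is a minimal illustration, assuming a hypothetical sampling frame in which, for brevity, each region already holds the university that was eventually selected; the student counts and the seed are placeholders, not study data.

```python
import random

# Hypothetical frame: each region of the Kingdom is a stratum; each
# university within it is a cluster holding a roll of student IDs.
frame = {
    "center": {"Imam Muhammad bin Saud Islamic University": list(range(1600))},
    "east":   {"Imam Abdul Rahman bin Faisal University": list(range(1500))},
    "west":   {"Umm Al-Qura University": list(range(1400))},
    "north":  {"Hail University": list(range(1000))},
    "south":  {"Jazan University": list(range(1200))},
}

rng = random.Random(42)
per_region = 128                                  # 5 regions x 128 = 640
sample = []
for region, clusters in frame.items():            # stage 1: strata (regions)
    university = rng.choice(sorted(clusters))     # stages 2-3: random cluster
    sample.extend(rng.sample(clusters[university], per_region))  # final: SRS

assert len(sample) == 5 * per_region              # 640 respondents in total
```

Equal allocation of 128 students per region is one simple choice; allocation proportional to regional enrolment would be an equally defensible alternative.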
The responses to the questionnaire statements were on a three-point scale according to the Likert method, with three options (agree, agree to some extent, disagree) scored 3, 2 and 1 respectively. To verify the reliability and validity of the questionnaire, it was applied to a pilot sample of 70 male and female students, and the reliability and validity were calculated as follows.

Reliability of the questionnaire

The reliability of the digital bullying survey statements was assessed by calculating the correlation coefficients between the item scores and the overall score of the axis to which each item belongs. The results showed that these correlation coefficients ranged from 0.53 to 0.91, all statistically significant at the 0.01 level, which indicates the internal consistency and reliability of all the statements of the study questionnaire. The reliability of the axes of the questionnaire was also calculated by two methods: Cronbach's alpha coefficient and the Spearman-Brown split-half method. The total reliability coefficients for the first and second axes and the three areas of the third axis were 0.930, 0.727, 0.907, 0.896 and 0.947 respectively by Cronbach's alpha, and 0.955, 0.947, 0.922, 0.900 and 0.955 respectively by the Spearman-Brown split-half method, all of which were high, indicating the overall reliability of the questionnaire axes.

Validity of the questionnaire

• Apparent validity: the questionnaire was presented to a group of specialized professors in the Department of Sociology and Social Work at Princess Nourah bint Abdulrahman University and Imam Muhammad bin Saud Islamic University to determine the suitability, importance, clarity, and wording of the statements. The questionnaire was revised based on their opinions and finalized.
• Validity of the statements: the validity of the questionnaire statements was calculated as the correlation coefficient between the score of each item and the total score of the axis to which the item belongs, after removing the score of the item from the axis total; the rest of the statements of the axis thus serve as the criterion for the item. These corrected item-total correlations ranged from 0.42 to 0.88, all statistically significant at the 0.01 level, which indicates the validity of all the statements of the questionnaire.
• Validity of the axes: the validity of the questionnaire axes was calculated using the self-validity coefficient for each axis, which equals the square root of the reliability coefficient obtained by Cronbach's alpha. The self-validity coefficients of the questionnaire axes ranged from 0.64 to 0.89, all of which were high, indicating the validity of the axes of the questionnaire.
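The reliability and validity statistics above follow standard formulas and can be reproduced in a few lines. The sketch below is illustrative: the pilot matrix is randomly generated stand-in data, not the study's pilot responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total_var)

def split_half_spearman_brown(items):
    """Odd-even split-half correlation with the Spearman-Brown correction."""
    odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

def corrected_item_total(items, i):
    """Correlation of item i with its axis total after removing item i."""
    rest = items.sum(axis=1) - items[:, i]
    return np.corrcoef(items[:, i], rest)[0, 1]

# Stand-in pilot data: 70 respondents x 9 items on the 1-3 Likert scale.
rng = np.random.default_rng(1)
pilot = rng.integers(1, 4, size=(70, 9)).astype(float)
alpha = cronbach_alpha(pilot)
r_sb = split_half_spearman_brown(pilot)
r_item0 = corrected_item_total(pilot, 0)
self_validity = np.sqrt(alpha)   # square root of alpha, as used for the axes
```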
Statistical methods used to process data

To achieve the objectives of the study and analyze the collected data, appropriate statistical methods were applied using the Statistical Package for the Social Sciences (SPSS). The methods used to establish the reliability and validity of the research tool and to answer the research questions were: Cronbach's alpha coefficient, the Spearman-Brown split-half reliability coefficient, Pearson's correlation coefficient, frequencies and percentages, means, the chi-square test, the independent samples t-test, and one-way analysis of variance (One-Way ANOVA) followed by the Least Significant Difference (LSD) test to determine the direction of statistically significant differences.

Results and discussion

Table 1 shows a slight difference between the numbers of males and females in the sample: the percentage of females is higher, at 56.25%, while males account for 43.75% of the total sample. Regarding the monthly income variable, more than half of the sample had middle income (59.4%), followed by low income (28.1%), while the smallest share had high income (12.50%).

Table 2 shows statistically significant differences (at the 0.01 level) between the frequencies of the respondents' responses, in favor of the response "agree", on all statements of the first axis (forms of digital bullying among social media users). This means that the majority of respondents agreed with all forms of digital bullying among social media users. The means of the statements of the first axis ranged from 2.30 to 2.74, and all fell in the "agree" response range (2.33 to 3) except for one form of bullying, the item "impersonation" (2.30), which was only partially agreed with. The highest mean among the statements of this axis from the perspective of the sample was 2.74 out of 3, for the form of digital bullying "hostile messages that hurt the feelings of the recipient, such as mocking appearance, name-calling, and so on", while the lowest mean was 2.30, for "impersonation". The results of the study are consistent with the findings of Zayed (2020), Ben Dada and Karim (2021), Moirs and Mehrezi (2022), as well as Aboulela (2017) and Mohammad (2020).
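For concreteness, the two workhorse tests reported in the results, the chi-square test on response frequencies and the one-way ANOVA with a follow-up comparison, can be reproduced as below. The counts and group scores are hypothetical stand-ins, and since SciPy ships no LSD helper, a pairwise t-test stands in for the LSD follow-up.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of (agree, agree to some extent, disagree) for one
# item out of 640 respondents; the test checks whether the frequencies
# depart from a uniform split, as reported for the first and second axes.
observed = np.array([480, 110, 50])
chi2, p = stats.chisquare(observed)           # expected defaults to uniform

# Hypothetical attitude scores for the three monthly-income groups,
# compared with one-way ANOVA and a pairwise LSD-style follow-up.
rng = np.random.default_rng(0)
low, middle, high = (rng.normal(m, 0.3, 50) for m in (2.4, 2.7, 2.5))
f_stat, p_anova = stats.f_oneway(low, middle, high)
t_lm, p_lm = stats.ttest_ind(low, middle)     # locates the direction
print(f"chi2={chi2:.1f} (p={p:.4f}); F={f_stat:.2f} (p={p_anova:.4f})")
```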
Table 3 shows the multifaceted factors contributing to acts of violence, encompassing social, psychological, economic, and technological dimensions. These factors include poor social relations, weak family control, exposure to violence in the home or community, mental disorders, jealousy, the desire for attention, a sense of emptiness, income levels, material deprivation, high prices, and the influence of violent media portrayals. Table 3 also shows statistically significant differences (at the 0.01 level) between the frequencies of the respondents' responses, in favor of the response "agree", on all statements of the second axis (causes of digital bullying among social media users). The means of the statements ranged from 2.31 to 2.82, and all fell within the "agree" response range (2.33 to 3) except for one item, "economic reasons", which was only partially agreed with. The highest mean from the perspective of the sample was 2.82 out of 3, for psychological reasons, followed by social reasons and then reasons related to technological development, while the lowest mean for this axis was 2.31, for economic reasons. These results are consistent with Abu Ali (2018), Mohammed (2020), Mahmoud (2021) and Qutb (2022), which dealt with the social motives and causes of bullying.

Supplementary Table S1 shows statistically significant differences (at the 0.01 level) between the frequencies of the sample members' responses, in favor of the response "agree", on all statements of the third axis, covering its three areas (the repercussions of digital bullying on social media users, on the families of victims, and on society). For the first area, the means of the statements ranged from 2.44 to 2.83, all within the "agree" range. The highest mean for the repercussions on social media users was 2.83 out of 3, for "low self-esteem and lack of confidence in oneself and others", while the lowest was 2.44, for "unwillingness to practice hobbies". For the second area, the repercussions of digital bullying on the families of victims, the means ranged from 2.31 to 2.69. The highest mean was 2.69 out of 3, for "confusion and lack of family knowledge of how to deal with the problem", while the lowest was 2.31, for "low family productivity". For the third area, the repercussions of digital bullying on society, the means ranged from 2.54 to 2.80, all within the "agree" range. The highest mean was 2.80 out of 3, for "the emergence of bullying and hostile personalities against society", while the lowest was 2.54, for "the spread of a culture of violence as an acceptable solution to social problems". The results on the repercussions of digital bullying on social media users, victims' families, and society are consistent with the findings of Al-Baris et
al. (2019), Ben Salem (2020), Mahmoud (2021) and Mohammad (2020) on the various social and psychological effects of digital bullying.

To answer the fifth question (do respondents' attitudes towards digital bullying differ according to gender and monthly income?), the independent samples t-test was used to examine gender differences, and One-Way ANOVA followed by the LSD test was used to determine the direction of statistically significant differences. Table 4 shows statistically significant differences (at the 0.05 level) between the means of males and females in the forms of digital bullying among social media users, the causes of digital bullying from the perspective of social media users, the repercussions of digital bullying on social media users, and the repercussions of digital bullying on society, with the test values statistically significant in favor of the female mean in all cases. Female members of the sample of social media users are thus more aware of the forms and causes of digital bullying among social media users, as well as of its repercussions on both social media users and society, compared with males. This result is consistent with Dalaala and Maghouni (2021), which indicated a relationship between gender and digital bullying. The results also showed no statistically significant differences in the repercussions of digital bullying on the families of victims due to the gender variable, as the t value is not statistically significant. This indicates that respondents' attitudes towards the repercussions of digital bullying on the families of victims do not differ according to gender.

The phenomenon of cyberbullying may have a substantial detrimental impact on a person's psychological and mental health. It is important to remember, however, that cyberbullying is not regarded as a direct cause of suicide; suicidal intentions and behavior involve a wider range of factors that extend beyond cyberbullying. The study reveals statistically significant gender variations in respondents' understanding of the many forms, underlying causes, and repercussions of cyberbullying, with female participants showing a higher level of awareness of these issues than their male counterparts. Furthermore, respondents with middle income levels were more informed about the causes and effects of cyberbullying than those with low and high income levels. However, the respondents' monthly income had no discernible effect on their attitudes towards the forms of digital bullying. This study thus sheds light on gender and income disparities in awareness levels, which contributes considerably to our knowledge of digital bullying in the Saudi Arabian context.

Conclusion

The study aimed to identify the repercussions of digital bullying on social media users and used a questionnaire administered to 640 respondents from students in the Saudi universities specified in the study. The study reached the following findings:
1. Regarding the forms of digital bullying, the study found that the majority of respondents agreed with all forms of digital bullying among social media users. The highest percentage was for the item "hostile messages that hurt the feelings of the recipient, such as mockery of appearance, insult, etc.", while the lowest mean was for impersonation.
2.
Regarding the causes of digital bullying, the results found that the majority of respondents agreed on all causes of digital bullying among social media users. The highest percentage was for psychological reasons, followed by social reasons, then reasons related to technological development, and in last place economic reasons.
3. As for the repercussions of digital bullying, the results showed that the majority of respondents agreed on all the repercussions of digital bullying on social media users, victims' families, and society. With regard to the repercussions on social media users, the highest percentage was for the item "low self-esteem and lack of confidence in oneself and others", while the lowest percentage was for the item "not wanting to practice hobbies". With regard to the repercussions on the families of victims, the highest percentage was for "family confusion and lack of knowledge of how to deal with the problem", and the lowest percentage was for the item "low family productivity". For the repercussions on society, the highest percentage was for the item "the emergence of bullying and hostile figures against society", while the lowest percentage was for the item "the spread of a culture of violence as an acceptable solution to social problems".
4. The results also showed that female social media users were more aware of the forms and causes of digital bullying among social media users, as well as of the repercussions of digital bullying on both social media users and society, compared with males. However, there was no gender difference in awareness of the repercussions of digital bullying on the families of victims.
5. The results found that middle-income social media users were more aware of the causes and repercussions of digital bullying among social media users, as well as of the repercussions of digital bullying on victims' families and society, compared with both low- and high-income earners. However, there was no difference in attitudes towards the forms of digital bullying among social media users according to monthly income.

Based on its findings, the current study offers a number of recommendations to reduce the repercussions of digital bullying:
• Awareness campaigns: implement broad public awareness initiatives aimed at reaching a varied population of social media users, regardless of gender. These programmes must prioritize educating users about the many signs of cyberbullying, its underlying causes, and its possible consequences. By raising their awareness, people can increase their ability to detect and respond to incidents of cyberbullying.
• School-based programs: to reduce cyberbullying, educational institutions such as schools and universities should adopt instructional activities. The primary purpose of these activities should be to equip students with the knowledge and skills they need to spot and report incidents of cyberbullying. Furthermore, it is critical to emphasize the need to maintain a secure digital environment for everyone who uses online platforms.
• Psychological support: people who have been victims of cyberbullying should be offered counselling and psychological help. Such programmes can assist individuals in coping with the emotional and psychological consequences of cyberbullying, such as diminished self-esteem and confidence.
• Parental involvement: encourage parents to become involved in their children's online activity. To adequately assist their children if they become victims of cyberbullying, parents must be thoroughly informed about the indicators of cyberbullying and must acquire the knowledge and skills required to respond. Promoting effective communication between parents and children is critical.
• Social media platforms: collaborate with social media platforms to improve their reporting and moderation functions. It is vital that platforms respond swiftly by taking appropriate action against abusers and that users have simple access to cyberbullying reporting tools. Stricter regulations aimed at combating cyberbullying can effectively deter future perpetrators.
• Research and monitoring: given the continuing changes in the online world, it is vital to pursue further study of the dynamic nature of cyberbullying. Anti-bullying activities must be reviewed and evaluated regularly to guarantee their continued relevance and efficacy.
• Community support: to help victims and their families, it is essential to establish support networks throughout local communities. These networks can provide advice, emotional support, and access to resources that help people deal with the challenges brought on by cyberbullying.
• Curriculum integration: include training modules on online safety, good digital manners, and responsible internet usage in the overall curriculum framework. Teaching these skills at a young age helps children acquire a more complete awareness of safe online behavior.
• Legal measures: implement legislative regulations that promote accountability for persons who engage in extreme incidents of online bullying. Legislation might involve the development of legal frameworks that
designate cyberbullying as an actionable offence, punishable by warnings, fines, or other legal ramifications.
• Longitudinal research: encourage long-term research to analyse the long-term impacts of cyberbullying on individuals and society. A full grasp of the long-term repercussions of cyberbullying can assist in developing more effective prevention and intervention techniques.

TABLE 1 Distribution of the study sample by gender and monthly income variables.
TABLE 2 Chi-square test results examining the differences between the frequencies of the respondents' responses to the statements of the first axis (forms of digital bullying among social media users). *Statistically significant at the 0.01 level.
TABLE 3 Chi-square test results examining the differences between the frequencies of respondents' responses to the statements of the second axis (causes of digital bullying among social media users).
TABLE 4 Results of the independent samples t-test examining the differences in respondents' attitudes towards digital bullying according to gender. *Statistically significant at the 0.05 level; **statistically significant at the 0.01 level.
2023-11-24T16:13:19.135Z
2023-11-21T00:00:00.000
{ "year": 2023, "sha1": "b67a36903d549e2dd47078cb2b950569b9dcffc9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1280757/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "17cddee3c57881074c308c2efc48e16a313ba92a", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
220436630
pes2o/s2orc
v3-fos-license
Computer Network Assisted Test of Spoken English

With the development of computer network technology, the means of foreign language teaching have changed. The computer-aided spoken English test is a new test method with great advantages over the traditional oral test. To further understand the superiority of the computer-aided spoken English test, this study took sophomores of the Foreign Language Department in Henan University of Chinese Medicine, China, as subjects and carried out a traditional interview-type spoken English test and a computer-assisted spoken English test. A scoring system based on the Hidden Markov Model (HMM) was used, and the two tests were then carried out. The performance in the two tests was compared, and the attitudes of the participants towards the computer-assisted spoken English test were analyzed by questionnaire. The results showed that the computer-aided spoken English test could better reflect the true level of the students, and the teachers and students clearly stated that the computer-aided spoken English test could relieve tension and reduce the burden on teachers. The research verified the feasibility of the computer-assisted spoken English test, providing a reference for its promotion.

INTRODUCTION

With the reform of English teaching in universities, schools have paid more attention to the training of students' English speaking ability. The traditional spoken English test was carried out in the form of an interview, one-to-one, two-to-one, or two-to-three, which places high demands on teacher resources. The test also features high cost, low efficiency, and strong subjectivity in evaluation, and the face-to-face interview easily makes students nervous and anxious. Lim et al. [7] found that students experienced a moderate level of anxiety during the spoken English test and that this anxiety had a large influence on their performance. With the development of computer network technology, Computer Assisted Language Testing (CALT) has attracted the attention of scholars in China. The computer-aided spoken English test is an indirect test carried out by means of human-computer dialogue. A large number of students can complete the test at the same time, and only a small number of teachers are required to invigilate and assist in the examination hall, which greatly saves teachers' resources, time and effort, and has high efficiency. It has been extensively applied in language teaching. Through experiments on Turkish children, Buckingham et al. [2] found that using the computer as a mediator could help children practice spoken English and improve their spoken English ability and willingness to communicate when parents lack sufficient English proficiency to support them in completing their English tasks. Based on CALT theory, Guo et al. [3] investigated and analyzed the Chinese learning of a Catholic university in Brazil, compared CALT with traditional language teaching, and identified the advantages of computer-aided language teaching. Van Han et al. [4] studied the impact of computer-aided language learning on the listening module of the Test of English for International Communication (TOEIC) and found that computer-assisted language learning had a significant role in improving students' listening performance. Fu et al.
[5] made a comparison between the computer-assisted spoken English test and the traditional spoken English test and revealed the obvious advantages of the computer-assisted test. By comparing the teaching effects of computer-aided teaching and the traditional teaching method on students from the seventh grade, Kaplan et al. [6] found that the teaching effect in the computer-aided teaching group was obviously better than that in the traditional one. Soleimani et al. [7] considered that computer-assisted instruction was beneficial to the oral expression of learners, based on a comparison of learners who did or did not receive computer-assisted teaching. Sajedi [8] also confirmed the potential of computer-assisted language teaching to enhance learners' ability through comparative experiments. In this study, a computer-aided spoken English test system was designed. The research subjects completed both the computer-aided spoken English test and the traditional spoken English test, the test results were compared, and the teachers and students participating in the computer-aided spoken English test were surveyed via questionnaire to understand their attitudes towards the tests.

COMPUTER-BASED SPOKEN ENGLISH TEST SCORING SYSTEM

The system first processed speech via the Sphinx-4 speech recognizer [9], then took the average pronunciation level as the new evaluation standard, and finally scored the speech taking the standard, the phonemes, and the average pronunciation level into account. The flow of speech recognition is shown in Figure 1.

Figure 1 The flow of the speech recognition module.

In the autonomous scoring module, the features of the speech were extracted and force-aligned with the feature sequence of the standard speech to obtain the state sequence with the maximum probability value. The log posterior probability based on the Hidden Markov Model (HMM) was then used to identify incorrect speech and detect students' mispronunciations. The scoring process is a recognition process based on the HMM. After feature extraction, the output observation sequence of the speech to be scored was set as $O = (o_1, o_2, \ldots, o_T)$. The standard reference HMM was expressed by $\Lambda = (A, B, \pi)$ ($\pi$: initial state distribution; $A$: state transition probabilities; $B$: output probability set). In the model there are many hidden state sequences $S = (s_1, s_2, \ldots, s_T)$. Speech assessment is the process of obtaining the probability $P(O \mid \Lambda)$ of the input speech observation sequence $O$ when $\Lambda$ is known. Segmentation alignment was performed using the Viterbi algorithm to obtain the hidden state sequence $S$ corresponding to the observation sequence $O$. The HMM was then trained repeatedly and its parameters updated, after which the optimal probability $P(O \mid \Lambda)$ of the model was output for the posterior probability score. The phoneme posterior probability of the HMM is

$$P(q_i \mid O_i) = \frac{p(O_i \mid q_i)\, p(q_i)}{\sum_{j=1}^{M} p(O_i \mid q_j)\, p(q_j)},$$

where $M$ stands for the number of phonemes in all the texts in the reference model. When the likelihood $p(O_i \mid q_i)$ is known, the posterior probability can be obtained once the prior probability of phoneme $q_i$, i.e. $p(q_i)$, is known. Here $O_i$ denotes the observation segment running from the start time of phoneme $q_i$, and $t_i$ refers to the duration of the pronunciation of phoneme $q_i$, by which the log posterior probability score of each phoneme is normalized. The conversion from the log posterior probability score to a hundred-mark score is

$$\mathrm{Score} = \mu + \lambda \cdot \frac{1}{N} \sum_{i=1}^{N} \log P_{q_i},$$

where $\mu$ and $\lambda$ are obtained after training, $P_{q_i}$ stands for the posterior probability of the $i$-th phoneme, and $N$ is the number of phonemes in the utterance. The hundred-mark score can be obtained by evaluating this formula.
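The scoring pipeline can be condensed into a short sketch. It is illustrative only: it assumes the per-segment log-likelihoods have already been produced by Viterbi forced alignment against the acoustic models, and the array names and the values of mu and lam are placeholders rather than the system's trained parameters.

```python
import numpy as np
from scipy.special import logsumexp

def hundred_mark_score(seg_loglik, log_prior, targets, durations, mu, lam):
    """Duration-normalized log-posterior scoring of an aligned utterance.

    seg_loglik : (N, M) log p(O_i | q_j) of each of the N aligned phoneme
                 segments under all M reference phoneme HMMs.
    log_prior  : (M,) log phoneme priors log p(q_j).
    targets    : (N,) index of the phoneme each segment was aligned to.
    durations  : (N,) segment lengths t_i in frames.
    mu, lam    : score-mapping parameters fitted against expert marks.
    """
    joint = seg_loglik + log_prior                       # log p(O_i|q_j)p(q_j)
    log_post = joint - logsumexp(joint, axis=1, keepdims=True)
    target_lp = log_post[np.arange(len(targets)), targets]
    rho = target_lp / durations                          # normalize by t_i
    return float(np.clip(mu + lam * rho.mean(), 0.0, 100.0))

# Toy usage with random stand-in likelihoods:
rng = np.random.default_rng(0)
N, M = 6, 40
score = hundred_mark_score(rng.normal(-50, 5, (N, M)), np.full(M, -np.log(M)),
                           rng.integers(0, M, N), rng.integers(5, 20, N),
                           mu=100.0, lam=8.0)
```

Clipping to [0, 100] simply keeps the linear mapping inside the hundred-mark range for poorly matched input.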
The parameters $\mu$ and $\lambda$ are obtained by training against different expert grading standards, to realize scoring under different levels of difficulty. The final scoring results are fed back to the examinee.

Research Subjects

The research subjects were 720 sophomores from the Foreign Language Department in Henan University of Chinese Medicine. A 20-min computer-assisted spoken English test was added to the normal spoken English class so that students could adapt to the form of the computer-assisted spoken English test, eliminating the influence of negative emotions on test performance at first contact.

Research Procedures

After one month of adaptation, the computer-assisted spoken English test and the traditional interview-type spoken English test were carried out. The performances of the students in the two tests were collected and compared. The students involved were then surveyed via questionnaire to understand their attitudes towards the computer-assisted spoken English test. The questionnaires were collected and analyzed.

Test content

The content of the two tests was the same, including recitation and impromptu speaking. The recitation content was from a TEM4 spoken English exam.

Whenever Mr. Smith goes to Westgate, he stays at the Grand Hotel. In spite of its name, it is really not very "grand," but it is cheap, clean, and comfortable. Since he knows the manager well, he never has to go to the trouble of reserving a room. The fact is that he always gets the same room. It is situated at the far end of the building and overlooks a beautiful bay. On his last visit, Mr. Smith was told that he could have his usual room, but the manager added apologetically that it might be a little noisy. So great was the demand for rooms, the manager said, that the hotel had decided to build a new wing. Mr. Smith said he did not mind. It amused him to think that the dear old Grand Hotel was making an effort to live up to its name. During the first day Mr. Smith hardly noticed the noise at all. The room was a little dusty, but that was natural. The following afternoon, he borrowed a book from the hotel library and went upstairs to read. No sooner had he sat down than he heard someone hammering loudly at the wall. At first he paid no attention, but after a while he began to feel very uncomfortable. His clothes were slowly being covered with fine white powder. Soon there was so much dust in the room that he began to cough. The hammering was now louder than ever and bits of plaster were coming away from the walls. It looked as though the whole building was going to fall. Mr. Smith went immediately to complain to the manager. They both returned to the room, but everything was very quiet. As they stood there looking at each other, Mr. Smith felt rather embarrassed for having dragged the manager all the way up the stairs for nothing. All of a sudden, the hammering began again and a large brick landed on the floor. Looking up, they saw a sharp metal tool had forced its way through the wall, making a very large hole right above the bed!

Impromptu speaking included three subjects, with the subject determined by random selection:
a. Describe an embarrassing situation in which you got very angry.
b. Tell a story that illustrates the need for love.
c. Describe one of the most unpleasant dreams you've ever had.

Test Results

(1) Comparison of students' performance

The performance of the students in the two tests is shown in Table 1. The full mark was 100 points.
It could be seen from Tables 1 and 2 that most of the scores the students obtained in the two tests were concentrated between 60 and 85 points, i.e., the qualified and good levels, indicating that the scores in the two tests were similar. But the highest score in the computer-assisted spoken English test was higher than that in the interview-type spoken English test, and the lowest score and the average score in the computer-assisted spoken English test were lower than those in the interview-type spoken English test, suggesting that students with excellent spoken English ability were more likely to get a good score, and poor students a poor score, in the computer-assisted spoken English test. This might be because the score in the interview-type spoken English test was prone to be affected by subjective factors: teachers scored the performance of students based on general impression and avoided giving too high or too low a score. The score in the computer-assisted spoken English test was given directly by the scoring system, which ensured fairness and reliability.

(2) Questionnaire results

The questionnaire included five items: I like the computer-assisted spoken English test; the computer-assisted spoken English test can reflect my real spoken English level; the computer-assisted spoken English test can relieve my anxiety; the pressure of the computer-assisted spoken English test is smaller than that of the interview-type spoken English test; and the scoring in the computer-assisted spoken English test is fairer. There were three options for each item: agree, maybe, and disagree. In total, 720 copies of the questionnaire were distributed, and 715 valid copies were returned. The investigation results are shown in Figure 2.

Figure 2 The investigation results.

The attitudes of the students participating in the test towards the computer-aided spoken English test could be understood through the questionnaire. 73% of the students showed a positive attitude towards the computer-assisted spoken English test, and only 6% did not like its form, indicating that student acceptance of the computer-assisted spoken English test was favorable. 61% of the students thought that the computer-assisted spoken English test could reflect true spoken English proficiency, but at the same time 23% were not sure and 16% disagreed. On this question, students' opinions differed: the human-computer dialogue format made some students think that the computer-assisted spoken English test could not show the interaction of a spoken English test. 52% of the students thought that the computer-assisted spoken English test could relieve tension, but 39% were not sure and 9% disagreed. Many students still felt nervous in the computer-assisted spoken English test, but compared with the interview-type spoken English test, 84% of the students thought that the computer-assisted spoken English test was less stressful and avoided face-to-face interaction. In terms of grading, 78% of the students thought that the score of the computer-assisted spoken English test was fairer and closer to their actual level.

DISCUSSION

As computer technology gradually becomes a teaching means, computer-assisted language teaching and testing have been extensively applied [10].
Students who receive computer-assisted teaching perform better in individual learning and cooperative learning than those who receive traditional teaching [11]. Computer-assisted teaching has a positive influence on disabled students, including students with autistic-spectrum disorder [12], and can help students who have learning disorders learn better [13]. Language testing is an important part of language teaching, and the computer is helpful in language testing [14,15]. With the constant development of computer technology, the computer has become more and more reliable and practical in language testing [16]. Comparing the computer-assisted spoken English test with the traditional spoken English test, it is not difficult to find that the computer-assisted spoken English test has great application prospects. First of all, from the perspective of students' scores, both sets of results were mainly concentrated in the qualified and good stages, but in the computer-aided spoken English test the high and low scores were more pronounced. The number of students at the excellent and qualified levels was slightly higher than in the traditional interview-type spoken English test, which showed that the computer-aided spoken English test could make a more accurate evaluation of students' abilities. According to the results of the questionnaire survey, most of the students liked the computer-based spoken test; in particular, on the fourth and fifth questions, 84% of the students thought that the pressure of the computer-based spoken English test was lower and 78% thought that its scoring was fairer, which showed that the students had a high acceptance of the computer-assisted spoken English test. Further promotion of the computer-assisted spoken English test in professional spoken English testing is practicable. For the students, the computer-aided spoken English test greatly relieved their nervousness and stimulated their interest and motivation in learning spoken English. In addition, through the computer-assisted spoken English test, the students also realized that computer examination made them pay more attention to the training of language expression in daily learning and to increasing their vocabulary reserve, so as to avoid being lost for words in the spoken English test. The students also preferred the fairer scoring of the computer-aided spoken English test and considered that it could help them understand their own spoken English proficiency better. However, the computer-aided spoken English test makes it difficult to achieve real, natural communication and to reflect communicative interaction. Computer-assisted spoken English testing places some requirements on the computer skills of students, and a high level of computer familiarity can improve student performance [17]. Moreover, in the process of testing, students cannot ask for repetition if they did not hear a question clearly, and cannot be prompted if they have no idea about a question; this increases the difficulty of the test to a certain extent. Being lost for words in front of a computer may induce resistance in students, and a failure of the test software or computer will have a great impact on students' emotions.

CONCLUSION

A computer-aided spoken English test system was designed in this study, and the computer-aided spoken English test and an interview-type spoken English test were administered to students.
The computer-aided spoken English test was more deeply understood through the comparison of the students' performance in the two tests and the questionnaire survey. The results demonstrated that the computer-aided spoken English test could relieve the tension of students, improve fairness, and reduce the burden on teachers. The computer-aided spoken English test is a new testing method with many advantages and some shortcomings; therefore, more studies and exploration are needed. In a word, the computer-aided spoken English test has broad development prospects.
2020-07-10T13:07:41.378Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "bb6c072bed3e8946131eb500b67216d314b388a2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.32604/csse.2019.34.319", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "5a6f7c57cd0258c79ea3a3f1baba1533bbbc4b8e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
13338229
pes2o/s2orc
v3-fos-license
Australia's Oldest Marsupial Fossils and their Biogeographical Implications

Background

We describe new cranial and post-cranial marsupial fossils from the early Eocene Tingamarra Local Fauna in Australia and refer them to Djarthia murgonensis, which was previously known only from fragmentary dental remains.

Methodology/Principal Findings

The new material indicates that Djarthia is a member of Australidelphia, a pan-Gondwanan clade comprising all extant Australian marsupials together with the South American microbiotheres. Djarthia is therefore the oldest known crown-group marsupial anywhere in the world that is represented by dental, cranial and post-cranial remains, and the oldest known Australian marsupial by 30 million years. It is also the most plesiomorphic known australidelphian, and phylogenetic analyses place it outside all other Australian marsupials.

Conclusions/Significance

As the most plesiomorphic and oldest unequivocal australidelphian, Djarthia may approximate the ancestral morphotype of the Australian marsupial radiation, and suggests that the South American microbiotheres may be the result of back-dispersal from eastern Gondwana, which is the reverse of prevailing hypotheses.

Introduction

Australia's marsupials are the most iconic members of the continent's fauna, but much remains unknown about their origins and early evolution. Current evidence suggests that the extant Australian marsupial orders evolved from an ancestor or ancestors that dispersed from South America, via Antarctica, sometime during the Late Cretaceous or early Palaeogene [1,2], and that the orders diverged prior to the late Oligocene [2]. Recent phylogenetic analyses strongly support the monophyly of Australidelphia [3-10], a pan-Gondwanan clade that includes all modern Australian marsupial orders as well as the South American microbiotheres (represented today by a single genus, Dromiciops). However, uncertain relationships within Australidelphia (notably, the position of the only extant South American australidelphian, Dromiciops [3-10]) and doubts about the affinities of possible fossil australidelphians from South America [5,11] mean that both the number and the direction of marsupial dispersals between South America and Australia are unclear.
The only pre-Oligocene Australian metatherians (marsupials and their stem-relatives) currently known are from a single site, the early Eocene Tingamarra fauna in southeastern Queensland [12-14]. The fossiliferous deposits at Tingamarra are green authigenic illite-smectite clays that appear to have formed in a shallow, low-energy aquatic environment [12,15]. K-Ar dating of the illite gives a minimal age of 54.6 ± 0.05 MYA (= earliest Eocene) for the site [12]. Geological evidence and biocorrelative data from madtsoiid snakes [16], 'graculavid' birds [17] and an 'archaeonycteroid' bat [18] support the radiometric date (Text S1). Two Tingamarran metatherians have been described based on isolated teeth and mandibular fragments: the bunodont Thylacotinga bartholomaii [14] and the dilambdodont Djarthia murgonensis [13]. Neither can be confidently referred to a specific metatherian clade based on their preserved dental characters alone, so their relationship to the modern Australasian marsupial radiation and to the marsupial crown-group as a whole is unclear. Here we describe isolated petrosal and tarsal bones from Tingamarra that we refer to Djarthia on the basis of relative size, comparative morphology and abundance. This new material clarifies the phylogenetic relationships of Djarthia and provides significant new evidence regarding key aspects of Gondwanan marsupial evolution and biogeography.

Tingamarran Metatherian Petrosals

Mammalian petrosals (which house the cochlea and semicircular canals) are highly complex bones that are commonly preserved in fossil deposits, and studies have identified numerous phylogenetically informative petrosal characters [6,7,19-21]. Seven isolated metatherian petrosals, representing a single morphotype, have been recovered from Tingamarra (Figure 1). They can be referred to Metatheria because they exhibit: cochlear coiling of >360 degrees (a therian synapomorphy [20]); presence in some specimens of a prootic canal (a mammalian plesiomorphy lost in all known eutherians except Prokennalestes [20] and 'zhelestids' [21]); and absence of foramina or sulci for the internal carotid or stapedial arteries (loss of these being a synapomorphy of Metatheria [20,22]). The hiatus fallopii (the exit for the greater petrosal nerve) opens dorsally, as in the Palaeogene North American metatherian Herpetotherium [8], Palaeocene South American metatherians [6,7], and extant caenolestids [19] and marmosine didelphids [19], but unlike most australidelphians [23]; this feature may be plesiomorphic for crown-group Marsupialia. The rostral tympanic process of the petrosal is better developed than in deltatheroidans [22], most Late Cretaceous North American metatherians [19] and Andinodelphys, Mayulestes and Pucadelphys from the Middle Palaeocene of Bolivia [7], but resembles the condition in some Late Palaeocene metatherians from Brazil [6] and extant South American didelphids [19,23] (with some exceptions, such as caluromyines [24]) and caenolestids [19,23]; this feature may be a synapomorphy of crown-group Marsupialia. Within Australidelphia, Dromiciops (and the early Miocene microbiothere Microbiotherium tehuelchum [25]), dasyurids, diprotodontians and some peramelemorphians show considerable elaboration of either or both of the rostral and caudal tympanic processes of the petrosal; the relatively simple structure of both of these processes in the Tingamarran petrosals is probably plesiomorphic within Australidelphia. A complete stylomastoid foramen within the caudal tympanic process for the exit of
the facial nerve (apomorphically present in dasyurids and macropodoids) is absent. A small, horizontal prootic canal, which transmits the lateral head vein [19], is present in three of the petrosals but absent in two others (Figure 1). The prootic canal is a mammalian plesiomorphy [20] retained by most stem-metatherians [19] but lost in some crown-group marsupials [23], including most australidelphians [23].

Tingamarran Metatherian Tarsals

Isolated tarsals are amongst the most commonly preserved mammalian post-cranial elements in fossil deposits, and the morphology of the tarsus, particularly the calcaneus and astragalus, has played a key role in our current understanding of metatherian phylogeny [5,9]. Three isolated metatherian calcanea representing a single morphotype are known from Tingamarra (Figure 2C), as is a single metatherian astragalus that closely matches the calcanea in size and in the morphology of the conarticular joint surfaces (Figure 2D). Collectively, the Tingamarran specimens are clearly australidelphian because the ectal and sustentacular facets are fused on both the calcanea and the astragalus, forming the diagnostic australidelphian 'continuous lower ankle joint' [5], and the calcaneocuboid facet of the calcanea is subdivided into three distinct facets (another synapomorphy of Australidelphia [5]; Figure 2C-F). These specimens are the oldest known that exhibit this distinctive morphology. Features that are probably plesiomorphic within Australidelphia include the gently rounded upper ankle joint surface of the astragalus (indicating that the upper ankle joint was extremely mobile, which suggests arboreality), a broad fibular facet of the astragalus (a possible apomorphy linking didelphids and australidelphians [5,9]), a large astragalar medial plantar tuberosity that wraps under the sustentacular facet (absent in most australidelphians), and a large peroneal process of the calcaneus (greatly reduced in all other known australidelphians) [5,9].

Referral of Tingamarran Metatherian Petrosals and Tarsals to Djarthia murgonensis

We refer the petrosals and tarsals described here to Djarthia murgonensis because: 1) Djarthia is by far the most common dental taxon from Tingamarra, comprising ~25% of all mammalian teeth from the site; 2) all metatherian petrosals and tarsals so far identified from Tingamarra each comprise a single morphotype; 3) regression analyses indicate that the sizes of these petrosals and tarsals correspond closely to those predicted for Djarthia based on dental measurements (Table S1, S2, Figure S1, S2).
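The size-prediction step behind criterion 3 can be illustrated with a minimal least-squares sketch: regress petrosal length on M2 mesiodistal length across a reference set of marsupials, then see where a Djarthia-sized M2 falls. The measurement values below are placeholders, not the Table S1 data.

```python
import numpy as np

def fit_and_predict(x, y, x_new):
    """Least-squares line of best fit with R^2, then a prediction at x_new."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2, slope * x_new + intercept

# Placeholder measurements (mm) standing in for the Table S1 reference taxa:
m2_len = np.array([2.1, 2.8, 3.5, 4.2, 5.0])      # M2 mesiodistal length
petro_len = np.array([5.3, 6.6, 8.1, 9.4, 10.9])  # maximum petrosal length
slope, intercept, r2, pred = fit_and_predict(m2_len, petro_len, x_new=2.4)
print(f"R^2 = {r2:.2f}; predicted petrosal length: {pred:.1f} mm")
```

The same fit-and-predict check applies to the lower ankle joint width of the tarsals against M2 length (Table S2).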
Phylogenetic Analysis and Molecular Divergence Dates

Parsimony analysis of a 242-character morphological matrix [8] (Figure 3A, Text S2, S3) and partitioned Bayesian analysis of this matrix in combination with 20.1 kb of sequence data [10] (Figure 3B) confirm that Djarthia is a member of Australidelphia, but both analyses place Djarthia outside a clade comprising the extant Australasian marsupials (Figure 3A-B). Djarthia is therefore the oldest known Australian crown-group marsupial by some 30 million years (over twice as old as the next oldest from Australasia [2]) and one of the oldest anywhere in the world. Divergence dates calculated using a Bayesian 'relaxed molecular clock' method [26] (Figure S3, Table S3) indicate that Australidelphia originated 65.0-75.1 MYA (95% CI = 59.2-84.3 MYA) and that the extant australidelphian orders diverged from each other 56.9-65.5 MYA (95% CI = 51.1-71.8 MYA). These dates are compatible with Djarthia being either a stem- or an early crown-australidelphian. Because Djarthia appears to be more plesiomorphic than any other known australidelphian, it may approximate the ancestral morphotype of the Australasian marsupial radiation or of Australidelphia as a whole. The dentition of Djarthia indicates a generalised insectivorous diet [13], whilst the tarsal remains suggest scansorial or arboreal habits.

Biogeographical Implications

Khasia cordillerensis [27] from early or middle Palaeocene (59.2-64.5 MYA) deposits at Tiupampa in Bolivia and Mirandatherium alipoi [28] from late Palaeocene (58.7-59.2 MYA) deposits at Itaborai in Brazil, both known only from dental specimens, have been referred to Microbiotheria, rendering them the oldest described australidelphians (although Djarthia is dentally more plesiomorphic than both of these taxa). However, these referrals have been questioned [5,11] because they are based solely on characters of the dentition that are known to be highly homoplastic. Furthermore, no australidelphian-type tarsals have been found at either Tiupampa or Itaborai (even though tarsals of at least 13 metatherian taxa are known from the latter site [5]), and phylogenetic analyses of five different metatherian petrosal morphotypes from Itaborai indicate that none are referable to Australidelphia [6]. The oldest unequivocal South American australidelphian is Microbiotherium tehuelchum, which is from the early Miocene (16.2-16.6 MYA) Santa Cruz fauna of Argentina (roughly 40 million years younger than Tingamarra) and which exhibits distinctive microbiothere autapomorphies of the auditory region [25]. Possible non-australidelphian crown-group marsupials older than Djarthia include Carolopaulocoutoia (a possible paucituberculate [29]), isolated petrosals [6] and didelphid-like tarsals [5], all from Itaborai (which is approximately four million years older than Tingamarra). However, the affinities of these highly fragmentary taxa have yet to be investigated in the context of a broad-scale phylogenetic analysis that combines morphological and molecular sequence data. Djarthia is therefore the oldest crown-group marsupial known from dental, cranial and postcranial remains, and the oldest with confidently resolved phylogenetic relationships; as such, it represents a robust calibration point for molecular dating analyses. The extremely plesiomorphic australidelphian morphology of Djarthia and the apparent absence of undoubted australidelphians from early Palaeogene deposits in South America raise the possibility that Australidelphia originated in Australia or elsewhere in eastern
Gondwana, perhaps from a Djarthia-like ancestor. If so, Australidelphia did not originate in South America (as has usually been assumed [5,9]) and the South American microbiotheres are the result of a later back-dispersal from eastern Gondwana. However, the early Palaeogene record of metatherians in South America is still relatively poorly known, particularly in the south of the continent; it is feasible that undoubted early Palaeogene South American australidelphians await discovery. Possible microbiotheres have been described from the Middle Eocene La Meseta Formation of Seymour Island, but these taxa are known solely from isolated teeth and are approximately ten million years younger (and dentally more derived) than Djarthia [30].

Collection of fossils

All the fossil specimens described here were obtained by screenwashing of clay samples from the Tingamarra Local Fauna and subsequent microscope-assisted sorting of the concentrate.

Justification for referral of the isolated Tingamarran metatherian petrosals and tarsals to Djarthia murgonensis

We base our referral of the isolated Tingamarran metatherian petrosals and tarsals to the Tingamarran metatherian Djarthia murgonensis (previously known only from dental specimens [13]) on: 1) relative abundance; 2) comparative morphology; 3) comparative size (using regression analyses).

D. murgonensis is by far the most common mammalian dental taxon at Tingamarra, comprising approximately 25% of all dental specimens. The second most common dental taxon is the bunodont metatherian Thylacotinga bartholomaii [14], which is considerably less common than D. murgonensis and is also approximately three times larger in linear dimensions (measurements taken from [14] and [13]), and is therefore far too large for the Tingamarran petrosals and tarsals described here (see Table S1, S2, Figure S1, S2). Other Tingamarran metatherians currently represented by dental specimens are far less common than D. murgonensis and T. bartholomaii. Thus, the Tingamarran petrosals and tarsals are likely to belong to D. murgonensis on the basis of relative abundance.
Collectively, the Tingamarran metatherian petrosals represent a single morphotype with minor variations in morphology, and are very similar in size (Figure 1). They can be referred to Metatheria based on cochlear coiling of ≥360° (a therian synapomorphy [20,21]) and the absence of sulci or foramina on the petrosal for the internal carotid or stapedial arteries (loss of these is a metatherian synapomorphy [22]). The petrosals most likely represent a plesiomorphic crown-group marsupial because of: 1) the absence of a groove on the anterior pole of the promontorium for the internal carotid artery (present in the South American stem-metatherians Pucadelphys, Andinodelphys and Mayulestes from the early or middle Palaeocene of Tiupampa in Bolivia and in some isolated petrosals from the late Palaeocene of Itaborai in Brazil [6,7,31]); 2) the presence of a well-developed rostral tympanic process of the petrosal (absent in Pucadelphys, Andinodelphys and Mayulestes) which is nevertheless not greatly enlarged as it is in some didelphids and many australidelphians; 3) no evidence of a complete stylomastoid foramen within the caudal tympanic process of the petrosal (this foramen is a derived feature of dasyurids and macropodoids); 4) loss in some specimens of the prootic canal (absence of this canal is common in crown-group marsupials, but apparently also occurred independently in borhyaenoids [23,32]). Variation in the presence or absence of the prootic canal could potentially indicate that the petrosals described here represent more than one taxon. However, within marsupials polymorphism of this character has been reported at the family level (caenolestids, didelphids, peramelids, peroryctids, dasyurids and phalangerids), genus level (the dasyurid Dasyurus) and species level (the didelphid Philander opossum) [23,33]. The prootic canal has been described as present in Dasyurus viverrinus [33], but a D. viverrinus specimen from the University of New South Wales (AR6521) lacks an obvious prootic canal, indicating that this character is polymorphic within at least one australidelphian species. Although the prootic canal is apparently absent in adults of Dromiciops gliroides, it has been found in a late juvenile of this species, suggesting that this feature may be lost relatively late in ontogeny [23]. The polymorphism seen in the Tingamarran petrosals may reflect ontogenetic differences, or an intermediate stage in the loss or gain of the prootic canal. The Tingamarran petrosals cannot be unequivocally referred to Australidelphia because possible australidelphian synapomorphies of the petrosal [6,31] show considerable polymorphism when a wider diversity of australidelphian taxa is considered [23].

Similarly to the petrosals, the Tingamarran metatherian tarsals represent a single morphotype with minor variations in morphology, and are also very similar in size (Table S2). They can be referred to Australidelphia because they exhibit fusion of the ectal and sustentacular facets, forming the australidelphian 'continuous lower ankle joint pattern', and subdivision of the calcaneocuboid facet into three distinct facets [5,34]. The Tingamarran tarsals appear to be from a very plesiomorphic australidelphian because of the presence of a large peroneal process of the calcaneus (this process is reduced in all other known australidelphians [5]).
Based on its preserved dental features, D. murgonensis is probably a plesiomorphic member of the marsupial crown-group [13], although it could not be confidently assigned to either 'Ameridelphia' (a paraphyletic grade that includes the extant orders Didelphimorphia and Paucituberculata) or Australidelphia because of an apparent absence of unequivocal australidelphian dental synapomorphies [13]. Given that the Tingamarran metatherian petrosals and tarsals appear to represent a plesiomorphic crown-group marsupial and a plesiomorphic australidelphian respectively, referral of the petrosals and tarsals to D. murgonensis appears reasonable based on comparative morphology.

Following Szalay [5], Ekdale et al. [21] and Ladevèze [6], we have also used regression analyses to assess whether the Tingamarran metatherian petrosals and tarsals are of appropriate size for referral to Djarthia murgonensis. Ekdale et al. [21] and Ladevèze [6] calculated the area of the promontorium of the petrosal and molar area for a range of different eutherians and metatherians respectively, and used these in regression analyses. We have found that promontorium area is difficult to calculate unambiguously because the precise extent of the promontorium relative to adjacent regions of the petrosal is not always obvious, and it cannot be calculated in intact skulls of taxa in which the promontorium is not completely exposed in ventral view; instead, we have measured maximum petrosal length in ventral view for a range of different marsupial taxa (Table S1). We used mesiodistal length of the second upper molar (M2) as our dental measurement (Table S1), rather than molar area (as used by Ekdale et al. [21] and Ladevèze [6]), because the lingual portions of the M2 and M3 in the holotype of D. murgonensis are missing and so areas cannot be calculated for these teeth. Szalay [5] suggested that in metatherians the width of the lower ankle joint (i.e., the combined width of the ectal and sustentacular facets of non-australidelphian taxa, or the width of the continuous lower ankle joint of australidelphians) correlates with the mesiodistal lengths of the second upper and second lower molars, although he did not provide quantitative data to demonstrate this relationship. We therefore measured M2 mesiodistal length and lower ankle joint width (taken from the calcaneus) for a number of marsupials (Table S2) to investigate the allometric relationship between these measurements and to test the association of the Tingamarran metatherian tarsals with D. murgonensis. Measurements from specimens of a range of extant and fossil marsupials available at the University of New South Wales were taken using a Wild MMS235 measuring device. Graphs of 1) M2 mesiodistal length against maximum petrosal length (Figure S1), and 2) M2 mesiodistal length against lower ankle joint width (Figure S2) were plotted; lines of best fit and their associated equations were calculated, and R² values were determined. Values for D. murgonensis were then plotted on each graph, assuming that the Tingamarran metatherian petrosals and tarsals are referable to this taxon.
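For illustration, the following Python sketch shows the kind of bivariate least-squares fit used here (fit line, R², and a size prediction for a taxon known only from teeth). All measurement values in it are hypothetical placeholders, not the data of Tables S1 and S2.

import numpy as np

# Hypothetical M2 mesiodistal lengths (mm) and maximum petrosal lengths (mm)
# for a set of reference marsupials; the real values are in Table S1.
m2_length = np.array([1.8, 2.1, 2.6, 3.0, 3.9, 4.5, 5.2])
petrosal_length = np.array([5.1, 5.9, 6.8, 7.6, 9.4, 10.8, 12.1])

# Least-squares line of best fit: petrosal_length = slope * m2 + intercept.
slope, intercept = np.polyfit(m2_length, petrosal_length, 1)

# Coefficient of determination (R^2) for the fit.
predicted = slope * m2_length + intercept
ss_res = np.sum((petrosal_length - predicted) ** 2)
ss_tot = np.sum((petrosal_length - petrosal_length.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Predicted petrosal length for a hypothetical M2 length of 2.4 mm; comparing
# such a prediction with the measured petrosals tests the size match.
m2_test = 2.4
print(f"fit: y = {slope:.3f}x + {intercept:.3f}, R^2 = {r_squared:.3f}")
print(f"predicted petrosal length: {slope * m2_test + intercept:.2f} mm")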
Because all the Tingamarran metatherian petrosals referable to D. murgonensis are incomplete, a composite measurement for maximum petrosal length was estimated from QM F36393, F36397 and F32322, which are the three most complete specimens and are illustrated in Figure 1. Lower ankle joint width values for D. murgonensis were taken from QM F52747 (illustrated in Figure 2), F52748 and F52749, which are all right calcanea. Of the dental specimens recovered from Tingamarra to date, those referable to D. murgonensis show the best fit in terms of size to the Tingamarran metatherian petrosals and tarsals, based on this regression analysis.

The morphological character matrix was analysed using maximum parsimony as implemented in PAUP* 4.0b10 [35]. The two-stage heuristic search used by Worthy et al. [36] was employed here. Support values were calculated using bootstrapping (2000 replicates using standard PAUP* settings) and Bremer support (using the two-stage heuristic search strategy of Worthy et al. [36]). The strict consensus of the most parsimonious trees, together with support values, is given in Figure 3A.

The morphological dataset was combined with the 20.1 kb molecular supermatrix of Beck [10], which comprises DNA sequence data from 7 nuclear genes (APOB, BRCA1, IRBP, PGK1, P1, RAG1, and VWF) and 15 mitochondrial loci (12S rRNA, 16S rRNA, tRNA valine, and 12 H-strand protein-coding genes), and analysed using MrBayes 3.1.2 [37]. Further details regarding the supermatrix are given in [10]. Following Beck [10], the molecular supermatrix was partitioned by gene, codon position (for protein-coding genes) and stem and loop regions (for ribosomal genes), with each partition assigned the model selected for it by MrModelTest 2.2 [38] assuming the Akaike Information Criterion [39]. The morphological partition was assigned an Mk+G model [37,40]. Using MrBayes 3.1.2, the combined analysis comprised four independent runs, each comprising 8 MCMC chains (7 'heated' and 1 'cold'), with the temperature of the heated chains reduced from the default value of 0.2 to 0.15 to improve mixing. These analyses were run for 5 million generations, sampling trees every 100 generations. The first 4 million generations were discarded as burn-in, and a 50% majority rule consensus was constructed from the last one million generations (Figure 3B).
BEAST molecular dating analysis

Molecular divergence dates were calculated using the 20.1 kb molecular supermatrix of Beck [10] and the Bayesian relaxed molecular clock method implemented in BEAST 1.4 [26]. The partitioning scheme and models used in the MrBayes analyses (see above) were followed, and an uncorrelated lognormal relaxed clock [26] and a Yule tree prior (as recommended for species-level phylogenies [41]) were assumed. Prior estimates for the divergence dates of selected nodes were specified using transformed lognormal distributions [26,41,42]: these require specification of a 'hard' minimum bound (with a 0% probability of the divergence being younger than this date), a mean estimate, and a 'soft' maximum bound (with a 5% probability of the divergence being older than this date). Here, the 'hard' minimum bounds were based on the minimum age of the oldest fossil that can be confidently assigned to a particular node. Given the incompleteness of the marsupial fossil record (particularly in Australasia), the mean estimates for divergence dates used here were taken from recent molecular studies. However, some current molecular dating methods may not be able to account for abrupt changes in the rate of molecular evolution, leading to overestimated divergence dates [43]; indeed, recent point estimates for divergences within mammals based on molecular data often appear unrealistically old from a palaeontological perspective (e.g., Wible et al. [44]; although the lower end of confidence intervals for these molecular divergences usually agrees well with the fossil record). For this reason, we selected lower bounds of estimated age ranges (usually one standard deviation less than the point estimate for a particular node) from previous molecular studies as mean values. The 'soft' maximum bound represents the oldest age for a divergence that, in our opinion, appears feasible based on current molecular and palaeontological evidence. The calibrations used are given in Text S4.

Because third codon positions of mitochondrial protein-coding genes have been shown to mislead some phylogenetic analyses of marsupials [4,10], two BEAST analyses were carried out: one using the full molecular matrix ('full'), and one in which the third codon positions of the mitochondrial protein-coding genes were excluded ('no mt3'). Following a pre-burn-in of 1 million generations, both BEAST analyses were run for 10 million generations, sampling trees every 1000 generations. The first 9 million generations were discarded as burn-in, with a 50% majority rule consensus constructed from trees sampled from the last 1 million generations.

The BEAST analyses supported the phylogeny given in Figure S3 (analyses of the 'full' and 'no mt3' datasets recovered the exact same topology except within macropodines; this conflict is represented as an unresolved trichotomy). As seen in Figure S3 (nodes 1 and 3), both BEAST analyses recovered a sister-group relationship between monotremes and marsupials (= Marsupionta), which is almost certainly anomalous given that monophyly of marsupials and placentals (= Theria) is now strongly supported by both morphological and recent molecular data [45-47]. However, relationships within marsupials are congruent with other recent molecular phylogenies [3,4,10,48]. The divergence dates and 95% confidence intervals calculated for each node, using both the 'full' and 'no mt3' datasets, are given in Table S3. Australidelphian synapomorphies must have evolved between node 6 (the split between Australidelphia and Paucituberculata) and node 7 (the first divergence within Australidelphia), giving a range of 65.04-75.09 MYA (95% confidence interval = 59.21-84.32 MYA). Nodes 7, 8, 18 and 19 represent the divergences between the extant australidelphian orders; based on the results of the BEAST analyses, these occurred over the period 56.85-65.5 MYA (95% confidence interval = 51.09-71.77 MYA).
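As an illustration of how such a 'hard minimum / mean / soft maximum' calibration can be turned into the parameters of an offset lognormal prior, the following Python sketch solves for mu and sigma under those three constraints. The three ages used are placeholders, not the calibrations of Text S4, and the closed-form solution assumes the mean is not too close to the hard minimum (otherwise the quadratic has no real root).

import math
from scipy.stats import lognorm

hard_min = 54.6   # MYA; 0% prior probability of a younger divergence (offset)
mean_age = 66.0   # MYA; desired mean of the prior
soft_max = 84.0   # MYA; 5% prior probability of an older divergence

z95 = 1.6449  # standard normal 95th percentile

# With age = hard_min + L, where L ~ Lognormal(mu, sigma):
#   mean:      hard_min + exp(mu + sigma^2 / 2) = mean_age
#   soft max:  mu + z95 * sigma = ln(soft_max - hard_min)
# Eliminating mu gives a quadratic in sigma; take the smaller root.
log_ratio = math.log((mean_age - hard_min) / (soft_max - hard_min))
sigma = z95 - math.sqrt(z95 ** 2 + 2.0 * log_ratio)
mu = math.log(soft_max - hard_min) - z95 * sigma

# Check: the fitted prior should reproduce the requested mean and 95% bound.
prior = lognorm(s=sigma, scale=math.exp(mu), loc=hard_min)
print(f"mu = {mu:.4f}, sigma = {sigma:.4f}")
print(f"prior mean = {prior.mean():.2f} MYA (target {mean_age})")
print(f"prior 95%  = {prior.ppf(0.95):.2f} MYA (target {soft_max})")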
Supporting Information

Text S1. Justification for an early Eocene age of the Tingamarra Local Fauna. Found at: doi:10.1371/journal.pone.0001858.s001 (0.03 MB PDF)

Text S2. List of morphological characters scored for Djarthia murgonensis and/or modified from Sánchez-Villagra et al. [9]. Numbering follows Sánchez-Villagra et al. [9]. Found at: doi:10.1371/journal.pone.0001858.s002 (0.06 MB PDF)

Text S3. Morphological character matrix. Found at: doi:10.1371/journal.pone.0001858.s003 (0.02 MB PDF)

Text S4. Fossil calibration points used in the BEAST molecular dating analysis. Found at: doi:10.1371/journal.pone.0001858.s004 (0.04 MB PDF)

Figure S3. 50% majority rule consensus from partitioned Bayesian analysis using BEAST (10 million generations, 9 million generation burn-in) of the 'full' and 'no mt3' versions of the molecular matrix of Beck [6]. Numbers correspond to nodes given in Table S3.

Table S1. Measurements of maximum petrosal length and M2 mesiodistal length for a range of extant and fossil marsupials (fossil taxa are indicated by †). Measurements for Djarthia murgonensis assume that the Tingamarran metatherian petrosals QM F36397, F36393 and F32322 (illustrated in Figure 1) are referable to that taxon. No petrosal measurement is available for Thylacotinga bartholomaii because the petrosal of this taxon is currently unknown. Found at: doi:10.1371/journal.pone.0001858.s005 (0.02 MB PDF)

Table S2. Measurements of lower ankle joint width (taken from the calcaneus) and M2 mesiodistal length for a range of extant marsupials, plus Djarthia murgonensis and Thylacotinga bartholomaii. Measurements for D. murgonensis assume that the Tingamarran metatherian calcanea QM F52747 (illustrated in Figure 2), F52748 and F52749 are referable to that taxon. No lower ankle joint width measurement is available for T. bartholomaii, as the calcaneus of this taxon is currently unknown. Found at: doi:10.1371/journal.pone.0001858.s006 (0.01 MB PDF)

Table S3. Molecular divergence dates within marsupials as calculated by BEAST assuming an uncorrelated lognormal relaxed clock. Dates were calculated using the supermatrix of Beck [6], including ('full') and excluding ('no mt3') the third codon positions of mitochondrial protein-coding genes. Node numbers correspond to the phylogeny in Figure S3. Point estimates and 95% confidence intervals are given for each node. Found at: doi:10.1371/journal.pone.0001858.s007 (0.01 MB PDF)

Figure S1. Plot of M2 mesiodistal length against maximum petrosal length for the specimens listed in Table S1. Specimens of Djarthia murgonensis and Thylacotinga bartholomaii are identified by squares and circles respectively. The predicted maximum petrosal length for T. bartholomaii was calculated according to the equation for the line of best fit. Found at: doi:10.1371/journal.pone.0001858.s008 (0.04 MB PDF)

Figure S2. Plot of M2 mesiodistal length against lower ankle joint width for the specimens listed in Table S2. Specimens of Djarthia murgonensis and Thylacotinga bartholomaii are identified by squares and circles respectively. Predicted lower ankle joint width for T. bartholomaii was calculated according to the equation for the line of best fit. Found at: doi:10.1371/journal.pone.0001858.s009 (0.04 MB PDF)
Figure 3. Phylogenetic relationships of Djarthia murgonensis. A, strict consensus of 2 most parsimonious trees (tree length = 886; consistency index (CI) excluding uninformative characters = 0.357; retention index (RI) = 0.646) from analysis of a 242 morphological character matrix [8]. Position of Djarthia highlighted in red. Australidelphia is indicated. Numbers above branches represent bootstrap values (2000 replicates); numbers below branches represent Bremer support values. B, Bayesian 50% majority rule consensus from analysis of the morphological matrix in combination with a 20.1 kb molecular data set [10]. Position of Djarthia highlighted in red. Australidelphia is indicated. Numbers at nodes represent Bayesian posterior probabilities. doi:10.1371/journal.pone.0001858.g003
Femoral-Side ACL Rupture: Arthroscopic Transosseous Refixation Technique and Early Outcomes

Introduction

Anterior cruciate ligament (ACL) tears are common injuries, especially during winter sports. The current gold standard for ACL tears is anatomic ACL reconstruction with autograft or allograft [1], recommended first of all for athletes and active patients, but also for patients with giving-way symptoms. The femoral and tibial attachment sites of the anterior cruciate ligament contain mechanoreceptors such as the Pacinian corpuscles, Ruffini endings, and Golgi tendon organ-like corpuscles, all of which play a role in proprioception. With an ACL tear, those mechanoreceptors are ruptured, which may lead to a lack of afferent sensory input to the central nervous system [2,3]. In addition, the time elapsed since the injury may affect proprioception and postural stability. In some cases, the ACL tear consists of an isolated femoral-side rupture, diagnosed clinically by positive drawer, Lachman and (depending on the pain) pivot shift tests, shown on MRI, and confirmed during the arthroscopic procedure. In those cases, our study presents a new ACL refixation technique that aims to preserve the native ACL and its mechanoreceptors, leading to better proprioception and stability. Moreover, drilling the femoral bone releases bone marrow stem cells that promote healing of the ligament, as described by Steadman et al. [4] in the "healing response technique", an all-arthroscopic procedure that preserves the native ACL and uses an arthroscopic awl with a 45-degree angle to make holes in the femoral attachment of the ACL [4].

Material and Methods

The inclusion and exclusion criteria for this technique are shown in Table 1. We suggest this technique for patients with an intra-synovial ACL lesion, detached from the femoral insertion, with positive Lachman and drawer tests, as the MRI examination can primarily show (Figure 1). Before the arthroscopic procedure, the patient is placed in the supine position and, after intravenous infusion of an antibiotic solution, the knee is positioned on a leg holder, flexed at 90 degrees, with a hemostatic device at the proximal femur. The first step consists of a diagnostic arthroscopy through the standard arthroscopic portals; after lavage of the hemarthrosis, any slight meniscal lesions, if present, are regularized. If in the pivot compartment the ACL shows a detachment from the femoral insertion with the synovial sleeve still intact around the bundles and unstable during the hook-traction test (Figure 2), we proceed with the second step, the ACL suture. After partial removal of Hoffa's fat pad, a cannula is inserted in the standard anteromedial portal, with the arthroscopic optical device in the standard anterolateral portal.
Through a modified Mason-Allen stitch technique, a Vicryl® #2 wire is passed through the distal ACL fibers with a special arthroscopic tool (Figure 3), and both Vicryl® wire ends are drilled out to the distal lateral femur in a parallel fashion with two slotted guide wires (Figure 4). After a small skin incision and splitting of the fascia lata, the two wires are knotted over the transosseous bridge with an appropriate knot-pusher device, directly over the lateral distal femoral cortex. This technique achieves an anatomical reinsertion of the ACL, as well as normal tensioning of the original ligament fibers (Figure 5). Finally, after plentiful arthroscopic lavage of the joint, the soft tissues are closed layer by layer and covered with a sterile dressing; the knee is locked at 15° of flexion with an adjustable R.O.M. knee brace for the next two weeks with partial weight bearing, together with careful antithrombotic prophylaxis with low-molecular-weight heparin. At the end of the surgical procedure we performed the Lachman, drawer and pivot shift tests again; they were completely negative, as also shown by the arthroscopic view of the reinserted ACL, which was extremely stable on the hook-traction test. We performed another MRI the day after the surgical procedure, which showed that the native ACL had good tension (Figure 6), with the holes centered on the femoral isometric insertion point (Figure 7); it is also possible to see the microfractures made for Steadman's healing response technique (Figure 8). The stitches are removed after 14 days; then the flexion of the knee brace is increased to 60 degrees for another two weeks, during which the patient can start isometric quadriceps exercises and walking in water, and gradually achieve complete weight bearing without crutches. One month after the operation, the knee flexion with the brace is increased to 90 degrees for another two weeks, and the patient can start swimming (the breaststroke is forbidden). At the end of those two weeks the patient can start home cycling, if knee flexion easily reaches 90 degrees; jogging is permitted no earlier than the end of the 12th week, and contact sports, ball sports and pivoting-torsional movements are forbidden until the end of the 6th month. Only 10 patients accepted this new technique in the past winter-sport season; for that reason we cannot present long-term results, and there are no reports of this surgical procedure in the literature. However, when interviewed after 6 months, those patients reported that they did not feel any instability of the knee and would be able to resume their sports activities.

Results and Discussion

This new technique is a valid option for patients with an isolated, acute, femoral-side ACL lesion without any collateral ligament lesions or unstable meniscal tears; in other cases we prefer the standard reconstruction with ST. Performing this technique preserves the native ACL, avoiding the risks of an ACL reconstruction, such as graft failure or the loss of proprioception that is well documented in patients who have undergone ACL reconstruction, and highlighting the advantages of ACL preservation. ACL refixation maintains the original proprioception, because the patient's own ACL, containing all the mechanoreceptors involved in knee proprioception, is retained. As described by Steadman et al.
in the healing response technique [4], 93% of patients had a minimum 2-year follow-up, at an average of 7.6 years. The average preoperative Lysholm score was 54 and improved to an average of 90 postoperatively (p = 0.001); the mean Tegner activity scale at follow-up was 5 (range, 2 to 9); mean patient satisfaction was 10 (range, 4 to 10); higher patient satisfaction was correlated with an increased Lysholm score at follow-up. The Tegner activity scale was associated with the postoperative Lysholm score. This study demonstrates the effectiveness of the healing response procedure in allowing patients to return to high levels of recreational activity and in restoring knee function to normal levels. In a select group of mature patients with acute proximal ACL tears, the healing response procedure is an effective treatment technique.

Conclusion

We therefore believe that reattaching the proximal ACL stump to the footprint of the femoral insertion, after drilling holes in this region to deliver stem cells and growth factors to the scaffold, might be a way to create the best possible conditions to stimulate the healing process [5,6].
Paragraph Shrinking Strategy for Teaching Reading Nursing Student

ABSTRACT

The goal of this study is to determine whether or not the paragraph-shrinking strategy significantly affected students' reading comprehension. This study used an experimental design with a posttest-only control group. The control class received treatment using the usual technique, while the experimental class received treatment using the paragraph shrinking strategy. This study used a multiple-choice reading test as the instrument to collect the data and determine whether the hypothesis was correct.

INTRODUCTION

Finally, according to Vacca et al. (2014:21), a variety of classroom-related factors influence reading comprehension in a given discipline: first, the learner's prior knowledge of, attitude toward, and interest in the subject; second, the learner's purpose for engaging in reading, writing, and discussion; third, the vocabulary and conceptual difficulty of the text material; fourth, the assumptions that the text writers make about their audience of readers; fifth, the text structures that writers use to organize ideas and information; and last, the teacher's beliefs about and attitude toward the use of texts in learning situations.

Teaching is the activity in which the teacher delivers knowledge to the students. According to Leamnson (2012:51), teaching is any activity that has the conscious intention of, and potential for, facilitating learning in another. This is an uncommon definition, and many teachers, particularly those in departments of education, will find it unacceptable. The reasons are easily discerned: as defined here, teaching does not necessarily imply that any learning is going on. Farrell (2009:20) agrees that, in teaching reading, the teacher should remember to prepare an effective reading lesson: the teacher should bring something important to the text and provide the readers with schemata, the networks of prior interpretation that become the basis for comprehension.

There are several definitions of Paragraph Shrinking given by experts. First, for Harris and Graham (2015:93), Paragraph Shrinking is a simple technique for identifying the main idea of a paragraph or short section of text, and therefore a good strategy for training students to read and understand a paragraph. Next, according to Danielle (2007:185), Paragraph Shrinking is designed to develop comprehension through summarization and main idea identification. Then, Karen R. H., cited in Cartika (2014:4), states that Paragraph Shrinking is a simple strategy for identifying the main idea of a paragraph or short section of text; in this strategy, students are required to find the basic idea of a paragraph. Based on the definitions above, it can be concluded that the Paragraph Shrinking strategy is a teaching-reading strategy that trains students' ability to understand the paragraphs of a text.

Furthermore, there are several procedures for the Paragraph Shrinking strategy. According to Mathes et al., cited in Harris and Graham (2015:93), the steps of Paragraph Shrinking are as follows: first, identify the subject of the paragraph by looking for who or what the paragraph is mostly about; second, state the most important information about that who or what; third, say the main idea in 10 or fewer words. Then, according to Fuchs and Burrish, cited in Wilson and Blednick (2011:130), the steps of Paragraph Shrinking are as follows: first, each student reads aloud to a partner, who listens without reading the text.
Second, after each paragraph, the students stop to summarize the main points. Third, the students decide who or what each paragraph is about, and what is important about that who or what. Fourth, if the students disagree, they silently skim the paragraph again and answer the question a second time. Fifth, the students switch reading and listening tasks. Sixth, progress is monitored and checked for correct responses. Next, Harris and Graham (2012:141) state that the steps of Paragraph Shrinking are as follows: first, after the students have finished the repeated reading routine, the stronger reader continues reading the new text and stops to summarize a paragraph after reading for 5 minutes; second, the weaker reader then asks the stronger reader to "name the who or what, tell the most important thing about the who or what, and say the main idea in 10 words or less"; third, the two peers switch roles and repeat the routine with the next portion of the passage.

METHOD

The writer used experimental research with a posttest-only control group design. Gay et al. (2012:250) argue that experimental research is the only type of research that can test a hypothesis to establish a cause-and-effect relationship: the writer manipulates at least one independent variable, controls other relevant variables, and observes the effect on one or more dependent variables. The purpose was to find out whether teaching reading by using the paragraph shrinking strategy is effective or not. There were two classes in this research, an experimental and a control class. The experimental class was taught using the paragraph shrinking strategy, and the control class was taught using a conventional strategy. The total population of this research was 84 students in two classes of the nursing study program at the University of Bina Sehat PPNI Mojokerto.

FINDING AND DISCUSSION

The findings of the research are used to answer the research question and hypothesis: did the Paragraph Shrinking strategy give a significant effect on students' reading comprehension or not? To answer this question, data were collected from the two classes (experimental and control) through a reading test in multiple-choice form. The research examined the effectiveness of teaching reading by using the Paragraph Shrinking strategy on students' reading comprehension. The data showed that the calculated t of 6.13 was bigger than the table t of 2.021 at a significance level of 0.05 and degrees of freedom (df) of n1 + n2 - 2 = 44; each class comprised 23 students, with total post-test scores of 1686 in the experimental class and 1236 in the control class. The mean score was 73.30 for the experimental class and 53.74 for the control class. From the result of the data analysis, it was found that the calculated t was bigger than the table t. This means that the research hypothesis (Ha) was accepted and the null hypothesis (H0) was rejected; in other words, teaching reading by using the Paragraph Shrinking strategy gave a significant effect on students' reading comprehension, as proven by the calculated t of 6.13, which is bigger than the table t of 2.021.
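For readers unfamiliar with this analysis, the following Python sketch reproduces the kind of independent-samples t-test reported above. The two score arrays are hypothetical placeholders standing in for the 23 post-test scores per class; only the group sizes (n1 = n2 = 23, df = 44) match the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(73.3, 8.0, size=23)  # placeholder post-test scores
control = rng.normal(53.7, 8.0, size=23)       # placeholder post-test scores

# Two-sample t-test assuming equal variances (the classic pooled-variance test).
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)

# Critical t for a two-tailed test at alpha = 0.05 with df = n1 + n2 - 2.
df = len(experimental) + len(control) - 2
t_critical = stats.t.ppf(1 - 0.05 / 2, df)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, critical t (df={df}) = {t_critical:.3f}")
# Decision rule used in the study: reject H0 when |t| exceeds the critical t.
print("reject H0" if abs(t_stat) > t_critical else "fail to reject H0")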
This effectiveness could also be seen in the classroom interaction during the research. The strategy helped the students understand the reading lessons, and the students had more motivation to study because with this strategy they could get information, gain more knowledge, and have fun while studying. The students could also find and understand the meaning, the topic, and the main ideas of each paragraph in the text. Finally, the students could easily formulate questions about every paragraph in the text. The writer assumes that many factors influenced this, both internal (intrinsic) and external (extrinsic). First, the students were interested in listening to the text read by their friends, because the students were asked to read the text aloud. Second, the students were more engaged during the teaching and learning process because they could actively read the text. Third, the students were able to determine the topic and main ideas of every paragraph read by their friends. Fourth, paragraph shrinking can be used to help the students understand the text, and the teacher applied the paragraph shrinking strategy well. In addition, the teacher could manage the students and the classroom environment. Meanwhile, in the control class the students were less interested in learning English and were bored with the strategy applied by the teacher in a monotonous way; they found it difficult to extract the meaning and the main ideas of the text, and as a result they got low scores in the reading class.

CONCLUSION

The writer found that there was a significant effect of teaching reading by using the paragraph shrinking strategy: the students understood and comprehended the text they had read and answered the questions correctly. The writer hopes this journal article can be used by others to study and explore for better results. This strategy can be applied in teaching reading by English teachers, not only at the junior or senior high school level but also with college students, to improve understanding of English texts.
Development of a portable device to quantify hepatic steatosis in potential donor livers

An accurate estimation of liver fat content is necessary to predict how a donated liver will function after transplantation. Currently, a pathologist needs to be available at all hours of the day, even at remote hospitals, when an organ donor is procured. Even among expert pathologists, the estimation of liver fat content is operator-dependent. Here we describe the development of a low-cost, end-to-end artificial intelligence platform to evaluate liver fat content on a donor liver biopsy slide in real time. The hardware includes a high-resolution camera, display, and GPU to acquire and process donor liver biopsy slides. A deep learning model was trained to label and quantify fat globules in liver tissue. The algorithm was deployed on the device to enable real-time quantification and characterization of fat content for transplant decision-making. This information is displayed on the device and can also be sent to a cloud platform for further analysis.

Introduction

Thousands of patients die every year from the shortage of donor livers for transplantation (1). Therefore, transplant surgeons seek to expand the criteria and safe use of potentially transplantable livers. Livers with high fat content, or steatosis, are thought to function poorly following transplantation, increasing the risk of early graft dysfunction, the need for re-transplantation, or death (2,3). However, some have argued that these organs are transplantable, especially if appropriate recipients are chosen (4). Recent reports raise concerns that manual fat scores can vary between pathologists (5-7). More accurate estimates may enable the use of more livers. About 33% of livers considered for transplantation are biopsied as part of the evaluation process (8). Often, a community pathologist is called in to evaluate the liver tissue, night or day, wherever the liver donor is located. However, this process takes time, and studies have shown significantly different fat scores reported between different pathologists (5). Recognizing this problem, several groups have sought to automate steatosis scoring (6,7,9,10). Previously, we detailed the development of a machine-learning algorithm that labels fat globules with high accuracy by leveraging pre-trained neural networks built on a labeled database of donor liver slides (11,12). The analysis of these data resided on the cloud, posing practical limitations for implementation in a clinical setting. First, cloud analysis depends on secure access to the internet, which may not be readily available in remote community hospitals where donors may become available. Second, waiting for the transplant surgical team to retrieve the donor liver biopsy for analysis at the transplanting center can delay the determination of whether the organ is safe for use and prolong cold ischemia time. Lastly, reliance on the cloud carries the risk of exposing private patient data while transferring health information for central analysis. A point-of-care device enables the analysis to be de-identified and/or deleted without the exposure risks related to the cloud. Therefore, we propose the development of an end-to-end device that leverages an artificial intelligence (AI)-based algorithm, a graphics processing unit (GPU), and a high-definition camera to detect percent steatosis in donor livers with high precision and accuracy.
The device is portable, computationally efficient, independent of internet access, and low-cost.

Results

Software for AI-based steatosis detection at the point of care

There are several challenges in developing a machine learning algorithm that runs on a device. Since the algorithm for segmenting fat cells usually requires significant computational resources and memory, we used model compression techniques to transfer the machine learning model to the Nvidia Jetson Nano device (Figure 1A). We also utilized the GPU computing power to improve the latency of slide analysis and inference, allowing almost real-time results.

Hardware assembly

To take pictures directly through the microscope's eyepiece, we mounted an IMX477-IR Cut Arducam camera module (12.3 megapixels) with a low-light sensor to a 3D-printed adapter. The adapter is friction-fitted to the microscope eyepiece, 38 mm in diameter (Figure 1B). The camera driver has auto-focus and can adjust to low-light conditions. Less expensive cameras without low-light capability did not take adequate photos. The IMX camera module connects directly to the Jetson Nano (Nvidia) via the CSI-2 port. The Jetson Nano was mounted in a custom-designed, 3D-printed case made of polylactic acid. The case has multiple open sides for easy access to ports. On the top is a vent hole designed to allow optimal airflow for the heat sink. In front of the vent is a display mount that allows a liquid crystal display to be easily attached and detached (Figure 1). With the display mounted, the device is 24 cm high, 16.2 cm wide, and 14.5 cm deep.

Device software

A Python script provides a graphical user interface to access the camera capture function natively. To reduce strain on the CPU, the resolution was set to 1080 × 720 pixels and the frame rate to 60 fps. Using the script, a user can trigger the device to acquire an image through the microscope and store it in the internal memory, where it can be accessed for analysis. The U-net network that we use to detect liver fat content accepts 256 × 256-pixel tile input (12). Therefore, we developed a script to tile the image captured from the sensor into multiple 256 × 256 tiles. The tiles are then passed into the neural network to create a steatosis mask. The network analyzes each pixel in the tiles to identify whether it represents a fat pixel. By programmatically counting all fat vacuoles and using a filter to remove the background, the device can estimate the percent steatosis in the liver tiles. The system then computes the average steatosis across the images sampled from the slides.

Comparison between cloud platform and end-to-end device

The assessment of steatosis on 33 slides by the device was compared with the whole-slide assessment using the same algorithm on a cloud-based platform previously described (12). There was a strong positive correlation (r = 0.9399) between the two techniques. Figure 2 shows a plot of cloud-based steatosis scores against device-derived scores. When slides that were particularly divergent between the two techniques (slide 20) were compared with less divergent slides (slide 26), the former had patches of steatosis rather than the uniform distribution seen in the latter (Figure 3).

Figure 2. Correlation of cloud- and device-scored steatosis. With the trend line intercept set at (0,0), a strong correlation (r = 0.9339) was noted, with a few outliers that were subsequently inspected. The open circle (○) corresponds to slide 20; the X point corresponds to slide 26 (see Figure 3).
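The following Python sketch illustrates this kind of device-versus-cloud comparison: a Pearson correlation between paired per-slide steatosis scores, plus a through-origin trend line as in Figure 2. The paired values below are hypothetical placeholders, not the 33 slide scores from the study.

import numpy as np
from scipy import stats

cloud_scores = np.array([2.0, 5.5, 11.0, 18.5, 27.0, 40.0])   # % steatosis
device_scores = np.array([2.5, 5.0, 12.5, 17.0, 29.5, 38.0])  # % steatosis

r, p_value = stats.pearsonr(cloud_scores, device_scores)
print(f"Pearson r = {r:.4f} (p = {p_value:.4g})")

# Trend line with the intercept fixed at the origin, as in Figure 2:
# the least-squares slope for y = b * x is sum(x*y) / sum(x^2).
slope = np.sum(cloud_scores * device_scores) / np.sum(cloud_scores ** 2)
print(f"through-origin slope = {slope:.3f}")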
Discussion

Currently, the assessment of liver biopsies for steatosis can be variable, and such assessments can be difficult to obtain outside of dedicated liver transplant centers. Therefore, we developed this prototype standalone device capable of acquiring images of liver biopsies from a microscope, processing them, analyzing them, and measuring the percent steatosis. We hope that these assessments will facilitate the evaluation of donated livers for transplantation, to reduce the number discarded and to optimally match them with appropriate recipients. The device relies on the AI algorithm our group previously trained on a Google Cloud Platform to label fat vacuoles in a liver specimen. Fortunately, our previous work had shown that the algorithm was very good at recognizing artifacts in the image caused by tears in the tissue and not scoring these as steatosis (12). While the steatosis correlation between the device and the cloud was high, it was not perfect. Despite the device capturing three images from each slide, percent steatosis varied considerably on several slides. Therefore, three images may not be enough to score steatosis reproducibly. Future iterations of this algorithm should include an assessment of image and biopsy quality to identify images that are out of focus or contain too many artifacts, rendering them unusable. This end-to-end device can be used at a donor hospital to obtain images from the hospital's microscope and assess the steatosis of a donor liver without a digital scanner or a connection to the internet. The device is powerful enough that images can be analyzed on the device without needing to upload large files to a cloud platform. Keeping the data on the device also reduces the danger of sending protected personal health information to the cloud. The biopsy data are not permanently stored on the device and can be erased as soon as a result is given. Results are obtained more rapidly and reliably by using the device: since the slide images are relatively large, it can take a substantial amount of time to transfer them over a network with limited bandwidth, whereas on the device the image acquisition and analysis are done locally using a GPU, so the analysis can be completed within a few minutes. To assess the quality of a donated liver, a transplant team considers multiple donor variables, including age, medical history, cause of death, and laboratory values. After the organ is provisionally accepted, a procurement team is dispatched to the hospital where the donor candidate is located. In many circumstances, the potential donor may be at a remote community hospital that does not have an experienced, on-call liver pathologist who could readily screen for liver steatosis, or there may be an extended delay in bringing in an on-call pathologist to review a donor liver biopsy in the middle of the night. Moreover, review of fat globules on hematoxylin and eosin-stained frozen biopsy slides is uncommon in the community hospital setting, as other stains that take days to process are the preferred modality for non-urgent clinical circumstances.
In some cases, a screenshot of the donor liver biopsy slide taken through the microscope may be crudely sent to the supervising transplant surgeon, who reviews the image to visually estimate the degree of fat involvement before approving liver procurement. A point-of-care device offers several advantages over a cloud-dependent platform for donor liver biopsy analysis. First, a remote community hospital may lack the internet access and computing power necessary to utilize a robust cloud-dependent platform. Second, a point-of-care device can quickly define the degree of fat involvement without requiring a pathologist to arrive, often in the middle of the night, to review the slide before permitting transplantation. Before deciding not to use a liver solely on the basis of the assessment provided by the device, we would recommend having a pathologist examine the slides to prevent the unnecessary discarding of livers. Third, the use of such a device for real-time, rapid evaluation of biopsies could also be advantageous in assessing the impact of machine perfusion on reducing fat content in donated livers intended for transplantation. Finally, the use of a closed-system device, disconnected from the internet, reduces the risk of exposing private health information during the transfer of data to a cloud-based central system. Logistically, a device available in real time can streamline the transplant decision-making process and limit the aforementioned barriers to transplantation. This device is still a prototype, and there are several improvements to be made. Despite being relatively small, it is clunky and will benefit from usability studies to improve the design. The microscope adapter will need to come in different sizes to accommodate different laboratory microscopes. Currently, the use of this device still requires the slides to be prepared, and it is therefore not completely independent of local hospital support. Additional work needs to be done to characterize macro- versus microsteatosis and its impact on outcomes. Initial work suggested a distribution of vacuole sizes rather than two distinct populations. Biopsy characteristics such as fibrosis and inflammation are not characterized by the device and should be trained into future versions of the algorithm. In addition, it is important to acknowledge competing technologies based on increasingly powerful smartphones and access to cloud computing (9). Machine perfusion pumps may also mitigate concerns about prolonged cold ischemia times.

Hardware components

The components of this device include the Jetson Nano™ (Nvidia) with a graphics processing unit (GPU), a Waveshare HQ camera with a 12.3 MP IMX477 high-sensitivity sensor, a 7-inch IPS capacitive touch display, ribbon connectors, a power supply, and an HDMI cable.

Software components

We developed our platform on the native Ubuntu™ 18.04 operating system of the Jetson Nano. Scripts were written in Python™, with the help of the following libraries: TensorFlow™ for machine learning and deep learning inference, OpenCV™ for image analysis, and the Argus API for image ingestion.

Liver steatosis detection algorithm

Previously, a U-net network had been pre-trained on a cell segmentation task and was capable of segmenting fat vacuoles with high accuracy (12). Utilizing this established U-net platform to detect the fat content in liver tissue, the algorithm assessed every pixel on a liver donor biopsy slide to determine whether it represented a fat vacuole or normal liver cells.

Imaging and data collection

Three images of each slide were acquired by the device. The physician can adjust the microscope to different areas of the slide and use the touchscreen on the device to capture sample images of the area of interest. After image acquisition, several filters are applied to confirm that the picture is stable and to remove background noise from the data collection process. The images are then processed into 256 × 256-pixel tiles to be analyzed by the AI algorithm. Within these tiles, the AI algorithm labels each pixel as a "one" for the presence of steatosis or a "zero" for the absence of steatosis to create the mask.
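A minimal Python sketch of the tiling and mask-based percent-steatosis computation described above follows, assuming a binary mask (1 = steatosis pixel, 0 = background or normal tissue) has already been produced by the segmentation network. The random image and thresholds below are placeholders for a captured microscope frame and the network output.

import numpy as np

TILE = 256

def tile_image(image: np.ndarray, tile: int = TILE):
    """Split an H x W image into non-overlapping tile x tile blocks,
    dropping any partial tiles at the right and bottom edges."""
    h, w = image.shape[:2]
    return [
        image[r : r + tile, c : c + tile]
        for r in range(0, h - tile + 1, tile)
        for c in range(0, w - tile + 1, tile)
    ]

def percent_steatosis(mask: np.ndarray, tissue: np.ndarray) -> float:
    """Fraction of tissue pixels labelled as fat, ignoring background."""
    tissue_pixels = tissue.sum()
    return 100.0 * mask[tissue > 0].sum() / tissue_pixels if tissue_pixels else 0.0

frame = np.random.rand(720, 1080)  # placeholder grayscale frame
tiles = tile_image(frame)
# Placeholder masks standing in for the U-net output and a background filter.
fat_masks = [(t > 0.95).astype(np.uint8) for t in tiles]
tissue_masks = [(t > 0.10).astype(np.uint8) for t in tiles]

per_tile = [percent_steatosis(m, s) for m, s in zip(fat_masks, tissue_masks)]
print(f"{len(tiles)} tiles; mean steatosis = {np.mean(per_tile):.2f}%")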
Evaluation metrics

The capability of the device analysis relative to the cloud analysis was assessed by plotting the two modalities against each other.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by Stanford IRB 61 (eProtocol 51441). Written informed consent for participation was not required for this study, in accordance with national legislation and institutional requirements.
Surveillance of antimalarial drug resistance in China in the 1980s-1990s

Since the successful preparation of the microplates and the medium for field application, the degree and geographical distribution of chloroquine resistance in Plasmodium falciparum, the fluctuation of the degree of resistance of P. falciparum to chloroquine, and the sensitivity of the parasite to commonly used antimalarial drugs were investigated between 1980 and 2003 by the in vitro microtest and the in vivo four-week test recommended by the World Health Organization (WHO). The results indicated that chloroquine-resistant falciparum malaria was present in all eight provinces/autonomous regions endemic for falciparum malaria in China, and that resistance was high and widely distributed in Hainan and Yunnan provinces. When the use of chloroquine was stopped, or chloroquine was administered in decreased quantities, drug resistance gradually decreased. In Hainan and Yunnan, P. falciparum was still highly resistant to chloroquine, amodiaquine and piperaquine, and sensitive to pyronaridine and the artemisinin derivatives, although this sensitivity was gradually decreasing. Based on these results, principles and therapeutic regimens for antimalarial drug use in China were formulated, the use of antimalarials to which resistance had already developed was stopped or reduced, and recommendations were made to use artemisinin derivatives or compound pyronaridine, in order to promote the rational use of antimalarials and strengthen malaria control. The results showed that malaria incidence declined, and the areas endemic for falciparum malaria have been gradually shrinking since the mid-1980s.

reduced gradually, resulting in an increase of malaria cases and posing a major challenge for malaria control. Therefore, drug resistance of falciparum malaria became an urgent issue to tackle in malaria control in China. Before 1978, the WHO standard in vivo tests [10] (seven-day or four-week tests) were used for the assessment of drug resistance, but these tests are time-consuming and costly, the patients need to be hospitalized (and are usually not compliant), and the result of the assessment is affected by the patients' immunity; thus they are not applicable over large areas. In 1979, an in vitro microtechnique was recommended by the WHO to assess the drug resistance of P. falciparum; it was simpler, faster, more accurate and better accepted, and therefore appropriate for use in large-scale surveys [11,12]. The WHO later published updated protocols for antimalarial drug resistance surveillance in vivo (2003) and in vitro (2001), and a genotyping method (2007) [13-15]. In 1978, the authors had successfully established the in vitro continuous culture of erythrocytic-stage P. falciparum, which provided an appropriate environment for the development of the in vitro microtechnique, including the drug-coated plates and the medium. Based on these technical advances [16,17], a large-scale assessment of drug resistance was carried out using both in vivo and in vitro tests between 1980 and 2003. The results of the investigation were highly beneficial in advising on the rational use of antimalarial drugs in the malaria control programme.

Developing an in vitro microtechnique for determining the sensitivity of P. falciparum to antimalarial drugs

To undertake a field survey with the in vitro microtechnique, it is necessary first to prepare a drug-coated plate and a medium, both of which are applicable in the field. The WHO developed a standard chloroquine-coated plate and test kit in 1979 [12].
After a trial application, we found that its medium had to be made into a culture solution in the field using a relatively cumbersome procedure. In addition, it was easily contaminated, its success rate was low, and its effective period was only 48 hours at 4°C. The maximum dosage of chloroquine in a chloroquine-coated plate was 32 pmol/well, which could not completely inhibit the growth and development of P. falciparum, and the plate had only one control well for observing the growth and development of the parasite. The WHO chloroquine-coated microplate had 96 wells for testing 12 cases; therefore each plate had to be put into the incubator repeatedly, which could introduce errors between the earlier and later results. In order to promote the application of the technique and to better understand the drug sensitivity of P. falciparum in China, we started to develop drug-coated plates and field-applicable media in 1979. A home-made plastic plate suitable for malaria parasite growth was selected to make the chloroquine-coated microplate. Each plate had four horizontal rows with 10 wells in each row, 40 wells in total. The size and depth of the wells were the same as in the WHO plate. The chloroquine diphosphate solution was dispensed into the wells with reference to the dosages of the WHO chloroquine-coated microplate, and two more wells were added, one for a higher dosage and one as a blank control [16]. The plates were kept in an incubator for 24 hours, the wells were sealed with plastic adhesive tape after drying, and the drug-coated plate was put into a plastic bag and stored at room temperature. Both laboratory and field tests indicated that the plate was as effective as the WHO standard plate, was suitable for field use in areas with higher chloroquine resistance, and allowed timely observation of the growth and development of the malaria parasites. All of these factors increased the success rate of the test. When stored at room temperature for over a year, the plate remained equally effective [18,19]. Based on the successful preparation of the chloroquine-coated plate, plates coated with other antimalarial drugs were also developed, including piperaquine, pyronaridine, artesunate, dihydroartemisinin, artemether, and arteether. Consequently, we could simultaneously determine the sensitivity of one patient's blood to different antimalarials and overcome the adverse effects on parasite growth caused by repeatedly entering the incubator and sealing with plastic tape. At present, the in vitro microtechnique can be used to determine the sensitivity of the malaria parasite to all commonly used antimalarials. Drawing on the experience with the in vitro continuous culture of P. falciparum, and through repeated tests and improvements, a liquid medium was prepared from RPMI 1640 powder with 15% type AB human serum added, then packed into sterile ampoules with 0.9 ml of the solution in each; after lyophilization, the ampoules were sealed and stored at 4°C. Before use, the medium was dissolved with 0.21% sodium bicarbonate solution (also in ampoules). This medium was plainly packaged and easily prepared and used in field situations, with one ampoule of medium per sample. Comparative tests demonstrated that the above home-made lyophilized medium was better than the WHO standard medium in supporting the growth of malaria parasites, and its effect remained stable for two years at 4°C [16,18,19]. Since the preparation of the lyophilized medium was still troublesome, an ampoule-sealed liquid medium was prepared later.
Once opened, it could be used immediately for testing, was much easier to apply in the field, and remained effective for two months at 4°C. Together with the drug-coated plates, it was well accepted by various institutions in the country [17].

The chloroquine-resistance degree of P. falciparum and its geographical distribution

A large-scale survey on the chloroquine resistance of P. falciparum was then carried out in the 1980s in eight provinces/autonomous regions where falciparum malaria was prevalent, namely Yunnan, Hainan, Guangxi, Guizhou, Henan, Jiangsu, Anhui, and Fujian. Based on the local endemicity of falciparum malaria, each province/autonomous region set pilot points in counties with a high morbidity of falciparum malaria to enroll patients and to perform the in vivo four-week test [10] and the in vitro microtest [11,12]. Between 1981 and 1984, 466 cases from 23 counties were examined. Among them, 395 cases finished the treatment course and were observed four weeks later; 311 out of 321 cases had successful in vitro tests, and 224 cases were examined concurrently and successfully by both the in vivo and in vitro tests. The results indicated that chloroquine resistance already existed in all eight provinces/autonomous regions, especially in Hainan and Yunnan where falciparum malaria was severely prevalent. In Hainan, 106 cases from four counties were examined by the in vivo test; 90 cases were successful, 74 were chloroquine resistant (82.2%), and 35.1% had degree III resistance (RIII). Meanwhile, 123 cases from six counties were examined by the in vitro microtechnique; 120 cases were successful (86 cases were concurrently examined by the in vivo test), and 113 were chloroquine resistant (94.2%); resistance was more severe in the southwestern mountainous area [8,9,20]. In Yunnan, 178 cases from four counties were examined by the in vivo four-week test; 155 were successful and 115 (74.2%) showed chloroquine resistance. Among 93 cases from four counties examined by the in vitro microtest, 88 cases were successful (81 cases were also examined by the in vivo test), 75 cases (85.2%) were chloroquine resistant, and the resistance was severe in the border area of southern Yunnan [6,7]. In Guangxi, of the 46 cases examined by the in vivo test, 36 were successful and only 12 cases (33.3%) were RI; 14 of the successful cases were sensitive. Among 28 cases examined by the in vitro microtest, 27 were successful and 21 (77.8%) were chloroquine resistant. In Guizhou, 39 cases were examined by the in vivo test; 31 were successful and two (6.5%) were RI. Among the 34 examined by the in vitro microtest, 33 were successful and 12 cases (36.4%) showed resistance. In Anhui, of the 25 cases examined by the in vivo four-week test, 20 were successful, four were RI, and 16 were RII. All 22 cases examined by the in vitro microtest showed resistance. In Henan, of the 62 cases examined by the in vivo four-week test, 52 were successful and three were RI. Of the nine cases examined by the in vitro test, three (33.3%) were chloroquine resistant. In Jiangsu, eight cases examined by the in vivo four-week test were sensitive to chloroquine, but five out of 12 cases (41.7%) examined by the in vitro microtest showed resistance. In Fujian, one of the two cases examined by the in vivo four-week test was RI.
The results indicated that falciparum malaria highly resistant to chloroquine was present in the Hainan and Yunnan provinces, and that falciparum malaria in southern Guangxi and central Anhui also exhibited obvious chloroquine resistance, although at a lower level than in Hainan and Yunnan, while the chloroquine resistance in southern Henan, Guizhou, and western Jiangsu was at its initial stage [21] (see Table 1). The above results also indicated a correlation between the in vivo and in vitro tests. All 135 resistant cases determined by the in vivo test were also resistant by the in vitro test, and all 40 sensitive cases determined by the in vitro test also proved to be sensitive by the in vivo test. However, among the 95 sensitive cases determined by the in vivo test, 55 cases showed resistance at different levels by the in vitro test, indicating that some cases could not be detected by the in vivo test; the resistance rate by the in vitro test would therefore often be higher than that by the in vivo test. At the same time, it could be seen that the higher the resistance rate by the in vivo test, the higher the drug concentration required for complete inhibition of schizont maturation by the in vitro test [16].

Investigation on the fluctuation of the chloroquine resistance of P. falciparum

After chloroquine-resistant falciparum malaria was found in Yunnan and Hainan in 1973 and 1974, respectively, resistant falciparum malaria began spreading rapidly and the degree of resistance kept rising, expanding from a few early individual resistant cases to wide distribution in just five years. In 1978, chloroquine-resistant P. falciparum was found in all 11 counties endemic for falciparum malaria in Hainan. The Hainan provincial government therefore issued a document to stop the use of chloroquine for malaria control from 1979 and to replace it with piperaquine. In 1983, it was confirmed that chloroquine-resistant falciparum malaria had spread throughout the falciparum malaria endemic areas in Yunnan. Since then, chloroquine was rarely used, and the drugs mainly used for the control of falciparum malaria were artemisinins, pyronaridine, and the No. 3 antimalarial tablet (piperaquine + sulfadoxine). In order to understand the fluctuation of the chloroquine resistance of P. falciparum after stopping or reducing the use of chloroquine, the sensitivity of P. falciparum to chloroquine was investigated at one- to three-year intervals in the Hainan and Yunnan provinces.

The results of the assessment in Hainan

Variations in the chloroquine-resistance degree: The results of the in vitro microtest demonstrated that the mean drug concentration for complete inhibition of schizont maturation decreased from 10.4 ± 7.1 pmol/μl blood in 1981 to 1.6 ± 1.5 pmol/μl blood in 1997, a reduction of 84.4% (P < 0.01). This indicated that in most cases, as the time since stopping the use of chloroquine lengthened, the drug concentration necessary for complete inhibition of schizont maturation decreased gradually from the previously higher concentrations (>6.4 pmol/μl) to lower concentrations (<1.6 pmol/μl). During 1981-1997, the proportion of cases requiring the higher concentration decreased from 83.3% to 6.7%, a reduction of 92.0% (P < 0.01), while the proportion fully inhibited at the lower concentration increased from 4.2% to 73.3%, an increase of 94.3% (P < 0.01).
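The percentage changes reported above can be checked directly from the survey means. The following minimal Python sketch hard-codes the published Hainan figures for illustration; note that the reductions are computed relative to the initial value, whereas the reported "increase of 94.3%" is computed relative to the final proportion.

```python
def percent_change(old: float, new: float) -> float:
    """Relative change as a percentage of the starting value."""
    return (old - new) / old * 100.0

# Hainan, in vitro microtest: mean drug concentration (pmol/ul blood)
# needed for complete inhibition of schizont maturation.
print(f"degree: {percent_change(10.4, 1.6):.1f}% reduction")  # ~84.6 (paper: 84.4)

# Share of cases needing >6.4 pmol/ul (high-resistance cases).
print(f"high:   {percent_change(83.3, 6.7):.1f}% reduction")  # 92.0

# Share of cases fully inhibited at <1.6 pmol/ul: the paper's
# "increase of 94.3%" is relative to the *final* value, i.e.
# (73.3 - 4.2) / 73.3, not the initial one.
print(f"low:    {(73.3 - 4.2) / 73.3 * 100:.1f}% increase")   # 94.3
```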
The results of the assessment by the in vivo four-week test demonstrated that the asexual parasite clearance time in the blood decreased from 72 hours in 1981 to 50.7 hours in 1997, and the percentage of RIII cases among all resistant cases decreased from 53.1% in 1981 to 14.3% in 1997 (P < 0.01). In 2001, 82 cases from the Yaliang Township of Sanya City were examined by the in vitro microtest; the mean drug concentration for complete inhibition of schizont maturation was 3.56 pmol/μl blood, and in 12.5% of cases it was >6.4 pmol/μl blood. In 2003, 16 cases from the Fubao Township of Ledong County were examined by the in vivo four-week test; the asexual parasite clearance time in the blood averaged 56.9 hours and the percentage of RIII cases was 20%, indicating that the degree of chloroquine resistance of P. falciparum decreased gradually after the use of chloroquine was stopped [22][23][24][25][26].

The results of the assessment in Yunnan

Variations in the chloroquine-resistance rate: A total of 234 cases from the Mengla County were examined seven times successively by the in vitro microtest; the resistance rate decreased from 97.4% in 1981 to 77.8% in 1999, a reduction of 20.1% (P < 0.01). A total of 27 cases from the Jinghong County were examined by the in vitro microtest; the resistance rate was 70.4%.

Variations in the chloroquine-resistance degree: The mean drug concentration for complete inhibition of schizont maturation decreased from 17.2 ± 12.6 pmol/μl blood in 1981 to 4.4 ± 3.1 pmol/μl blood in 1999, a reduction of 74.4% (P < 0.01). The percentage of cases with complete inhibition of schizont maturation at drug concentrations >6.4 pmol/μl blood decreased from 58.9% in 1981 to 19.6% in 1999, a reduction of 66.7% (P < 0.01). In 2002, 27 cases from the Jinghong County were examined; the mean drug concentration for complete inhibition of schizont maturation was 4.0 ± 3.3 pmol/μl blood, and in 16.6% of cases complete inhibition required a drug concentration >6.4 pmol/μl blood, indicating that the chloroquine-resistance rate and degree of P. falciparum fell gradually after reduced use of chloroquine [26] (see Table 2).

Present situation of the sensitivity of P. falciparum to antimalarial drugs

Due to the widespread chloroquine resistance of P. falciparum, the use of other antimalarial drugs increased from 1980. In order to understand the sensitivity of the parasite to commonly used antimalarials and to guide the rational administration of the drugs, the sensitivity of P. falciparum to chloroquine, amodiaquine, piperaquine, mefloquine, pyronaridine, artesunate, arteether, dihydroartemisinin, and quinine was examined five times in Hainan and three times in Yunnan between 1984 and 2002. The results indicated that P. falciparum in the two provinces had developed resistance to seven of these antimalarial drugs, the exceptions being mefloquine and quinine, and displayed relatively high resistance to chloroquine, amodiaquine, and piperaquine. However, the sensitivity of P. falciparum to chloroquine was recovering, while the rate and degree of resistance to piperaquine were on the rise.
In the Hainan Province, a total of 216 cases were examined by the in vitro test five times successively; the resistance rate increased from 15.8% in 1985 to 72.9% in 1997, and the mean drug concentration for complete inhibition of schizont maturation increased from 9.7 pmol/μl in 1985 to 47.9 pmol/μl in 1997, almost five times higher than in 1985. A total of 154 cases were examined by the in vivo test three times successively; the resistance rate increased from 17.2% in 1984 to 50.0% in 1997. No RIII cases were found in 1984, but RIII cases accounted for 71.4% of the total resistant cases in 1997. In the Yunnan Province, 126 cases were examined by the in vitro test three times successively; the resistance rate increased from 21.3% in 1990 to 73.0% in 1993, and the mean drug concentration for complete inhibition of schizont maturation increased from 19.0 pmol/μl in 1990 to 38.0 pmol/μl in 1993, two to three times higher than in 1990. The results of clinical treatment indicated that 50% of falciparum malaria cases in Hainan developed resistance to piperaquine (see Table 3), and that the sensitivity to pyronaridine and artemisinin derivatives was falling gradually. Artemether-resistant cases were even found through clinical treatment in Yunnan [27][28][29][30][31][32]. In Hainan, the mean concentrations of pyronaridine and artesunate for complete inhibition of schizont maturation in vitro increased by four to eight times and two to eight times, respectively [33][34][35][36][37] (see Table 4).

Rational use of antimalarial drugs in China

After completion of the survey on the degree and geographical distribution of chloroquine-resistant falciparum malaria, the results and suggestions for improving local malaria control were reported to the Ministry of Health and the local governments, recommending that surveillance of drug-resistant malaria be listed as one of the important tasks in malaria control. For those provinces/autonomous regions with only sporadic falciparum malaria cases, it was suggested that mosquito control measures be strengthened to prevent possible focal transmission of falciparum malaria, with particular attention to migrants in order to prevent the spread of drug resistance more effectively. In Hainan and Yunnan, where critical drug resistance persisted, the use of chloroquine for falciparum malaria control was to be stopped, and the therapeutic effect of the substitute drugs monitored, in order to detect and contain any new development of drug resistance in a timely fashion. Owing to active implementation of the control program, the number of provinces/autonomous regions endemic for falciparum malaria decreased yearly, and after 1998, falciparum malaria remained prevalent only in the Hainan and Yunnan provinces. In order to cure malaria patients more effectively and to avoid or delay the development of drug resistance in P. falciparum, principles and therapeutic regimens for the application of antimalarial drugs in China were formulated in 2000 on the basis of the drug sensitivity of the parasite found through the surveys. The drugs were divided into first-line, second-line, and third-line drugs to ensure a reasonable and standardized application. Chloroquine and piperaquine were recommended as the first-line drugs for endemic areas of vivax malaria and for those areas of falciparum malaria where the parasite was still sensitive to chloroquine and piperaquine.
Artemisinin derivatives and pyronaridine were recommended as the second-line drugs for areas of falciparum malaria with moderate or high resistance to chloroquine and piperaquine. Artemisinin derivatives or pyronaridine in combination with other antimalarials were recommended as the third-line drugs for those areas where the therapeutic effect of the second-line drugs was limited [38][39][40][41][42][43][44][45][46]. Considering that P. falciparum in China had generally developed resistance to chloroquine, piperaquine, sulfadoxine-pyrimethamine, and other antimalarials, artemisinin derivatives and pyronaridine came to be widely used as first-line drugs in endemic areas of falciparum malaria. Owing to the rational and standardized use of antimalarials, the therapeutic effect was greatly improved: malaria cases decreased considerably from 903,802 in 1984 to 7,855 in 2010 (including 1,258 falciparum malaria cases), and the number of counties endemic for falciparum malaria decreased from 74 in eight provinces in 1984 to just 17, all in the Yunnan Province, in 2010 [47,48].

Conclusion

Drug resistance of P. falciparum is a global problem, especially in Southeast Asia. In some neighboring countries, such as Thailand, Cambodia, Myanmar, India, and Vietnam, P. falciparum has developed extensive resistance to chloroquine, sulfadoxine-pyrimethamine, mefloquine, and other drugs. On the Thailand-Cambodia border and in Myanmar, high-level multidrug resistance has appeared [40,49], and Myanmar and Thailand have reported that P. vivax has developed resistance to chloroquine and primaquine, respectively [4,5]. Therefore, antimalarials have been divided into first-line and second-line drugs for rational application in these countries [40]. Chloroquine, piperaquine, pyronaridine, and artemisinin derivatives are the antimalarials commonly used in China. Chloroquine has been used extensively for more than 50 years, while the others are relatively new antimalarials that have been in use for over 20 years. The results of the survey indicated a certain variability in the resistance of P. falciparum to chloroquine: the rate and degree of resistance to chloroquine increased from the appearance of the first chloroquine-resistant falciparum malaria cases until the early stage of reducing or stopping the use of chloroquine. Investigations indicated that once P. falciparum developed resistance to chloroquine, in places where mosquito vectors with strong transmission capacity existed (such as Anopheles dirus and An. minimus) and human population movement was frequent, resistant falciparum malaria would spread rapidly. Once resistance to chloroquine appeared, if the use of chloroquine could be stopped in a timely manner, the parasite could gradually regain its sensitivity to chloroquine in the absence of drug pressure, but the recovery would be slow. In the Hainan Province, 18 years after stopping the use of chloroquine, about 20-60% of cases still showed some resistance to chloroquine. The recovery of chloroquine sensitivity first manifested as a decline in the degree of resistance, followed by a reduction in the resistance rate once the resistance level had dropped below a certain threshold. In the first 10 years after stopping the use of chloroquine, the degree of resistance fell faster while the resistance rate hardly changed; after a further 10 years, the resistance rate fell faster as well.
During the period of extensive use of piperaquine, pyronaridine, artemisinin derivatives, and the No. 3 antimalarial tablet, there were many cases taking insufficient dosages due to low compliance, which was also one of the major reasons for the decrease in sensitivity to those drugs. The use of artemisinin derivatives and pyronaridine in endemic areas of vivax malaria was not recommended, whether for treatment, prophylaxis, or presumptive treatment of individual cases. These drugs remained clinically sensitive and effective, yet the sensitivity of P. falciparum to them was on a gradual downward trend. In order to delay the development of resistance and protect the antimalarial drugs, and in line with the WHO's advocacy of artemisinin-based combination therapy, it was suggested […]
Phylogenetic Analysis of Acacia nilotica and Coffea arabica Using Protein Sequences from the Chloroplast RBCL Gene

This work analysed protein sequences of the large subunit of ribulose-1,5-bisphosphate carboxylase encoded by the chloroplast rbcL gene. The results showed that A. nilotica and C. arabica are polyphyletic, and that the subspecies A. n. subalata and A. n. hemispherica formed a sister group, as did the species C. arabica, C. salvatrix, and C. racemosa. The chloroplast-encoded rbcL gene, which encodes the large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), is a valuable marker for investigating the evolutionary relationships between plant species. In this study, we conducted a phylogenetic analysis of two economically and ecologically significant plants, Acacia nilotica and Coffea arabica, using protein sequences derived from the chloroplast rbcL gene. A multiple sequence alignment of the rbcL protein sequences was performed, and a maximum likelihood phylogenetic tree was constructed using the RAxML algorithm. The tree was rooted using a Thiotrichales bacterium as an outgroup sequence to establish the evolutionary context. Branch support values were calculated to assess the statistical robustness of the inferred relationships. The results of the phylogenetic analysis revealed the evolutionary relationship between Acacia nilotica and Coffea arabica within the context of other plant taxa. The phylogenetic tree provided insights into their shared ancestry, divergence time, and taxonomic placement within the larger plant kingdom. We identified conserved regions in the rbcL protein sequences, reflecting functional importance, as well as divergent regions, suggesting potential adaptive evolution. The significance of our study lies in understanding the evolutionary history and taxonomic position of these economically important plant species. This knowledge has implications for biodiversity conservation, crop improvement, and ecosystem management. The study also highlights the utility of the rbcL gene as a valuable tool for investigating plant phylogenetics. In conclusion, our phylogenetic analysis using the rbcL protein sequences provides valuable insights into the evolutionary relationship between Acacia nilotica and Coffea arabica. This research contributes to our understanding of plant evolution and has practical applications in various fields, from agriculture to conservation.

INTRODUCTION

"In addition to the nuclear (nDNA) and mitochondrial (mtDNA) genomes, plants have an additional genome, the chloroplast genome (cpDNA), which is not the case in animals. Because of its complexity and repetitive properties, the nuclear genome is used in systematic botany less frequently" [1]. "The mitochondrial genome is used at the species level due to the rapid changes in its structure, size, configuration, and gene order. On the other hand, the chloroplast genome is well suited for evolutionary and phylogenetic studies above and at the species level, because cpDNA is a relatively abundant component of plants' total DNA, thus facilitating extraction and analysis. Secondly, it contains primarily single-copy genes. Thirdly, it has a conservative rate of nucleotide substitution; and fourthly, an extensive background of molecular information on the chloroplast genome is available" [2]. "Therefore, data from cpDNA genes are used in phylogenetic reconstructions in plant systematics. The plastid-encoded rbcL gene is the most common gene used to provide sequence data for plant phylogenetic analyses" [3,4].
"This single-copy gene is approximately 1430 base pairs in length, is free from length mutations except at the far 3' end, and has a fairly conservative rate of evolution. The function of the rbcL gene is to code for the large subunit of ribulose 1, 5 bisphosphate carboxylase/oxygenase (RUBISCO or RuBPCase)" [5]. "The enzyme ribulose-1,5-bisphosphate carboxylase (Rubisco) is responsible for the fixation of carbon dioxide in the Calvin cycle" [6]. "The holoenzyme is formed by a 16-mer structure that includes eight identical chloroplastencoded large subunit polypeptides and eight small subunit polypeptides" [6]. "In green algae and in land plants, the genetic information for the small subunit is encoded in the nuclear genome, typically in a small multigene family" [7,8]. "Owing to its central importance in photosynthetic carbon fixation and owing to the early technical advantages associated with the study of the chloroplast genome, the molecular characterization of the rbcL gene was a major goal of plant molecular biology in the 1970s" [6]. Cloning and determining the sequence of the rbcL gene was first accomplished by [9] and by [10] working with maize (Zea mays). "The rbcL gene of chloroplast contains high substitution rates within the species and is emerging as a potential candidate for study of plant systematics and evolution" [11]. "It has long been evident that molecular sequences contain useful information about evolutionary history" [12]. "The rbcL gene has ideal size, a high rate of substitution, a large proportion of variation at nucleic acid and protein level at first and second codon position, a low transition/transversion ratio, and the presence of mutationally conserved sectors. These features of rbcL gene are exploited to resolve genus and species-level relationships. Polymorphism of chloroplast DNA especially rpoB, rbcL, and intergenic rpocL, rpoC regions has been used to study the phylogeny of various plants" [11]. The sequence data of the rbcL gene are widely used in the reconstruction of phylogenies throughout the seed plants and flowering plants. "The chloroplast-encoded rbcL gene, which encodes the large subunit of ribulose-1,5bisphosphate carboxylase/oxygenase (Rubisco), is a widely used marker in plant phylogenetic studies. Rubisco is a critical enzyme involved in carbon fixation during photosynthesis, making it essential for plant growth and survival. The rbcL gene has a relatively slow evolutionary rate and is highly conserved across plant taxa, making it suitable for investigating evolutionary relationships between distantly related species" [13]. Acacia nilotica and Coffea arabica are two economically and ecologically significant plant species belonging to different families. Acacia nilotica, commonly known as the Egyptian thorn or gum Arabic tree, is a multipurpose tree species with a wide distribution across Africa, Asia, and the Middle East. It plays a crucial role in various ecological processes, such as soil improvement, biodiversity conservation, and as a source of valuable products like gum Arabic. Coffea arabica, known as Arabica coffee, is one of the most popular and economically important coffee species, accounting for a significant portion of global coffee production. It is prized for its flavor and quality, making it a staple in the global coffee market [14]. Studying the phylogenetic relationship between Acacia nilotica and Coffea arabica using the rbcL gene has several important implications. 
Firstly, the phylogenetic analysis will shed light on the evolutionary history and ancestry of Acacia nilotica and Coffea arabica. By elucidating their relationship to other plant species, we can gain insights into their diversification, speciation events, and biogeographical patterns. Secondly, determining the evolutionary position of Acacia nilotica and Coffea arabica within the plant kingdom is crucial for accurate taxonomic classification. This information contributes to our understanding of plant diversity and assists in refining their systematic placement. Finally, Acacia nilotica and Coffea arabica are valuable genetic resources with ecological and economic significance. Understanding their phylogenetic relationship aids in conservation efforts, enabling the identification of related species that may also require protection and preservation. Nowadays, phylogenetic analysis not only complements but often outperforms simple similarity searches and transition/transversion comparisons of protein sequences when dealing with sequence identity. The Molecular Evolutionary Genetics Analysis (MEGA) software provided a framework for the qualified identification of the protein sequences of Acacia nilotica and Coffea arabica and their interspecies relationships. The phylogenetic analysis of Acacia nilotica and Coffea arabica using the rbcL gene sequences will contribute to the existing body of knowledge on plant evolution and diversification. The results will provide valuable information for researchers, plant taxonomists, conservationists, and agriculturists. Additionally, the study may have implications for ecosystem management, agroforestry practices, and the sustainable utilization of these plant species. Understanding the evolutionary relationship between these economically important plants can lead to better strategies for their conservation and utilization, ultimately benefiting both human society and the natural environment [15]. The objective of this study was to evaluate the generic and species-level variation and the phylogenetic relationships of Acacia and Coffea plants using the chloroplast rbcL gene sequences available from GenBank, and to analyze whether they are monophyletic, paraphyletic, or polyphyletic.

MATERIALS AND METHODS

Sequence Retrieval

The protein sequences of the chloroplast rbcL gene of Acacia nilotica and Coffea arabica were assessed to determine generic and interspecific differences. The entire coding region of the rbcL sequences of A. nilotica and C. arabica was retrieved from GenBank, and the BLAST search showed 95% sequence similarity with multiple plant species. In this process, a sequence is assigned on the basis of its similarity to a set of reference (identified) sequences [16]. The related sequences were retrieved from the GenBank database for the phylogenetic analysis of the studied specimens. Multiple sequence alignment was done using Clustal W, which is included in the MEGA software. Tree analyses were conducted using maximum likelihood and neighbor-joining methods.

Sequence Analysis

The data analysis was done for the plant species Acacia nilotica and Coffea arabica, whose sequences are available in GenBank, to find the interspecies variation. Multiple sequence alignment was performed using MUSCLE, offline software that performs optimal sequence alignment. Alignments were not complicated by the occurrence of indels, and indels were not included in the data analysis [17]. Aligned sequences were edited using the software JALVIEW.
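As a rough illustration of this retrieval-and-alignment step, the sketch below uses Biopython's Entrez interface together with an external MUSCLE binary. The e-mail address and accession numbers are placeholders, not the ones used in the study, and the MUSCLE flags assume version 5 (version 3 used -in/-out instead).

```python
import subprocess
from Bio import Entrez, SeqIO

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

# Hypothetical rbcL protein accessions for the studied taxa (not from the paper).
accessions = ["ABC12345.1", "DEF67890.1"]

# Fetch the rbcL protein records from GenBank and save them as FASTA.
handle = Entrez.efetch(db="protein", id=",".join(accessions),
                       rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()
SeqIO.write(records, "rbcl_unaligned.fasta", "fasta")

# Align with MUSCLE v5; the aligned file can then be inspected in Jalview.
subprocess.run(["muscle", "-align", "rbcl_unaligned.fasta",
                "-output", "rbcl_aligned.fasta"], check=True)
```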
Phylogenetic Analysis Using Maximum Likelihood Estimation and Neighbor-Joining

The basic sequence statistics, including amino acid frequencies, the transition/transversion (ns/nv) ratio, and variability in different regions of the sequences, were computed with Molecular Evolutionary Genetics Analysis (MEGA) [18]. The sequence data were analyzed by Maximum Likelihood Estimation (MLE) [19] using MEGA version X. Distances were calculated using the neighbour-joining method. Bootstrap analysis was performed with NJplot. The various clades were determined by MEGA.

Multiple sequence alignment

The protein sequences of the chloroplast rbcL gene from Acacia nilotica and Coffea arabica, along with other related species, were aligned using MUSCLE (Multiple Sequence Comparison by Log-Expectation), a progressive algorithm that uses a distance-based approach to align sequences. The alignment included a total of 100 sequences representing various plant species, and only those plant species with a percentage identity higher than 95% were selected [20].

Phylogenetic analysis

Table 2 shows part of the data set used to construct the phylogenetic trees for Acacia nilotica and Coffea arabica. The data are the aligned sequences of the large subunit of the ribulose-1,5-bisphosphate carboxylase/oxygenase (rbcL) gene from 43 plant species of the genera Acacia and Coffea and a Thiotrichales bacterium (outgroup), in MEGA format. The rbcL gene is 1430 base pairs in length. The maximum likelihood (ML) phylogenetic tree was generated using the RAxML (Randomized Accelerated Maximum Likelihood) algorithm, one of the most widely used methods for inferring phylogenetic trees. RAxML employs a statistical model to estimate the likelihood of the observed data given a particular tree topology and branch lengths. It searches for the tree that maximizes the likelihood score, representing the most probable evolutionary history for the aligned rbcL protein sequences. The tree was rooted using an appropriate outgroup sequence to establish the evolutionary relationships. Phylogenetic trees generated from the 5'-3' region of the rbcL sequences of 13 plants with the outgroup revealed that the two plant species are distantly related to each other (Figs. 1 and 2). This is because Acacia nilotica has undergone several speciation events, whereas Coffea arabica has not undergone speciation since the time the two species shared a common ancestor. Acacia nilotica has 4 clades while Coffea arabica has only 2 clades. The numbers above the branches correspond to bootstrap support. The branches in the maximum likelihood tree were evaluated for statistical support using bootstrap analysis. Bootstrap values are expressed as percentages and indicate the proportion of times that a particular branch appears in the phylogenetic trees generated from resampled datasets. Higher bootstrap values (>70%) provide stronger support for the corresponding branches, suggesting greater confidence in the inferred relationships. A Thiotrichales bacterium was taken as the outgroup and used to root the tree. The phylogenetic tree is based on the protein sequence of the rbcL gene. The numbers at the branches are confidence values based on Felsenstein's bootstrap method, with B = 1000 bootstrap replications. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown in Fig. 2 next to the branches. The scale bar represents the branch length, measured in the number of substitutions per site.
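The neighbour-joining and bootstrap steps described above can be sketched with Biopython's tree-construction utilities. This is an illustrative reimplementation, not the MEGA/RAxML workflow actually used in the study; the alignment file name and outgroup label are assumptions carried over from the previous sketch.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

# Load the aligned rbcL protein sequences (file name assumed).
alignment = AlignIO.read("rbcl_aligned.fasta", "fasta")

# Pairwise distances under a protein substitution model, then an NJ tree.
calculator = DistanceCalculator("blosum62")
constructor = DistanceTreeConstructor(calculator, method="nj")
nj_tree = constructor.build_tree(alignment)

# 1000 bootstrap pseudo-replicates, matching the study; get_support() maps
# the share of replicate trees containing each clade onto the branches.
replicates = list(bootstrap_trees(alignment, 1000, constructor))
supported_tree = get_support(nj_tree, replicates)

# Root on the outgroup (taxon label assumed) and print an ASCII rendering.
supported_tree.root_with_outgroup("Thiotrichales_bacterium")
Phylo.draw_ascii(supported_tree)
```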
The present study aimed to investigate the phylogenetic relationship between Acacia nilotica and Coffea arabica using protein sequences derived from the chloroplast rbcL gene. The rbcL gene encodes the large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), an essential enzyme involved in photosynthesis. By analyzing this gene, we sought to gain insights into the evolutionary history and potential genetic relatedness between these two plant species. Our phylogenetic analysis produced a robust tree that clustered the various species based on their rbcL protein sequences. Acacia nilotica and Coffea arabica were found to form distinct clades, reflecting their evolutionary divergence. This result suggests that despite some shared physiological and ecological traits, these two species have followed separate evolutionary paths over time (Figs. 1 and 2). Interestingly, our analysis also showed that Acacia nilotica grouped together with other Acacia species, forming a monophyletic clade. This finding supports the notion that Acacia species share a common ancestor and have experienced relatively recent speciation events. On the other hand, Coffea arabica was found to be closely related to other Coffea species, forming a separate monophyletic clade within the tree. This result indicates a close evolutionary relationship among coffee species and reinforces the idea of a shared evolutionary history among members of the Coffea genus [21,22]. Acacia nilotica and Coffea arabica were placed within the maximum likelihood phylogenetic tree, which shows the positions of these two species relative to the other taxa in the dataset. The branching pattern and the lengths of the branches reflect the evolutionary distances and relationships among the species. The placement of Acacia nilotica and Coffea arabica in distinct clades may suggest that these species diverged from a common ancestor at a relatively distant point in evolutionary history. All the trees inferred from the partial rbcL gene sequences of Acacia nilotica, Coffea arabica, and related taxa demonstrated a distinct lineage for each studied specimen; they could thus distinguish the species A. nilotica and C. arabica and show their relatedness as descendants of a common ancestor. The sequences generated from rbcL also indicated that Acacia nilotica and Coffea arabica are polyphyletic. The evolutionary analysis based on rbcL showed that Acacia nilotica ssp. subalata and Acacia nilotica ssp. hemispherica are closely related, as they form a sister group (Fig. 2).

CONCLUSION

The study found that both Acacia nilotica and Coffea arabica share a common evolutionary ancestor, as both possess the rbcL gene in their chloroplasts. This indicates that they are descendants of a common ancestor and belong to the same larger group, likely a family or order, within the plant kingdom. By analyzing the genetic differences between the two species, we were able to estimate the approximate divergence time between Acacia nilotica and Coffea arabica. This information provides insights into the timing of their evolutionary split, which could be used to infer historical biogeography and speciation events. The phylogenetic analysis showed the placement of Acacia nilotica and Coffea arabica within the broader evolutionary tree of plant species. The tree analysis showed that Acacia nilotica and Coffea arabica are polyphyletic: they share a common ancestor, though they are distantly related.
Acacia nilotica exhibits higher bootstrap values than Coffea arabica, indicating stronger statistical support for the evolutionary relationships inferred within and between the two genera. This information is valuable for understanding their evolutionary history and relationships with other plant taxa. While our phylogenetic analysis provides valuable insights into the relationship between Acacia nilotica and Coffea arabica, it is essential to acknowledge some limitations. Firstly, the rbcL gene represents only one part of the chloroplast genome, and additional molecular markers or complete chloroplast genomes could provide a more comprehensive picture of their evolutionary history. Secondly, the limited sampling of species in this study might not fully capture the broader diversity and complexity of the evolutionary relationships among Acacia and Coffea species. Furthermore, it is worth considering that other factors, such as hybridization, introgression, and ecological interactions, could have influenced the observed phylogenetic patterns. Future studies could incorporate additional data and methodologies to address these complexities and gain a more nuanced understanding of the evolutionary dynamics between Acacia nilotica and Coffea arabica. Finally, replication of the study is necessary to strengthen and confirm the findings.
Finite element modeling of the contact interaction of the acetabular component and acetabulum

Degenerative-dystrophic lesions of large joints are among the most common diseases. Up to now, the problem of hip arthroplasty with dysplastic coxarthrosis has not been completely resolved. Of particular interest is the modeling of the contact between the acetabular cavity and the femoral neck. This paper presents a verification algorithm for the acetabular component model; in particular, contact displacement and stress fields are considered. The main idea of this method is a comparison between the analytical solution and the solution obtained by means of the finite element method. This comparison is used to verify the contact elements and the contact settings between the contact and target surfaces. To this end, the nonstationary contact problem with moving boundary conditions was solved analytically. The verification algorithm for this problem is described below.

Introduction

Degenerative-dystrophic lesions of large joints are among the most common diseases. The main method of treating patients with dysplastic coxarthrosis remains hip replacement; reconstructive operational interventions are effective only in the early stages of the disease and give a positive result for only 5-10 years [1][2][3][4][5]. Up to now, the problem of hip arthroplasty with dysplastic coxarthrosis has not been completely resolved. Nowadays, computer-aided methods are a popular way to address this problem [6][7][8]. Of particular interest is the modeling of the contact between the acetabular cavity and the femoral neck. There are a number of methods to simulate the contact [9][10][11]. Moreover, changes in the mechanical parameters of the bone can influence the quality of the surgery [12,13]. To solve this problem it is necessary to obtain a reference solution, which can then be compared with solutions obtained through the finite element method (FEM). The general algorithm for constructing this solution is presented below. The solutions of the nonstationary contact problem given below are suitable for stamps of arbitrary smooth shape. Planar nonstationary problems for an elastic half-space and absolutely rigid stamps are considered [11,14]. We use a Cartesian rectangular coordinate system. The axis Ox is directed along the unperturbed boundary of the half-space z = 0, and the Oz axis is directed into the interior of the half-space. At the initial instant of time, the half-space is in an unperturbed state, and the impactor bounded […]

Materials and Methods

The mathematical formulation of these problems includes the equations of motion written in potentials and a function w(x, t) describing the moving boundary of the stamp (see Figure 2). The properties of the contact assignment in the finite element model of the acetabular component were selected in accordance with the solutions obtained above. A hemisphere was taken as the form of the stamp (modelling the femoral neck). This stamp shape is smooth, and the function describing the law of indentation of the sphere is homogeneous of order m = 2. In the case considered, however, there is no need to study the behavior of the whole model. The particular part of the acetabular component where the contact characteristics were studied is presented in Figure 3. The computational mesh of the acetabular component model (see Figure 3) has a hexagonal structure, in order to obtain results most consistent with the solution given above.
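As a hedged illustration of the m = 2 homogeneity mentioned above, for a hemispherical stamp of radius R the profile near the apex reduces to the standard parabolic approximation; the notation below (w(t) for the indentation depth, a(t) for the contact half-width) is ours, not taken from the paper.

```latex
% Profile of a hemispherical stamp of radius R near its apex
% (homogeneous of order m = 2 in x, as stated in the text):
\[
  f(x) \;=\; R - \sqrt{R^{2} - x^{2}} \;\approx\; \frac{x^{2}}{2R},
  \qquad |x| \ll R .
\]
% A rigid stamp pressed to depth w(t) then prescribes, inside the
% contact region |x| \le a(t), the normal boundary displacement
\[
  u_{z}(x, t) \;=\; w(t) - \frac{x^{2}}{2R}.
\]
```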
The mesh has a regular structure, and no problems with mismatched linear dimensions arose during the study. The results of the calculation are in good agreement with the analytical solution given above as long as the indentation of the stamp (femoral neck) into the acetabular component does not exceed 5% of the stamp radius (the stamp radius is introduced here as a dimensionless parameter; the mechanism of the transition to dimensionless parameters is given above). With further indentation, edge effects due to the complex geometry of the model significantly reduce the agreement with the analytical solution.

Discussion

In this paper, the verification algorithm for the computational model of the acetabular component, with respect to the contact interaction, is considered. Analytical dependences for the distribution of contact stresses inside the contact region and of displacements outside the contact region are obtained. This approach is also suitable for other problems where a very careful analysis is required in assessing contact stresses. Because the contact interaction problem is solved in a fairly general form, its solutions can be used for bodies with strongly pronounced anisotropy.

Conclusion

Often, the settings for contact interaction in FEM programs pursue the goal of numerical convergence of the solution obtained. It is by no means always obvious to what exactly the solutions obtained should converge, and in the pursuit of numerical convergence, the physical aspects of contact interaction are often lost. In this paper, a reference solution is presented, in accordance with which the corresponding contact settings are selected. Numerical convergence is achieved by constructing a regular, sufficiently dense hexagonal mesh. In many applied problems this is often difficult to achieve. However, one must understand that the distribution of contact displacements and stresses is very sensitive to the choice of contact elements and the settings of their interaction.
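As a closing illustration, the verification step discussed above reduces to a pointwise comparison of the FEM field with the analytical reference. The minimal Python sketch below uses hypothetical contact-pressure samples; note that the 5% figure in the text refers to indentation depth relative to the stamp radius, so the acceptance tolerance used here is an assumption of ours, not a value from the paper.

```python
import numpy as np

def max_relative_error(fem: np.ndarray, reference: np.ndarray) -> float:
    """Largest pointwise relative deviation of the FEM field from the
    analytical reference, evaluated at common sample points."""
    return float(np.max(np.abs(fem - reference) / np.abs(reference)))

# Hypothetical contact-pressure samples along the contact region:
# one array exported from the FEM model, one evaluated from the
# analytical solution at the same dimensionless coordinates.
p_fem = np.array([0.98, 0.91, 0.77, 0.52, 0.21])
p_analytic = np.array([1.00, 0.93, 0.79, 0.54, 0.22])

err = max_relative_error(p_fem, p_analytic)
print(f"max relative error: {err:.1%}")

# Assumed acceptance tolerance for the contact settings; deeper
# indentation brings in edge effects and the agreement degrades.
assert err < 0.05, "contact settings need re-verification"
```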
Who deserves exceptions in times of crisis? A comparison of policy responses to mitigate negative consequences for unemployed people and immigrants during the COVID-19 pandemic

The boundaries for whom the welfare state should protect during times of crisis are not necessarily obvious. Deservingness studies have identified unemployed people and immigrants as groups perceived as 'less deserving' of welfare state support than other groups in need during 'normal' times. These two groups have in recent years been subject to more conditional requirements and an incentivizing rationale. In this article, we compare the policy responses for 1) unemployed people and 2) immigrants during the COVID-19 pandemic in Norway from 2020 to 2022. We ask: Who deserves exceptions in times of crisis? We find that a cross-partisan parliament introduced extensive economic relief packages and temporary regulations to mitigate negative financial consequences for unemployed persons and furloughed workers. Politicians argued that individuals were not to blame for their unfortunate financial circumstances during the pandemic, and that the welfare state had to take the larger share of the burden. However, the government chose not to make temporary exemptions from the economic requirements for permanent residence or family reunification. It was explicitly stated that there was no reason to deviate (temporarily) from the general economic requirements during the pandemic, referring to the potential strain on the Norwegian welfare state if immigrants were not self-sufficient. We argue that the political rationale of incentives underlying these requirements falls short during economic crises and that this non-policy response illustrates new forms of welfare state chauvinism.

Introduction

As a response to the spread of the coronavirus in 2020, governments around the world introduced some of the strongest, most intrusive measures seen during peacetime, including lockdowns and other restrictions on physical contact. Consequently, as well as being a health crisis, the pandemic also led to a severe economic crisis, with businesses closing, record-high unemployment rates and furloughs. Although it was argued that the unprecedented restrictions were necessary from a health perspective, concerns were raised about their financial, social and psychological consequences. To ease some of the economic consequences, many governments passed regulatory and financial 'COVID-relief' packages to help businesses, employees and other particularly affected groups.
In the Nordic welfare states, as well as in many other countries, the overall narrative during the pandemic was that governments had to step in (Crabtree and Wehde, 2023; Gjerde, 2021, 2022; Greve et al., 2021; Nilsen and Skarpenes, 2020). This attitude is illustrated by the leader of the Norwegian Labour Party, Jonas Gahr Støre's (2020) statement during one of the parliamentary debates that led to the introduction of an economic relief package: '[…] of the economy are grinding to a halt. (…) They should know that we're there for them: that we're behind them.' However, the boundaries for whom the welfare state should protect during crises are not necessarily obvious, particularly with respect to immigrants. During the pandemic, migrants faced additional disadvantages. New, stricter border control practices were a visible consequence of the COVID-19 pandemic and have been the subject of emerging research (Boucher et al., 2021; Macklin, 2022; Martin and Bergmann, 2021; Tanyi and Egan, 2021). These temporary restrictions kept families apart, delayed arrivals of UN quota refugees, and left many stranded away from their homes and workplaces. These consequences were publicly debated, and the border restrictions were gradually removed with the arrival of vaccines and lower infection rates.

Beyond such immediate concerns, however, are additional negative consequences that immigrants face and that have been given less attention in public debates and the emerging scholarly literature. Although economic crises affect society at large, previous studies show that immigrants are hit disproportionately hard during recessions, which was also the case in most countries during the COVID-19 pandemic (OECD, 2022). Furthermore, many European states have in recent decades introduced conditional economic requirements for permanent residence, citizenship and family reunification, typically requiring evidence of employment or self-sufficiency for a certain period before application (Stadlmair, 2018; Eggebø et al., 2023). Thus, immigrants are more likely to experience additional negative consequences that other citizens are not subject to during recessions. Their unemployment and welfare state dependency during economic crises may not only lead to financial insecurity, but also directly influence their access to a secure legal status or to family reunification.

In this article, we analyse and compare the Norwegian policy responses during the COVID-19 pandemic for two target groups: 1) persons who became (partly) unemployed during the pandemic, and 2) immigrants. In most US and European studies investigating the social legitimacy of welfare state benefits, and in the literature on different groups' perceived 'deservingness', both unemployed people and immigrants are perceived as 'less deserving' compared to, for example, elderly persons or persons with disabilities (see literature review in van Oorschot and Roosma, 2017). Both target groups have also been subject to a general trend of more conditional requirements for receiving certain benefits, and, as will be shown, these requirements are strongly based on an incentivizing logic. Empirically, we ask: What policy responses were introduced to mitigate negative consequences for unemployed people and immigrants during the pandemic, and what was the rationale underlying these responses?
Using Norway during the pandemic as a case study, we analyse the policy processes leading up to (temporary) regulations and financial relief related to a) unemployment-related benefits and b) economic integration requirements for immigrants, from March 2020 to February 2022, when Norway lifted almost all COVID-related restrictions. Norway provides a highly relevant context, as the universalistic principle underlying the Nordic welfare states has, historically, been a central ideology and a measure for ending stigmatization and creating equal opportunities for disadvantaged groups by promoting equality and solidarity between classes, regions and genders (Anttonen et al., 2012). When the pandemic hit Norway, it was in a comparatively good fiscal position to respond to the crisis, with budget surpluses and low baseline unemployment. Greve et al. (2021) also highlight that, among the Nordic countries, Norway had the most generous policies during the pandemic, which makes it a 'most likely' case for state support in times of crisis.

We start with a brief review of recent developments concerning welfare state retrenchment and chauvinism and show how economic and conditional integration policies for immigrants based on a logic of incentives fall within this overall trend. After describing the data, we give a brief introduction to the Norwegian case. In the empirical analysis, we show how the Norwegian parliament introduced extensive economic relief packages and temporary regulations to mitigate the negative financial consequences for the individuals affected by COVID restrictions and the economic recession. A cross-partisan parliament argued that the individuals were not to blame for their unfortunate circumstances, and that the welfare state had to take the larger share of the burden. In the political assessment of economic requirements for immigrants, however, similar arguments were not evident. The Norwegian government actively chose not to provide temporary exceptions to the economic requirements for permanent residence or family reunification. It was explicitly stated that there was no reason to deviate (temporarily) from the general requirements during the pandemic, referring to the potential cost and strain on the Norwegian welfare state if immigrants were not self-sufficient. Lastly, we argue that the political rationale of incentives underlying the economic integration requirements falls short during economic recessions and discuss how the decision not to make exemptions during the pandemic may illustrate new forms of welfare state chauvinism.
Incentives and conditional requirements for welfare state support

Norway, like the other Nordic countries, developed a comprehensive welfare state in the post-war period, characterized by universalistic principles and a high level of public service provision. Since the 1980s, there has been a trend toward retrenchment in line with the overall objective of 'workfare' (arbeidslinja). The 'workfare' policy holds that it should be more attractive to work than to receive benefits, seeing the welfare state partly as a kind of incentive structure aimed at ensuring high employment. Concrete policy changes include means-testing of welfare benefits and activity requirements for unemployed persons (Gubrium et al., 2014), limited access to certain economic benefits, and an increased emphasis on the duties of citizens vis-à-vis the state as incentives to work (Breidahl, 2017). The rationale behind these changes has been to create incentives for higher labour-market participation and thereby reduce public expenses, as a response to concerns about the fiscal sustainability of the welfare state (Eggebø et al., 2023).

Similar logics of incentives and conditionality appear in integration policies for immigrants. Immigration has frequently been portrayed as a challenge to the welfare state's fiscal sustainability in public debates, where low employment rates among refugees and family migrants are portrayed as a potential threat (Jurado and Brochmann, 2012). During the 2000s, many European states introduced conditional integration requirements to regulate family reunification and access to secure legal status and citizenship, a trend that intensified during the 2015 'refugee crisis' (Hernes, 2018). Common requirements include language and citizenship tests (Baldi and Goodman, 2015). European countries have also introduced conditional economic requirements, whereby an immigrant must have been in employment or self-sufficient for x years before applying for permanent residence or family reunification (Eggebø et al., 2023; Stadlmair, 2018). While scholars have argued that such policies operate on a 'selective logic that distinguishes between "desired" vs. "undesired" migrants' (Keskinen et al., 2016; Staver, 2021), the stated aim is usually that, for example, access to family reunification can be used as an incentive for labour-market participation. This incentive logic runs counter to a previously more common conception of secure legal status and access to family reunification as preconditions for successful integration. In the incentive logic, immigrants earn rights by demonstrating financial independence and deservingness, whereas in the precondition logic, the state provides rights so that immigrants may achieve full participation (Borevi, 2014).

A central concept in research at the intersection of migration and the welfare state is welfare chauvinism, which emerged in studies of right-wing populist parties' 'use of the welfare state and welfare benefits to draw the distinction between "us" and "them"' (Keskinen et al., 2016, 322). The welfare chauvinism literature has expanded with research exploring the question of whether governments grant immigrants the same social rights as native citizens. Jørgensen and Thomsen define welfare chauvinism as an 'unwillingness to grant the same entitlements to all people in a society', focusing on the specific example of Denmark's separate and lower welfare benefit for newcomers (Jørgensen and Thomsen, 2016).
The concept of welfare chauvinism has also been used somewhat more broadly in survey research which examines 'how ordinary people view who is deserving of welfare provisions and under what conditions they perceive migrants to have rights to benefits' (Keskinen et al., 2016, 322). Deservingness studies find that people tend to hold such mental hierarchies, and that perceived control is one of the main criteria for being seen as deserving of welfare state support (van Oorschot, 2000; van Oorschot and Roosma, 2017). For example, studies have found that although unemployed people are often a stigmatized group, people support unemployment benefits more in times of high unemployment, because they then place less blame on unemployed people themselves for their situation (Fridberg and Ploug, 2000; Jeene et al., 2014).

This question of perceived control can be directly tied back to the logic of incentivizing policies. An underlying premise of all such incentivizing policies is that individuals have the possibility to influence their own situation, e.g., to find employment. The (implicit) assumption is that the cause(s) of unemployment are individual, and something the individual can affect by altering their behaviour (Heggebø, 2021). The introduction of conditional economic integration requirements in Norway in 2016 exemplifies this line of argument: 'It is of utmost importance that immigration regulations encourage people to take up education and apply for work rather than passively receiving welfare benefits' (Ministry of Justice and Public Security, 2015: 80-82). Here, the suggestion is that immigrants' lower labour market participation and higher uptake of benefits are due to their not trying hard enough, or not having a strong enough incentive.

Thus, both unemployed people and immigrants have increasingly been subject to policies with a similar political rationale, emphasising restrictive and conditional policies as both 1) an incentive and 2) a necessity to ensure the survival of the welfare state (Breidahl, 2017). However, this incentive logic has been criticised in both political and academic debates because it falls short if the individual does not have the possibility to influence their situation. If their unemployment is caused by external factors, such as discrimination by employers or health issues, the premise of the policy falls short (Heggebø, 2021).

Higher welfare state support and exceptions to existing restrictive policies may be legitimised in times of crisis (Fridberg and Ploug, 2000; Jeene et al., 2014). For those who normally support incentivizing policies, an economic crisis would be a classic example of an external factor that could legitimise introducing exceptions to restrictive and conditional policies. But what happened to the incentivizing policies for unemployed persons and immigrants during the pandemic? Here, it is relevant to study not only whether immigrants and unemployed persons got the same entitlements and programs, but also whether they got the same exceptions due to changes in external factors. Who was perceived as deserving of exceptions in times of crisis?
Methods and data

We analyse the policy processes leading up to (temporary) regulations and economic relief packages related to a) unemployment-related benefits and b) economic integration requirements for immigrants. We include processes from March 2020 to February 2022, when Norway lifted almost all COVID restrictions. For the policy processes leading to relief packages and regulations related to unemployment, we analyse publicly available documents including governmental policy propositions and announcements, and the transcripts of the related parliamentary debates.

As will be shown, the Norwegian government did not propose exemptions for immigrants from the economic requirements for permanent residence or family reunification. Consequently, there was no formal public policy process with accompanying policy documents or parliamentary debates to analyse. This is, then, a study of a non-decision, or a decision to refrain from doing something. In order to trace how this 'non-decision' was made, we located internal documents related to this issue between the Directorate of Immigration (UDI) and the Ministry of Justice and Public Security in their public journal, based on a UDI representative mentioning a specific letter during a debate we attended in spring 2021. Through several Freedom of Information Act (FOIA) requests directed to both the Ministry and the Directorate, we were granted partial access to these documents. They revealed that there had been an internal process within the Ministry, and in dialogue with the Directorate, concerning potential exemptions from economic integration requirements for immigrants. These documents provided important insights into the policy process. However, as we were only granted partial access, the documents may present a partial picture. To supplement these documents and avoid misinterpreting the process, we conducted three interviews with bureaucrats involved in the internal process in the Ministry of Justice and Public Security, the Ministry of Employment and Inclusion, and the UDI. Additionally, we found that the topic of potential exemptions from the economic integration requirements was brought up in a public consultation concerning other residence requirements for immigrants and once during parliamentary question hour, providing additional information about the rationale for not introducing exceptions.

Through an argumentation analysis (Bergström and Boréus, 2005), we analyse the rationale behind the decisions to introduce, or not to introduce, (temporary) exceptions to previous unemployment and integration policies during the crisis. We identify the problem definitions, premises, and logic of consequence for (not) introducing exceptions to restrictive policies in times of crisis.
The Norwegian context: Economic integration requirements and the financial situation before the COVID-19 pandemic

With respect to conditional integration policies, Norway has often been described in a Nordic context as being 'between' Denmark's restrictive policies and Sweden's traditionally liberal policies (Hernes, 2018). Norway has, however, introduced economic requirements for both family reunification and permanent residence. Since the 1980s, sponsors for family immigration have been required to document sufficient means of subsistence. While large groups were initially exempted from this rule, the requirement was significantly raised and expanded to cover almost all sponsors in 2008. Only recently arrived refugees and family members of EEA nationals are fully exempt today. The income requirement for family reunification holds that the reference person residing in Norway must 1) document adequate income for the past year (based on tax returns), 2) document adequate expected income for the current/future year, and 3) not have received means-tested welfare benefits. The required income, which is indexed annually, is approximately EUR 25,000 (NOK 300,988) in 2023.

After the large influx of refugees in 2015, Norway also introduced a new income requirement for permanent residence: only immigrants who earn more than approximately EUR 24,000 per year (NOK 278,693 in 2023, indexed annually), and who have not received means-tested social assistance during the past 12 months, qualify for permanent residence. This requirement is based on the individual's tax return, meaning it is not sufficient for the applicant to be supported by a family member (e.g., a spouse).

When the pandemic hit, Norway was in a comparatively good fiscal position to respond to the crisis, with budget surpluses and low baseline unemployment (Greve et al., 2021). At the start of the pandemic, Norway had a centre-right government, while a centre-left government took office after the 2021 election. As both were minority governments, they had to seek parliamentary support for regulatory and financial changes during the pandemic. However, most of the changes made during this period were subject to large cross-partisan compromises, in line with the Norwegian 'tradition' of seeking such compromises during times of crisis (Hernes, 2018).

Policy (non-)responses to mitigate negative consequences during the pandemic

The 'coronavirus packages': The welfare state steps in

As a response to the arrival of the coronavirus in Norway, on 12 March 2020, the Norwegian centre-right government introduced what they called the 'strongest and most intrusive measures we have seen in Norway during peacetime' (Prime Minister's Office, 2020). These measures included strict social distancing rules and lockdowns of businesses and social services. The financial consequences of these lockdowns, both at a societal and an individual level, were addressed the very next day, with the government declaring that it would implement financial and regulatory emergency measures within a few days.
Over the following days, a large majority of the political parties in the Norwegian parliament negotiated and agreed on the first of several 'corona packages'. The first major package, which included temporary regulatory changes and supporting financial compensation, was announced on 20 March. The united message from a cross-partisan majority was that the welfare state and government would take a large share of the burden and provide for the people in this difficult situation. As the leader of the Centre Party, Trygve Slagsvold Vedum (2020), said in the parliamentary debate:

What is good about Norway, the good thing about the Norwegian political system, is that when crises strike, we work together, and we work together across party lines. We help each other, and we work together as a society and community.

As many societal functions and businesses were forced to lock down temporarily, the politicians introduced several measures to ease the consequences for both the affected businesses and their employees. New temporary rules concerning furloughs, unemployment benefits and sick pay were introduced. For example, furloughed workers would receive their full salary for the first 20 days, compared to previous rules under which they received no salary for the first three days and then only a reduced salary. Another adjustment was that the level of benefits for furloughed and unemployed people after this period was raised, particularly for those with lower incomes. In August, the total number of weeks a person could be furloughed was raised from 26 to 52.

Similarly, regulations for self-employed persons and freelancers were altered. For example, freelancers and self-employed persons would now receive sick pay from the fourth day, compared to day 17 under previous rules. Additionally, self-employed people usually had to use their own funds before they were entitled to social benefits, but exceptions were made to this rule, which gave self-employed persons easier access to social benefits. In April 2020, the government implemented further measures that ensured a minimum salary for the self-employed and freelancers if they became unemployed. These exceptions to ordinary rules were justified in the parliamentary debate by highlighting that the 'victims' were blameless: 'For the self-employed, who are completely innocent in this situation, who lose their income base from one day to the next, we have not previously had any such income security scheme.' [our emphasis] (Vedum, 2020).

From the beginning of the pandemic in 2020, over the following two years and across governments, several politicians also emphasised that vulnerable groups should get extra assistance, as they are often particularly affected by such financial crises. Measures were introduced targeting additional groups that were not eligible for support through the general regulations, e.g., apprentices in vocational education, students and people with part-time jobs or low income. In the first parliamentary debate in March 2020, the leader of the Socialist Left Party, Audun Lysbakken (2020), emphasised that:

A unified effort [dugnad] is not a unified effort if the burden is not fairly distributed, and there would be a lack of unified effort if those with the smallest or most uncertain incomes were to take the toughest blow. Workers should not pay the price for the crisis. We now have an agreement that distributes the burden more fairly.
The Conservative Minister for Research and Education, Henrik Asheim, stated in a press release when introducing new measures to secure students' financial situation in April 2020: 'I have previously said that we work to ensure that no one falls between the cracks. Therefore, I am very happy that the government has landed on a scheme to help students who have ended up in a financially difficult situation.' (Ministry of Research and Education, 2020).

These measures directed at vulnerable groups were prolonged during the pandemic to 'secure more social justice', as emphasized by the Labour Party Minister of Employment and Inclusion, Hadia Tajik (Ministry of Employment and Inclusion, 2022).

During 2020-2022, Norway experienced waves of infection that tested the capacity of the health services. Norway retracted and reintroduced temporary local and national restrictions on social distancing up until March 2022, when almost all restrictions were lifted. Throughout this two-year period, the different governments continuously extended the temporary regulations, easing requirements for businesses and unemployed people. As the Conservative Minister of Finance, Jan Tore Sanner, stated in January 2021:

We will have financial measures as long as the crisis lasts, but it is important that the measures are adapted to the infection situation. We are still in the middle of the crisis and must continue to provide security and predictability to those affected financially. (Ministry of Finance, 2021)

Analysing the politicians' argumentation in the parliamentary debates revealed the underlying premise that this was an 'extraordinary situation' (Jensen, 2020), which could legitimise unprecedented measures. It was also stressed that people were affected by this situation through no fault of their own. Liberal MP Terje Breivik (2020) also emphasised this point:

Many people are now unsure of how they will manage everything. How will we get the bills paid this month and in the future? What will happen to my workplace and my kids' future? These questions may be especially pressing for the self-employed (…). They probably feel more insecure about their investment than ever, and the blame is not theirs [our emphasis].

Another premise was that the regulations and financial measures would be temporary, to get by in this extraordinary situation. Based on these premises, the overall narrative, throughout the two-year period and across governments and partisan lines, was that the welfare state had to step up and ease the burden on businesses and ordinary people, irrespective of the financial cost. This point is illustrated by the leader of the Centre Party, Trygve Slagsvold Vedum (2020):

The two measures we are taking when it comes to sickness and care benefits are enormously expensive. But we have chosen, across party lines, that we, as a community, should carry the burden, instead of individuals, single families and companies. This is the strength of Norway as a nation state; this is the strength of our community.
No exemptions from economic integration requirements for immigrants

In the integration field, early COVID-19 measures were aimed primarily at newly arrived refugees participating in the introduction programme. Language classes and job placements were cancelled or scaled down, but refugees were required to attend the programme full-time in order to receive the 'integration benefit'. Rapid amendments were made during March 2020 to ensure that refugees could still receive the introduction benefit even though corona restrictions partly hindered programme participation. By April 2020, however, more substantive policy changes were underway. The new temporary legislation opened up for extensions of the introduction programme, Norwegian language training, and career counselling. Nevertheless, in addition to being delimited to a narrow target group (newly arrived refugees and their family members), these measures were short-term in nature. Staff working with refugees during this period expressed concerns about the long-term effects of the pandemic for immigrants, e.g., as a result of interrupted job placements and poorer language acquisition (Hernes et al., 2022).

The UDI identifies a problem

In April 2020, the UDI contacted the Ministry of Justice and Public Security to notify them of a possible need for amendments to the Immigration Act and Regulations to respond to the pandemic. They highlighted three separate issues, potentially affecting all non-EEA immigrants in Norway. First, the Directorate noted that travel restrictions could leave people stranded abroad for so long that they would either lose their permanent residence permit or fail the presence test. They suggested that the rules concerning both granting and loss of such permits should be amended, to ensure that inability to return did not have a detrimental impact. Second, the UDI pointed to the income threshold for permanent residence. While unemployment benefits or furlough pay counted toward the minimum income, such benefits were paid at lower rates than the recipients' prior salaries. This could lead to people failing to qualify for permanent residence due to pandemic furloughs and unemployment, because potential applicants would no longer meet the required income threshold. They did not propose amendments, as it was unclear how long pandemic restrictions would be in place. Finally, they made similar observations concerning the income requirement for family reunification. Sponsors must demonstrate a certain level of income both in the year preceding the application and in the current year, and unemployment benefits do not count towards the current income threshold (although they count as past income). They argued that exemptions should be made for people who fulfilled the income requirement when they filed the application. The letter ended with a reflection on the possible long-term consequences of high unemployment for people's ability to obtain family reunification, saying that 'how significant the consequences must be assessed in the longer term, but we wish to make the Ministry aware of the situation'. The letter suggests that the Directorate's understanding of the problem was that the pandemic would lead to immigrants, through no fault of their own, losing their status or not qualifying for certain immigration permits.
The Ministry responds

In October 2020, six months after the letter from the UDI, the Ministry of Justice and Public Security sent a proposal for temporary amendments to the Immigration Regulations out for public consultation, addressing only the first of the three concerns raised by the UDI. They appeared to agree that permanent residents should not lose their status if they were physically unable to return from abroad, nor should people fail the presence requirement for permanent residence for the same reason. The concerns raised pertaining to the income rules were not included in the consultation. Like most consultations during the pandemic, it had a short deadline and received few responses. Three of the main NGOs did, however, respond, and the Norwegian Organisation for Asylum Seekers (NOAS) included an additional section in their letter where they noted that they could not understand why an exemption from the income rules had not been proposed. As NOAS noted, the Ministry's reasoning, concerning conditions not being fulfilled through no fault of the person, could also apply to the economic integration requirements. As they argue, 'if the person, due to COVID measures, loses their job, income or is forced to seek social assistance, this could entail that the self-sufficiency requirement is not fulfilled'. NOAS' suggestion was not taken up, and indeed the entire proposal was shelved.

Interestingly, only a week prior to issuing this public consultation, our FOIA requests show that the Ministry had considered making the very exemption NOAS requested. On 6 October 2020, the Ministry sent a letter to the UDI referring to a proposed instruction to make exemptions from the self-sufficiency requirement, which they had, according to the letter, submitted to the UDI for comment in July 2020:

Following renewed consideration, the Ministry of Justice and Public Security has concluded that it is not desirable to make exemptions from the income requirement in accordance with the Immigration Regulations Section 10-11 for persons with income loss relating to the outbreak of COVID-19.

This decision not to make exemptions was made when Norway was a few weeks into its second wave of infections in autumn 2020, and local pandemic measures had already been reintroduced in Oslo. The letter does not explain why they consider it 'not desirable'. Some clues may, however, be found in later communication. In late 2021, following a high-profile case involving an American entrepreneur who lost his residence permit due to reduced income during the pandemic, a representative of the Liberal Party asked the new Minister of Justice and Public Security, Emilie Mehl (representing the Centre Party, elected in September 2021), 'what measures the Minister will take in order to give the UDI […] more flexibility in the interpretation of the income requirement during the COVID pandemic'. The Minister responded that:

The purpose of the income requirement is, as is well known, that foreigners who come to Norway should as a main rule be self-sufficient and not become a strain on the Norwegian welfare society. This is a basic principle which there is broad political agreement on […] Any exemptions from the income requirement must be considered in light of the cost to society of a foreigner not achieving sufficient income after the pandemic and needing support from the state. […] The government has not found reason to deviate from the main rule of the income requirement for self-employed persons so far during the pandemic. Neither did the Solberg government.
[our emphasis] (Mehl, 2021)

While this response concerned labour migrants specifically, it is notable that the Minister used the term 'foreigners who come to Norway', implying broader application. She also emphasised cross-party agreement and continuity of policy orientation with her predecessors when stating that they had not found any reason to make temporary exemptions from the income requirement during the pandemic. This suggests at least a form of welfare chauvinism at work: however exceptional the circumstances, the welfare state would not step up for immigrants who had not yet fully become 'insiders' by complying with the conditions laid out.

While one might argue that labour migrants would be the least likely beneficiaries of such support, it is notable that very similar arguments were made concerning the income requirement for family reunification. During the interview with the Ministry representative, s/he read out loud a written reply from the Ministry to an individual concerning these rules. In the letter, the Ministry refers to the considerations behind the rules:

It is uncertain how long the current situation with layoffs and lost income resulting from the COVID-19 pandemic will last and what long-term consequences it will have for applications and family immigration. Access to family immigration is of great welfare importance. It can be seen as an intrusive measure that the application is refused due to the income requirement no longer being met when the cause is COVID-19. On the other hand, any exceptions must be assessed against the societal cost if many of the reference persons do not return to work after the pandemic and the family have to be supported by the public. We have no assurance that all the reference persons who have been laid off or lost their income will return to work. It will have negative consequences both financially and in terms of integration if the person who was left permanently out of working life is allowed to bring family members to Norway. Against this background, the government has chosen not to make exceptions to the income requirement for people who have been made redundant or have lost their income due to the pandemic.

In the Ministry's letter, it is acknowledged that it may be 'seen as an intrusive measure' if the individual does not meet the income requirement when the cause is COVID-19. Nonetheless, the cost for the welfare state of potential long-term unemployment caused by the pandemic is highlighted as a reason not to make exceptions. The interviewee continues by clearly implying that this decision came from the political leadership.
When comparing the rationales for introducing temporary exceptions and relief packages for unemployed people with those for not introducing temporary exceptions to economic integration requirements, there is a striking difference. Both explicitly highlight the premise that individuals are unemployed, or in an extraordinary financial situation, through no fault of their own, but as a consequence of the lockdowns during the pandemic. Thereby, the politicians acknowledge that both groups lack 'control' over their situation, which suggests they could be seen as 'deserving' of public support (van Oorschot, 2000; van Oorschot and Roosma, 2017). However, while this premise was used to legitimise exceptions and financial measures aimed at the general population who became unemployed, a similar rationale was not evident for immigrants concerning economic integration requirements. Instead of emphasising how the welfare state should step up for the individuals, the political rationale for not making temporary exemptions from the economic integration requirements was reversed: the rationale for not introducing exemptions was that individuals could potentially impose negative economic consequences on the welfare state.

Our analysis indicates cross-partisan agreement on not making exceptions to the economic integration requirements, as this view was expressed by two Ministers from different sides of the aisle. However, a general challenge when there is no public policy process to study is that it is not clear exactly how deep the agreement runs across the political spectrum. The fact that there has not been a public policy process may have suppressed potential critique and opposition.

What are the potential implications?

This article studies a decision not to make exceptions to the existing requirements, which poses a methodological challenge in terms of measuring the potential impact and effects this non-decision has had, and may still have, on immigrants in Norway. For example, it is challenging to analyse how many people refrained from applying for family reunification or permanent residence because of unemployment or because their income dipped below the threshold, or how many people left Norway because they were unable to comply with economic requirements to renew their residence permits. Not enough time has passed to study such potential medium- and long-term effects. However, in our analysis, we have documented concerns about negative consequences raised by a variety of actors and in different arenas: 1) cases that reached the media, 2) question hour in parliament, 3) would-be sponsors for family reunification who contacted the Ministry to complain, 4) the UDI's correspondence with the Ministry, and 5) NGO responses in public consultation processes.
The stakes may be high, especially for certain groups. For separated families, family reunification could be severely prolonged. Persons in unhappy or even violent relationships may have been unable to leave their partner, since family migrants' residence permits are conditional upon the maintenance of the relationship until they achieve permanent residence (which, of course, requires adequate income). Refugees have increasingly been made to understand that their stay in Norway is highly conditional. Since 2016, the Ministry of Justice and Public Security has imposed a policy of compulsory cessation of refugee status in case of improvements in the situation in the country of origin, and attempts have been made to withdraw refugee status from Somali refugees (Brekke et al., 2021). Refugee status may be withdrawn until permanent residence is obtained (Eggebø and Staver, 2020). Any delay in access to permanent residence due to inadequate income would lead refugees to worry not only about employment, but also about the possibility of forced return or loss of residence.

Concluding discussion

We have documented the Norwegian policy responses in the fields of unemployment and integration during the pandemic from 2020 to 2022. The empirical analysis shows that the general incentive logic, which has permeated both integration and general unemployment policies in recent years, was only partially abandoned during the pandemic. The politicians explicitly justified (temporary) policies and relief packages aimed at mitigating negative consequences for the general population concerning unemployment. The cross-partisan mantra was that the welfare state would make exceptions and support people during these extraordinary circumstances, justified by the narrative that 'it's not your fault; it's the pandemic'.

We do not find similar exceptions in the immigration rules. It is important to acknowledge that some policies were introduced to ease the short-term negative consequences for a select group of immigrants (e.g., 'integration packages' for refugees). Immigrants who were entitled to unemployment benefits received the same financial assistance as other unemployed persons. Nevertheless, the Norwegian government made an active choice not to make exemptions from the existing economic integration requirements, although it was explicitly acknowledged that the economic situation during the pandemic could have repercussions for immigrants' residence permits and access to family reunification. The main argument was that (potentially long-term) unemployment for immigrants caused by the economic recession could be a strain on the welfare state. It was deemed irrelevant that it was not their fault.

The stated rationale, that these economic integration requirements are intended to function as incentives, can be questioned (Eggebø et al., 2023). If they are, instead, intended as a selection mechanism, it is beside the point whether the structural conditions of the pandemic made it impossible for immigrants to change their behaviour accordingly. Indeed, an extensive literature has noted that integration requirements more often demonstrate a selective effect on immigration than positive effects on labour-market integration (Goodman, 2011; Jensen et al., 2019). Broader trends towards selective immigration policies suggest that this is more of a feature than a bug (de Haas et al., 2018; Staver, 2021).
Our analysis suggests that there is a limit to who is seen as 'deserving' of help during extraordinary times (Chauvin et al., 2013; Jørgensen and Thomsen, 2016). Although originally legitimized by the same underlying logic of incentivizing the individual, exceptions in times of crisis were only applied to policies addressing the general population, and not to the economic integration requirements for immigrants. This finding may exemplify a new kind of welfare state chauvinism during times of crisis, where immigrants are not considered deserving of, or entitled to, exceptions that might ease their specific negative consequences. Earlier studies of welfare state chauvinist policies have examined active decisions or policies that distinguish between immigrants and the general population (e.g., Jørgensen and Thomsen, 2016). As our study shows, welfare chauvinism may, however, also result from non-decisions or failures to make exemptions from existing rules in times of crisis. While active political decisions and new policies are often subject to evaluation and open debate, non-decisions and non-policies may avoid such scrutiny. Although the consequences of a non-decision may be just as severe for the individual affected as those of an active decision, its effects may be more easily overlooked. Thus, although potentially methodologically challenging, such non-decisions should be the subject of both public and academic scrutiny when debating welfare state chauvinism.

With millions of Ukrainians seeking refuge in European countries as a consequence of the full-scale Russian invasion in February 2022, both the question of who is deserving of exceptions (in times of crisis) and the trend of increased selectivity in immigration and integration policies have risen to the top of the political agenda. Although it is still too early to draw conclusions about European governments' policy responses in this current situation, analyses of the initial responses show that many European governments have made temporary exceptions to their existing immigration and integration policies, e.g., related to housing, access to integration measures and financial support (Korsrud, 2023). Whether these policies treat Ukrainians more or less favourably than other groups differs. For example, in Norway, some policy changes provide Ukrainian protection seekers with more flexibility and choice compared to other groups of protection seekers (Hernes et al., 2022), while in Sweden, Ukrainian beneficiaries of temporary protection receive lower financial assistance than other beneficiaries of international protection (Berlina, 2022). Still, initial analyses indicate a common trend towards more selective policies for different groups of protection seekers, where potential welfare chauvinist policies and questions of perceived deservingness may affect ongoing and future policy developments. The supposedly egalitarian ethos of the Nordic welfare states makes them a fruitful context for further study of policy development and its consequences for different target groups.
2023-09-15T15:15:15.061Z
2023-09-12T00:00:00.000
{ "year": 2023, "sha1": "2be40663875841c4e6210dc77d252f58e028ee0b", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/02610183231199656", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "de09cba1c41575e9179321cf35c370bc1b19748f", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [] }
263333425
pes2o/s2orc
v3-fos-license
Effect of iron and calcium on radiation sensitivity in prostate cancer patients relative to controls

Abstract

High intake of red meat and/or dairy products may increase the concentration of iron and calcium in plasma—a risk factor for prostate cancer (PC). Despite our understanding of nutrients and their effects on the genome, studies on the effects of iron and calcium on the radiation sensitivity of PC patients are lacking. Therefore, we tested the hypothesis that high plasma levels of iron and calcium could increase baseline or radiation-induced DNA damage in PC patients relative to healthy controls. The present study was performed on 106 PC patients and 132 age-matched healthy individuals. The CBMN assay was performed to measure micronuclei (MN), nucleoplasmic bridges (NPBs), and nuclear buds (NBuds) in lymphocytes. Plasma concentrations of iron and calcium were measured using inductively coupled plasma atomic emission spectroscopy. MN, NPBs, and NBuds induced by radiation ex vivo were significantly higher in PC patients with high plasma iron (P = .004, P = .047, and P = .0003, respectively) compared to healthy controls. Radiation-induced MN and NBud frequencies were also significantly higher in PC patients (P = .001 and P = .0001, respectively) with high plasma calcium levels relative to controls. Furthermore, the radiation-induced frequency of NBuds was significantly higher in PC patients (P < .0001) with high plasma levels of both iron and calcium relative to controls. Our results support the hypothesis that high iron and calcium levels in plasma increase sensitivity to radiation-induced DNA damage, and point to the need to develop nutrition-based strategies to minimize DNA damage in the normal tissue of PC patients undergoing radiotherapy.

Introduction

Prostate cancer (PC) is a common and heterogeneous disease that affects men's health worldwide, and is the second most common form of cancer in men. Due to its highly variable natural history and presentation, optimal management and better clinical outcomes remain a challenge [1].

Iron is an essential mineral that plays important roles in many biological processes such as oxygen transport, ATP production, and DNA synthesis [2-4]. High concentrations of iron have been reported in blood as well as in tissues surrounding tumours, and it has therefore been suggested that an increased level of iron is associated with increased oxidative stress, which can increase DNA damage [5,6].

Calcium is another important mineral that is involved in many cellular, structural, and functional roles, such as the maintenance of bone structure [7,8].

Essential minerals such as selenium, zinc, iron, and calcium have been indicated as factors that can influence prostate health [9]. High consumption of dairy products and calcium is associated with an increased risk of PC, whereas selenium intake is associated with preventing PC [10]. The evidence so far from many epidemiological studies is not conclusive about the role of iron and calcium in contributing to PC risk [11-18].
Personalization of therapies to cure cancer and prevent radiation-induced toxicity remains an important clinical goal; however, its success is dependent on a thorough understanding of various innate genetic, epigenetic, nutritional, and lifestyle factors that may substantially affect radiation sensitivity [19-22]. In spite of well-characterized molecular mechanisms of the radiobiological response, identification of individuals at risk of adverse therapeutic outcomes remains a challenge requiring the integration of genomic analyses with other 'omic' technologies [20]. Exposure to ionizing radiation as part of cancer treatment to control or kill malignant cells results in increased toxicity, oxidative stress, and inflammation in normal tissue, which can lead to genetic instability and normal tissue morbidity such as rectal and anal dysfunction [21,22].

Cancer is a multistep and progressive disease, and chromosome breakage, loss and/or rearrangement are important initiating events in cancer [23]. The Cytokinesis-Block Micronucleus Cytome (CBMNcyt) assay is a well-established tool for evaluating chromosome breaks, chromosome rearrangements, and chromosome loss [24]. It is a multi-endpoint assay that assesses DNA damage endpoints (in the form of binucleated cells with micronuclei [BN-MN], nucleoplasmic bridges [BN-NPBs], and nuclear buds [BN-BUDs]) as well as other cellular events (such as necrosis, apoptosis, and cell proliferation) simultaneously [24]. In the last decade, this assay has become a thoroughly validated and standardized technique for evaluating the in vivo radiation exposure of occupationally, medically, and accidentally exposed individuals [25,26]. Keeping in mind that the roles of iron in cellular proliferation, metabolism, and metastasis underpin its association with tumour growth and progression, and that calcium has a suggested role in increasing the risk of PC [11-18], we tested the hypothesis that abnormally high levels of iron and calcium in plasma increase baseline chromosome instability in lymphocytes of PC patients relative to healthy controls. In addition, we also tested whether hypersensitivity of lymphocytes in PC patients in response to exposure to ionizing radiation (3 Gy) may be due to an interactive effect of high plasma levels of iron and calcium.

Study participants

This is a hospital-based case-control study; a detailed description of the study population and the inclusion and exclusion criteria is provided in our previous publications [27,28]. The study included 106 PC patients and 132 healthy age-matched controls. All patients who were part of this study were classified as requiring radiotherapy for cancer control and were untreated at the time of enrolment. All subjects gave written informed consent for participation. The study was conducted in accordance with the Declaration of Helsinki, and approved by the Human Research Ethics Committee (HREC) of the Royal Adelaide Hospital (RAH: 031215), and this was approved and adopted by the Commonwealth Scientific and Industrial Research Organization (CSIRO) HREC.
Blood collection, irradiation of lymphocytes, and CBMN assay

Blood was collected from PC patients and controls after an overnight fast in lithium heparin tubes. An hour after blood collection, 500 μl of whole blood was mixed with 4.5 ml of pre-warmed RPMI-1640 culture medium (Thermo Trace, Australia) supplemented with 10% foetal calf serum (Thermo Trace, Australia). To induce radiation-induced DNA damage in lymphocytes, the whole blood cultures were exposed to 3 Gy γ-rays from a 137Cs source (Cis Bio IBL 437 C Blood Product Irradiator, dose rate 5.34 Gy/min). This irradiation dose is known to induce a 100-fold increase in micronucleated (MN) BN cell frequency relative to baseline in the lymphocyte CBMN cyt assay, as evident from a previous study [26]. The CBMN assay was performed as described previously [24], with slight modifications, and whole blood cultures were set up in duplicate. For comparing the results, un-irradiated blood cultures were used as controls. Following radiation exposure, both irradiated and un-irradiated cultures were incubated for 1 h in a humidified incubator at 37°C containing 5% CO2. Following this incubation, 45 μl phytohaemagglutinin (PHA, 22.5 mg/ml; Jomar Diagnostics, Australia) was added to each culture, and the cultures were incubated for a further 44 h prior to the addition of cytochalasin-B (Cyto-B; Sigma, Australia) to a final concentration of 6 μg/ml. Following the addition of Cyto-B, the cultures were incubated for another 24 h. Lymphocytes from these cultures were separated by carefully overlaying the evenly distributed culture contents onto 1.5 ml Ficoll-Paque (Amersham Biotech) in a TV10 tube (Sarstedt, Australia). The tubes were then centrifuged for 30 min at 400×g at 20°C. The isolated buffy lymphocyte layer (~200 μl) was transferred to another TV10 tube containing 600 μl of Hanks balanced salt solution (HBSS; Thermo Trace, Australia) and centrifuged at 180×g at 20°C for 10 min. The supernatant was discarded and the cells (lymphocytes) were re-suspended in 300 μl of RPMI-1640 culture medium containing 5.0 μl dimethyl sulfoxide (Sigma, Australia) to facilitate dis-aggregation of the cells. Cells were then transferred onto slides using a cytocentrifuge (Shandon, Runcorn, UK). The air-dried slides were fixed and stained using Diff-Quik (LabAids, Narrabeen, Australia). The slides were coded and scored for BN cells containing MN, NPBs, and NBuds as per the previously described scoring criteria explained in our previous publications [24,28]; the Nuclear Division Index (NDI) was also calculated based on the frequency of mononucleated, binucleated, and multinucleated cells [24,28].

Micronutrient analysis and PSA levels

Plasma concentrations of iron and calcium were measured using inductively coupled plasma atomic emission spectroscopy (ICP-AES). All samples were first digested using nitric acid and hydrogen peroxide to ensure good recovery of all elements. Duplicate analyses were carried out for each sample. Both cases and controls were coded and analysed in the same batch to minimize inter-assay variation.
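The NDI mentioned in the assay description above is commonly computed from the distribution of viable mono-, bi-, and multinucleated cells. The following is a minimal sketch of the standard formula; note that this is the widely used variant, and the exact calculation in refs [24,28] may differ slightly.

```python
# Hedged sketch of the Nuclear Division Index (NDI) as commonly defined
# for the CBMN assay; the precise variant used in refs [24,28] may differ.
def nuclear_division_index(m1: int, m2: int, m3: int, m4: int) -> float:
    """NDI = (M1 + 2*M2 + 3*M3 + 4*M4) / N, where Mi is the number of
    viable cells with i nuclei and N is the total number scored."""
    n = m1 + m2 + m3 + m4
    return (m1 + 2 * m2 + 3 * m3 + 4 * m4) / n

# Example: 400 mono-, 500 bi-, 60 tri- and 40 tetranucleated cells
# give NDI = (400 + 1000 + 180 + 160) / 1000 = 1.74.
print(nuclear_division_index(400, 500, 60, 40))
```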
Statistical analysis

All data for iron, calcium, and the other parameters were analysed for Gaussian distribution to determine whether to use parametric or non-parametric tests. To determine the significance of the differences between the two groups with regard to age, PSA levels, and baseline and radiation-induced MN, NPBs, and NBuds in binucleated cells, we used the unpaired non-parametric Student's t test. The results obtained for the DNA damage biomarkers were analysed using one-way ANOVA with respect to high or low plasma iron and calcium concentration (low concentrations were ≤ 1.20 mg/L and ≤ 95.50 mg/L, and high concentrations were > 1.20 mg/L and > 95.50 mg/L, for iron and calcium, respectively). These cut-off values were based on the median concentrations of the healthy controls. Two-way ANOVA was performed to determine the percent variance explained, and the interaction, in control and PC subjects with respect to the association of CBMN assay biomarkers with plasma iron or calcium and with PC status. All analyses were performed using PRISM 9.0 (GraphPad Software) and P values < .05 were considered statistically significant.

Demographic and clinical characteristics

The demographic characteristics of the 106 cases and 132 controls are summarized in Table 1. Cases and controls did not differ significantly in terms of age. PC cases were 2.14 years older (mean age 71.24 ± 7.18) than the controls (mean age 69.07 ± 7.99). PC cases had a 4-fold greater plasma PSA concentration compared to the controls (P = .0001). Furthermore, the plasma concentrations of iron and calcium were significantly higher in PC cases relative to controls, by 29.5% (P < .0001) and 11.6% (P < .0001), respectively. At baseline, lymphocytes from cases had a slightly higher mean NDI than did lymphocytes from controls (1.87 ± 0.08 compared with 1.86 ± 0.02, respectively; P = .96).

Effect of high iron concentration on DNA damage biomarkers at baseline and after 3 Gy radiation challenge in control and PC patients

The ANOVA results indicate that baseline MN frequency was significantly higher in PC patients (P = .04) with high iron status compared to controls with high iron (Fig. 1A). A similar trend was observed in both groups when the plasma iron concentration was low; however, the difference was not significant. When the lymphocytes from controls and PC patients with high plasma iron levels were irradiated with 3 Gy, MN frequency was significantly higher in patients than in the corresponding controls (P = .004; Fig. 1B), whereas it was not significantly higher (P = .63; Fig. 1B) in PC cases compared to controls with low plasma iron levels.

The baseline frequency of NPBs was significantly higher in PC cases with high iron than in controls (P = .0009; Fig. 1C). Similarly, in PC cases with low iron levels, baseline NPBs were also significantly higher compared to controls with low iron (P = .007; Fig. 1C). The radiation-induced frequency of NPBs showed a similar trend, but was only marginally significantly higher in PC patients with high iron levels (P = .047; Fig. 1D). The baseline frequency of NBuds was significantly different between PC cases and controls only in subjects with high iron levels (P = .05; Fig. 1E). However, the radiation-induced frequency of NBuds was significantly higher in PC cases compared to controls with either high or low iron levels (P = .0003 and P = .03, respectively; Fig. 1F).
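To make the median-split and two-way ANOVA workflow described under Statistical analysis above concrete, the following is a minimal sketch in Python. The study itself used GraphPad Prism 9.0; this statsmodels version only illustrates the same logic, and the DataFrame column names (group, plasma_iron, plasma_calcium, and the biomarker columns) are hypothetical placeholders rather than the authors' actual variable names.

```python
# Minimal sketch of the median-split and two-way ANOVA workflow described
# above. Column names (group, plasma_iron, plasma_calcium, biomarker) are
# hypothetical placeholders, not the authors' actual variable names.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def analyse_biomarker(df: pd.DataFrame, biomarker: str) -> pd.DataFrame:
    # Dichotomize each mineral at the median of the healthy controls,
    # mirroring the cut-offs of 1.20 mg/L (iron) and 95.50 mg/L (calcium).
    controls = df[df["group"] == "control"]
    iron_cut = controls["plasma_iron"].median()
    ca_cut = controls["plasma_calcium"].median()
    df = df.assign(
        iron_status=(df["plasma_iron"] > iron_cut).map({True: "high", False: "low"}),
        ca_status=(df["plasma_calcium"] > ca_cut).map({True: "high", False: "low"}),
    )
    # Two-way ANOVA: % variance explained by iron, calcium and their
    # interaction, taken as each term's share of the total sum of squares.
    model = ols(f"{biomarker} ~ C(iron_status) * C(ca_status)", data=df).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    anova["pct_variance"] = 100 * anova["sum_sq"] / anova["sum_sq"].sum()
    return anova

# Usage: run separately for controls and PC cases, as in Table 2, e.g.
# analyse_biomarker(df[df["group"] == "case"], "nbuds_radiation")
```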
Effect of high calcium concentration on DNA damage biomarkers at baseline and after 3 Gy radiation challenge in control and PC patients

Baseline MN frequency was significantly different in PC cases with high plasma calcium levels compared to the corresponding controls (P = .02; Fig. 2A). Radiation-induced MN frequency was significantly higher in PC cases than in controls among subjects with high and low calcium levels (P = .001 and P = .05, respectively; Fig. 2B).

The baseline frequency of NPBs was significantly higher in PC cases with either high or low calcium levels compared to controls (P = .001 and P = .01, respectively; Fig. 2C). Radiation-induced NPBs were only marginally higher in PC cases compared to controls, irrespective of plasma calcium status.

The baseline frequency of NBuds was marginally higher in PC cases compared to controls, irrespective of plasma calcium concentration (Fig. 2E). However, the radiation-induced frequency of NBuds was significantly higher in PC cases compared to controls with either high or low calcium status (P = .0001 and P = .02, respectively; Fig. 2F). Generally, high calcium concentration was associated with increased DNA damage biomarkers in PC cases relative to controls.

DNA damage biomarkers at baseline and after 3 Gy radiation challenge in control and PC patients high in both iron and calcium relative to those low in both iron and calcium

Baseline MN frequency was significantly higher in both controls (P = .002) and PC patients (P < .0001) with high iron and calcium levels compared to the corresponding subjects with low iron and calcium concentrations (Fig. 3A). In addition, baseline MN frequency was significantly higher in PC patients with high iron and calcium levels compared to the corresponding controls with high iron and calcium (P = .03; Fig. 3A). Similarly, radiation-induced MN frequency was significantly higher in controls (P = .001) and PC cases (P = .003) with high iron and calcium levels compared to the corresponding subjects with low iron and calcium levels (Fig. 3B). Similar results were observed with regard to the baseline frequency of NPBs in PC cases and controls with high concentrations of both iron and calcium relative to the corresponding subjects with low iron and low calcium (P = .002 and P = .0002, respectively; Fig. 3C). However, the radiation-induced frequency of NPBs was not significantly different between the two groups irrespective of iron and calcium concentrations, though it was marginally higher in PC cases compared to controls (Fig. 3D). The baseline frequency of NBuds was not significantly different in controls irrespective of iron and calcium levels (Fig. 3E); however, it was significantly higher in PC cases with high iron and calcium levels relative to PC cases with low iron and calcium (P = .01; Fig. 3E). The radiation-induced frequency of NBuds was significantly higher in PC cases compared to controls when iron and calcium concentrations were high (P < .0001; Fig. 3F). Similarly, the radiation-induced frequency of NBuds was significantly higher in PC cases compared to controls irrespective of iron and calcium levels (P < .0001; Fig. 3F). High levels of both iron and calcium were associated with higher DNA damage biomarkers in PC patients compared to controls.
Interactive effects of iron and calcium status in inducing DNA damage, analysed separately in controls and PC patients

In a further analysis, we tested whether the interactive effects of iron and calcium on DNA damage biomarkers were different in PC patients relative to controls. No significant interaction effect of iron and calcium was observed for the baseline and radiation-induced frequencies of MN and NPBs (Table 2) in healthy controls and PC cases. However, we found a significant interaction of these minerals for the frequency of radiation-induced NBuds (% variance explained = 3.66; P = .03) in healthy controls. In contrast, an interactive effect of iron and calcium on both the baseline and radiation-induced frequency of NBuds was observed in PC cases (% variance explained = 4.21, P = .02, and % variance explained = 7.12, P = .01, respectively; Table 2). Overall, the interactive effect of iron and calcium was more pronounced in PC cases compared to healthy controls.

Our results indicate that high iron and calcium status in blood plays an important role in aggravating both baseline DNA damage and DNA damage induced by a radiation challenge, and that this is significantly more pronounced in PC cases compared to healthy controls. Furthermore, the % variance explained by iron appears to be stronger than the % variance explained by calcium, especially in the case of NBuds and radiation-induced DNA damage.

Discussion

Cancer cells undergo unregulated and uncontrolled growth and proliferation, invading normal tissues, and their evolution is a multi-step process [29,30]. It has also been established that cancer risk increases with higher genomic instability [31,32]. The use of dietary supplements and functional foods for healthy well-being is continuously increasing and becoming popular [33]. The Western diet, comprising mainly red and processed meats and dairy products with a low intake of plant fibre, has been reported to be associated with increased PC risk [34], whereas the Mediterranean diet could protect against PC [35]. It is possible that high plasma iron and calcium is one of the resultant outcomes of consuming a Western diet. Therefore, it is important to determine whether high levels of these minerals might increase susceptibility to adverse outcomes after radiation exposure.
Lymphocytes have a tendency to accumulate DNA damage over their lifespans due to their close contact with different tissue microenvironments. Therefore, chromosome aberrations in these cells are considered to be valuable biomarkers of DNA damage and genomic instability, and predictors of cancer risk [36,37]. The CBMN assay endpoints (MN, NPBs, and NBuds) provide valuable information that reflects chromosomal breakage, chromosome rearrangements, and gene amplification ex vivo in cultured peripheral blood lymphocytes [24,38,39]. In the present study, it was found that cultured lymphocytes from PC patients with high iron and calcium, when exposed to 3 Gy ionizing radiation, show increased chromosomal instability, as indicated by increased frequencies of MN and NBuds compared to healthy controls. We have previously reported that radiation-induced MN and NBuds are significantly higher in PC patients with low selenium and lycopene [28]. Therefore, it can be hypothesized that PC patients who have a habitual diet low in selenium and lycopene, and/or high in iron and calcium, may be more prone to radiation-induced DNA damage. The present findings support the above hypothesis, pointing us in the direction of devising new dietary strategies targeting other mechanisms that are associated with the inhibition and progression of PC [40].

It has been reported that free iron generates hydroxyl radicals via the Fenton reaction in vivo and has thus been hypothesized to promote carcinogenesis through lipid peroxidation and DNA and protein oxidation [41,42]. Radiotherapy itself induces G1, S, and G2 arrest in a p53-dependent manner in response to the DNA damage induced by ionizing radiation and the production of free radicals that attack the DNA backbone [43]. Inflammation is induced by exposure to ionizing radiation, causing tissue damage due to the generation of ROS, with ROS levels remaining high due to damaged mitochondria and activated NADPH oxidases [44]. ROS are released from mitochondria via the classic ATM-p53-Bax DNA damage response (DDR) mechanism [45]. Furthermore, higher dairy protein and dairy calcium intakes have been associated with higher concentrations of insulin-like growth factor (IGF-1) in the EPIC study [46,47]. Therefore, it is highly likely that IGF-1 and its binding proteins may represent a possible causal mechanism linking higher plasma concentrations of nutrients rich in meat and dairy food with an increased risk of PC and its progression [48,49]. Our results add important weight to the evidence implicating calcium-rich foods, such as dairy products, as a possible risk factor for PC. Therefore, it is possible that these mechanisms have the potential to further accelerate DNA damage, as seen when plasma iron and calcium are high. It has been shown that toll-like receptors, RIG-I (RNA), and cGAS/cGAMP/STING (DNA) sensors connect the DDR to pro-inflammatory responses through the NF-kB and TBK1/IRF3 pathways to activate feedback loops for senescence and cell death [50]. Radiation-induced micronuclei are major sources of cytoplasmic DNA and activate the cGAS/cGAMP/STING pro-inflammatory pathway [51].
To further understand how high plasma levels of these minerals, either alone or in combination, induced baseline or radiation-induced DNA damage, we then analysed whether the interactions of these minerals could be responsible for the increase in DNA damage. Our results provided some evidence of their interactive effects in inducing DNA damage; however, the % variance attributed to the interaction of iron and calcium was only 7% and affected only NBuds, and the effect of iron alone was greater than the interaction effects. It may be that higher levels of iron increase intracellular calcium levels through mitochondrial dysfunction, thus increasing ROS, which subsequently can induce high DNA damage. Redox-active iron is a net producer of the hydroxyl radical via the Fenton reaction [52], which promotes the production of hydroxyl radicals at the expense of oxygen and GSH consumption [53]. ROS induce glutathione deficiency and promote cell death by inducing lipid peroxidation, DNA damage, and mitochondrial depolarization [54]. Increased levels of cellular ROS modify the function of many important proteins involved in calcium signalling and homeostasis [55]. Hypoxia is a hallmark of the tumour microenvironment that allows cancer cells to survive in a harsh environment [56]. Hypoxia-inducible factor-1 (HIF-1) is a transcription factor whose overexpression has been reported in cancers [57,58]. The calcium ion (Ca2+) is a reactive intracellular messenger that plays an important role in many of the hallmarks of cancer, such as proliferation, apoptosis resistance, angiogenesis and metastasis [59]. Mitochondrial dysfunction promotes increased ROS and activation of IRP1, which results in increased iron accumulation [60]. Excess iron promotes lipid peroxidation by hydroxyl radicals derived from the Fenton reaction and thereby modifies the activity of many proteins involved in calcium homeostasis, resulting in massive calcium influx [55]. Increased cytoplasmic calcium levels increase mitochondrial calcium, leading to mitochondrial dysfunction, oxidative stress, and damage. If uncontrolled, this ROS-Fe-Ca2+ cycle becomes deleterious to mitochondrial function, leading to cell death [61]. Furthermore, high intracellular calcium increases the rate of HIF-1 proteasomal degradation, also leading to increased ROS levels [62]. It is possible that an increased rate of HIF-1 proteasomal degradation may lead to DNA damage, as shown by the increased DNA damage biomarkers of the CBMN assay ex vivo and after in vitro exposure to ionizing radiation (Fig. 4). Furthermore, it has already been shown that chemical treatments can influence the expression of HIF-1 and DNA damage in cancer cell lines, as indicated by altered MN frequency [63,64].

In conclusion, our results provide important evidence suggesting that PC patients are more prone to DNA damage induced by radiation exposure because of high plasma levels of iron and calcium. These findings provide valuable information for future studies that test the feasibility of reducing DNA damage.

Table 1. Comparison of prostate cases and controls by selected demographic and clinical variables. *Chi-square test. Values given in brackets represent the range.

Table 2. Percentage of variance explained by iron, calcium, and their interaction, analysed separately in healthy controls and PC patients.
2023-10-03T06:16:48.273Z
2023-10-02T00:00:00.000
{ "year": 2023, "sha1": "c25989b3a8b4a78c789da20285298f49c2a17a97", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/mutage/advance-article-pdf/doi/10.1093/mutage/gead029/51824421/gead029.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4522a25c8d4cbb64488e341298e27d1c6c797c29", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
213952094
pes2o/s2orc
v3-fos-license
Interdependence Between the Belgrade Stock Exchange Development and Serbia's Economic Growth

Abstract

The goal of this research is to conduct an empirical test to determine the interdependence between the Belgrade Stock Exchange development and Serbia's economic growth. The authors of this paper used quarterly data on the market capitalization and turnover of shares achieved at the Belgrade Stock Exchange, as well as quarterly data on Serbia's gross domestic product between 2003 and 2018. The Johansen (1991) co-integration test was used to test the long-term interconnection of the variables, whereas the Granger causality test was used for the causality analysis. The test results primarily show that the development of a capital market is of significance for economic growth, which implies that a liquid stock exchange may serve as a reliable indicator of long-term growth in economic activity.

Introduction

The main role of a capital market is reflected in savings collection and the provision of funds to companies which need to invest them in their manufacturing equipment and new plants (Baldwin & Wyplosz, 2010, 548). By stimulating accumulation, capital markets play an important role in stimulating long-term economic growth. In the event of a need for additional funds, companies may acquire capital by borrowing from commercial banks, or they may acquire it by selling their bonds and/or shares on the capital market. The financial systems of a multitude of countries are dominated by banks, at least those with a dominant role in loan provision and savings collection. However, even though there are numerous examples of successful bank-centric financial systems (e.g. China and Germany), modern conditions dictate a significant development of market-dominated systems. This means that issuers sell their securities on an open market and so acquire the required funds. The UK and US financial systems are the finest examples of direct financing.

The capital market in Serbia is still underdeveloped, which is why companies mainly rely on bank loans. Private companies in the observed area are mostly unwilling to "open", which would lead to the diversification of their ownership structures (Marinković, Ljumović & Stojković, 2012). At the end of 2018, the first initial public offering in Serbia in 78 years was launched (Fintel energija a.d.). Also, mistrust in the stock exchange and the unwillingness of individuals to make more considerable investments in shares quoted on the Belgrade Stock Exchange prevail in Serbia. By comparison, in the USA households (individuals) are dominant with regard to the purchase of shares as opposed to other investors (Rose & Marquis, 2011, 661).

The main goal of this paper is an empirical test to determine the interdependence between a capital market and economic growth, illustrated by the example of the Republic of Serbia. The Serbian capital market has been experiencing decreasing turnover since 2008, as well as fewer public and investment companies. In terms of organization and regulation, developed market standards have been met, but a low level of investor trust remains the major issue. The Granger causality test was selected as a suitable methodological framework for the determination of the interdependence between the capital market and the real economy sector, and the Johansen test was applied for co-integration testing.
The paper is structured as follows: the introductory notes are followed by an overview of relevant studies dealing with this problem, then by a more detailed description of the data and research methodology, after which the empirical results are presented. Finally, the concluding considerations provide recommendations for economic policy makers in Serbia. Literature Overview The link between stock market development and economic growth has been attracting the attention of many researchers. Pan and Mishra (2018) drew an interesting conclusion that the global crisis had significantly affected China's real and financial sector. Furthermore, the authors emphasized that the Shanghai A market had a negative impact on economic growth, most likely due to irrational exuberance, i.e. economic bubbles in China's financial system. An empirical study covering 36 African countries established that the countries with stock markets developed faster than those without them (Ngare, Nyamongo & Misati, 2014). Owing to the stock market, companies can obtain the capital required for investments more quickly and under more favorable terms, which encourages economic growth. Caporale et al. (2004) used the example of 7 countries to illustrate that a well-developed capital market is a prerequisite for economic growth. Research on India likewise showed that its capital market had a considerable impact on its economic growth (Mishra et al., 2010). Conversely, Vazakidis & Adamopoulos (2009) found that economic growth also instigates stock exchange development in the long run. Similarly, Luintel and Khan (1999) studied a sample of ten developing countries and proved the existence of a correlation between stock exchange development and economic growth. Turkey served as an example of a co-integration relationship between capital market development indicators and economic growth (Coşkun et al., 2017). Adamopoulos (2010) reached a similar conclusion by studying the above relationship in Ireland. The research conducted by Levine and Zervos (1998) confirmed and statistically proved a strong relationship between initial stock exchange development and subsequent economic growth. In Malaysia, too, a positive and statistically significant long-run connection between stock exchange development and economic growth was determined (Nordin & Nordin, 2016). Data and Methodology Description The Belgrade Stock Exchange was founded in 1894, but it ceased operating for almost half a century. It stopped operating in 1941, but formally existed until 1953, when it was abolished by the Decision of the Presidium of the Serbian Government (Dugalić & Štimac, 2014). The analysis uses quarterly series of market capitalization (LCAP), share turnover (LTURNOVER) and gross domestic product (LGDP), expressed in logarithms, for the period Q3 2003 - Q1 2018. The traditional ADF test is used to determine whether a time series is stationary or not (Dickey & Fuller, 1981). The ADF test starts with H0: the time series has a unit root (is non-stationary). The Johansen co-integration test is used to identify a long-term interconnection of the variables. The basic preconditions for the use of the Johansen (1991) test are that the variables are non-stationary in levels and stationary upon conversion into first differences. Trace statistics and maximum eigenvalue statistics are used to establish the co-integration rank. The interdependence between the Belgrade Stock Exchange development and Serbia's economic growth is analyzed by means of the Granger causality test (Granger, 1969).
This method tests the causal relation between two variables. If the past values of variable y significantly contribute to the prediction of variable x, then y Granger-causes x. Conversely, if the past values of variable x statistically improve the prediction of variable y, then x Granger-causes y. The test is based on the following regressions:

$$y_t = \alpha_0 + \sum_{l=1}^{L} \alpha_l y_{t-l} + \sum_{k=1}^{K} \beta_k x_{t-k} + \varepsilon_t \quad (1)$$

$$x_t = \gamma_0 + \sum_{k=1}^{K} \gamma_k x_{t-k} + \sum_{l=1}^{L} \delta_l y_{t-l} + u_t \quad (2)$$

where $x_t$ and $y_t$ are the two variables, $\varepsilon_t$ and $u_t$ are mutually uncorrelated error terms, t denotes a time period, and k and l denote the numbers of lags. The null hypothesis is $\beta_k = 0$ for each k and $\delta_l = 0$ for each l, against the alternative hypothesis that $\beta_k \neq 0$ and $\delta_l \neq 0$ for at least some k and l. If the coefficients $\beta_k$ are statistically significant and the $\delta_l$ are not, then x causes y. In the opposite case y causes x. If both sets of coefficients are significant, then the causality is mutual. To test the hypothesis, the F-test is applied:

$$F = \frac{(RSS_R - RSS_U)/l}{RSS_U/(T - 2l - 1)} \quad (3)$$

where $RSS_R$ is the restricted residual sum of squares, $RSS_U$ is the unrestricted residual sum of squares, T is the number of observations, l is the number of lags, and $T - 2l - 1$ is the number of degrees of freedom. The joint null hypothesis in each equation is that the coefficients on the lags of the other variable are all zero: x does not Granger-cause y in the first regression, and y does not Granger-cause x in the second regression (Granger, 1969). The Granger test therefore analyzes the null hypothesis (H0) that there is no causal relation. If H0 is rejected at a given level of statistical significance, it may be concluded that the tested direction is characterized by causality. The test is then conducted in the opposite direction to establish whether there is causality between the two variables in that direction as well, so for each pair of variables two null hypotheses are tested. The test results are sensitive to the chosen time lags, therefore the procedure must be repeated as many times as necessary to find an appropriate lag length. Empirical Results To ensure the robustness of the results, the empirical analysis starts by testing for a unit root in the relevant variables. Stationarity testing is essential prior to the application of causality or co-integration tests. The ADF test results are shown in Table 1. According to the results, the variables possess a unit root, which means that they are not stationary in levels. Upon conversion into first differences they become stationary. Consequently, the order of integration of LGDP, LTURNOVER and LCAP is I(1), which satisfies the prerequisite for conducting the Johansen co-integration test. Table 2 shows the Johansen co-integration test results. Since the lag length largely determines the co-integration test results, particular attention was paid to selecting its optimal value. The optimal lag k is selected based on the formulation of a VAR model characterized by the absence of autocorrelation and heteroscedasticity and by residuals whose distribution tends towards normality. Based on the VAR model results, the appropriate lag in the Johansen methodology corresponds to k-1; the optimal lag length is 3 periods (4 in the VAR model). The results of the trace statistics and Max-Eigen statistics unequivocally show the presence of one co-integration vector between the relevant variables. The results of the VECM Granger causality test are shown in Table 3. Two causal relations between the variables are established.
The observed causality is bidirectional, which implies that changes in market capitalization lead to changes in economic growth and that changes in economic growth lead to changes in market capitalization. Concluding Considerations In this paper the authors examine the connection between economic growth, market capitalization and share turnover in the Republic of Serbia. The observed time horizon is Q3 2003 - Q1 2018. The co-integration test and the corresponding VECM Granger causality test were used to determine the causality. The co-integration test results prove that the variables have a long-term connection, meaning they share a common stochastic trend. In addition, the Granger causality test results show a bilateral causality between market capitalization trends and gross domestic product. However, market liquidity is much more important for economic growth and for the development of a capital market than market size. A liquid stock exchange may be a reliable indicator of future long-term economic growth: it ensures the necessary liquidity for investors, provides an exit mechanism for venture capital, allows companies to obtain the necessary capital, and provides information about the quality of potential investments. Nevertheless, in the case of Serbia no significant connection has been established between the achieved share turnover and economic growth. In this respect, it may be concluded that the Belgrade Stock Exchange is not yet an essential factor in the development of the Serbian economy.
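To make the testing sequence described in the methodology section concrete, the following is a minimal Python sketch of the ADF-Johansen-Granger pipeline using statsmodels. The CSV file name and column names are illustrative assumptions, not part of the original study, and the lag choices simply mirror those reported above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical quarterly series; file and column names are placeholders.
df = pd.read_csv("serbia_quarterly.csv", parse_dates=["quarter"], index_col="quarter")
data = np.log(df[["gdp", "turnover", "market_cap"]])  # LGDP, LTURNOVER, LCAP

# 1) ADF unit-root tests in levels and in first differences.
for col in data.columns:
    for series, label in [(data[col], "level"), (data[col].diff().dropna(), "1st diff")]:
        stat, pvalue = adfuller(series)[:2]
        print(f"ADF {col} ({label}): stat={stat:.3f}, p={pvalue:.3f}")

# 2) Johansen co-integration test; k_ar_diff = VAR lag - 1 (here 3, as in the paper).
jres = coint_johansen(data, det_order=0, k_ar_diff=3)
print("Trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# 3) Pairwise Granger causality F-tests on the differenced series.
grangercausalitytests(data[["gdp", "market_cap"]].diff().dropna(), maxlag=4)
```

In practice the causality tests for an I(1), co-integrated system are run within a VECM, as the authors did; the pairwise test above is only the simplest illustration of the F-statistic in Eq. (3).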
2020-01-30T09:04:48.072Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "bafa6f2651a28b4dff77704b44deae01d8cddd0f", "oa_license": "CCBYSA", "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-0373/2019/0350-03731904063S.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "48462884a44338c5160deea9a1499695a005d877", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
233455882
pes2o/s2orc
v3-fos-license
Risk Factors for Revision After Early and Delayed Total Hip Arthroplasty Dislocation. An Analysis of the Lithuanian Arthroplasty Register Introduction: Despite its relatively low incidence, dislocation remains one of the main reasons for total hip arthroplasty (THA) revision. It is a devastating complication for patient and surgeon alike, and places a high burden on the healthcare system. The aim of the present study was to assess and compare the risk factors for revision after early and delayed THA dislocations. Methods: A total of 3403 THAs performed through the posterior approach for primary osteoarthritis were retrospectively studied in the Lithuanian Arthroplasty Register from 2011 to 2018. Three months after THA was the splitting time between the first event of early and delayed dislocation. Revision was set as the outcome measure. Gender, affected side, number of dislocations, femoral head and neck size, and prosthesis fixation type were tested as risk factors for revision after early and delayed THA dislocations. Results: Dislocation occurred in 108 patients (3.2%), and 26 cases (0.8%) required revision. Men had a statistically significantly higher risk of revision due to early dislocation [hazard ratio (HR) 4.7; 1.3-17.7 confidence interval (CI)] and a considerably lower risk of revision due to delayed dislocation (HR 0.5; 0.1-1.7 CI). Left-side THA carried twice the risk of the right side in the early setting (HR 2.1; 0.6-6.9 CI), which equalized after three months (HR 1.1; 0.4-3.1 CI). The 32 mm femoral head had a significantly lower risk in the early group compared to the 28 mm head (HR 0.3; 0.1-0.5 CI). A short head was associated with an increased risk of revision after early dislocation, although not statistically significantly. Prosthesis fixation type was not a risk factor for revision surgery after either early or delayed dislocation. Conclusion: A unique pattern of gender separation was found: men tend to undergo revision after early dislocation and women after delayed dislocation. In the early stage, additional precautions should be considered when 28 mm short metal heads are used. Introduction With an aging population and a growing demand for improved mobility and quality of life amid increasing cases of arthritis, joint replacement surgery is expected to become the most common elective surgical procedure in the coming decades [1]. In the United States, the number of patients with total joint replacement is reported to be similar to the number of patients with high-profile chronic diseases such as stroke or myocardial infarction, and the prevalence of total joint replacement is considerably higher than that of heart failure [1]. Dislocation in total hip arthroplasty (THA) is one of the most common reasons for revision, with an incidence from 0.3% to 10% [2][3]. It is the most common cause of revision in the United States and the second most common, after aseptic loosening, in Sweden and France [2,4]. A similar situation can be seen in Lithuania, where 66.2% of all revisions after THA are performed due to recurrent dislocations [5]. It is a devastating complication for patient and surgeon and places a high burden on the healthcare system [6]. Prevention of dislocation starts with thoughtful preoperative planning and assessment, surgical precision, and good postoperative care. However, about 60% of dislocated THAs will relapse and 50% will require revision surgery [7].
If the greater trochanter is not significantly displaced and there is no visible component malposition or failed closed reduction, revision surgery is considered only after two or even three dislocation episodes [8]. Risk factors for THA dislocation are well known and are classified as patient-, surgeon- and implant-related, but risk factors for revision after dislocation remain unknown. Therefore, the aim of the present study was to assess the risk factors for revision after early and delayed dislocations following THA. Materials And Methods Data were extracted from the Lithuanian Arthroplasty Register and covered the period from January 1, 2011 to December 31, 2018. A total of 5689 patients who underwent THA were retrospectively studied. All patients involved in the study underwent primary THA through the posterolateral approach (described by Moore) for primary arthrosis in a single institution [9]. Exclusion criteria were: revision THA, THA for femoral neck fracture, and stable THA. THAs through the direct anterior and direct lateral approaches were excluded because of the low sample size and the absence of dislocations. Patients who underwent surgery with implant heads of a rarely used diameter (24, 26, 30, and 40 mm) were not included in the study, nor were cases with a dual mobility or constrained cup. After excluding patients according to the above criteria, the final sample size was 108 patients (Figure 1). FIGURE 1: Flow diagram of patient exclusion and the final sample size. An experienced radiologist evaluated THA X-rays (in cases of revision, pre-revision X-rays) for the following criteria: cup anteversion and inclination, femoral offset difference, and leg length discrepancy (LLD). The patient demographics and radiological parameters of the study groups are presented in Table 1. The patients were normally distributed with respect to gender, age, operated side, and radiological parameters. In this study, the term 'revision' was defined as an open intervention in which the whole prosthesis was removed, an augmentation device was added, or one or more parts of the implant were exchanged. Dislocations after THA (first occurrence) were divided into early and delayed, with 90 days after THA as the splitting time between them [10]. Discussion The purpose of this paper was to estimate, using data from the Lithuanian Arthroplasty Register, whether revisions after early and delayed dislocations have the same risk factors: gender and femoral head size were found to be statistically significant factors separating revision risk after early and delayed dislocations. The overall dislocation rate in our study was 3.2%, which is comparable to other reports [2][3]. A similar dislocation rate was found in the study by Woolson et al., in which 10,500 THAs were performed, and the reported incidence in Italy ranges from 0.3% to 10% [11][12]. After adjusting for THA and patient characteristics, our analysis shows that male gender is related to a significantly higher risk of revision after early dislocation and a considerably lower risk of revision due to delayed dislocation after THA. In the literature we found little evidence about gender as a risk factor for revision after dislocation. An article by Hailer et al. stated that males have a higher risk of revision after dislocation following THA, but no distinction was made between early and delayed dislocations [4]. Recently, Rowan et al.
wrote that neither sex, simultaneous bilateral THA, nor restrictive postoperative precautions have an impact on dislocation rates after THA [13]. We could not find any literature on the risk factors for revision after early versus delayed dislocations, so this finding is unique. We also did not find any literature on the influence of the operated side on the risk of revision. Although we saw a tendency for the risk of revision after early dislocation on the left side to be twice as high as on the right side, this finding was not statistically significant. There is no literature on THA and the dominant leg, but there are some reports on muscle strength differences and their clinical implications for the dominant leg [14][15]. In our opinion, the impact of the dominant leg on total joint arthroplasty outcomes is a hypothesis for further studies. We chose a three-month period as the distinguishing point between early and delayed dislocations because dislocations usually occur within three months after THA: up to 70% of dislocations occur during the first month after surgery, or up to 66% during the first five weeks [16][17]. Dislocations that happen within 0-3 months of surgery usually occur due to patient factors, a deficiency of mature scar tissue, or tension in soft tissue, while delayed dislocations are most often caused by component malposition or polyethylene wear [10]. A study by Peters et al. shows that 93% of orthopedic departments in the Netherlands use patient restrictions following posterolateral approach THA [18]. In our clinic, the restriction period and rehabilitation process after posterolateral approach THA lasts for three months. Similar recommendations are described by Zahar et al.: after THA, rotation, flexion over 90° and adduction of the hip should be limited by a brace for six weeks; after that, each motion modality should be gradually increased, while internal rotation and adduction should still be avoided for three months after the operation. Therefore, dislocations that occurred after the end of the rehabilitation period were considered delayed [19]. Our finding that a head diameter of 32 mm is associated with a lower risk of revision after early dislocation compared to a 28 mm head is similar to the statements by Conroy et al. and Girard et al. that increasing head size reduces the risk of revision [20][21]. Furthermore, we only analyzed THAs done through the posterolateral approach: direct anterior and lateral approaches were excluded because of the absence of dislocations and the low sample number. Pedneault et al. found that attention to surgical technique with posterior capsular closure outweighs the importance of femoral head size in the posterolateral approach [22]. However, the Lithuanian Arthroplasty Register does not record whether the posterior capsule was reconstructed. For this reason, the question remains whether revision after early dislocation is associated with the smaller femoral head or with an unreconstructed posterior capsule. There are some potential limitations of this study that should be considered. First, it reflects a single institution's experience extracted from the National Arthroplasty Register, and it is questionable whether the results can be applied nationwide.
Second, in our analysis we could not adjust for variables such as patient BMI, activity level, comorbidities, neurological disability, prosthetic malposition, implant impingement, hip anatomy restoration, alcohol abuse and mental status, and therefore could not assess patient demand on the implant. These variables were not available from the Lithuanian Arthroplasty Register. Conclusions Gender separation was found: men tend to undergo revision after early dislocation and women after delayed dislocation. The risk of revision after early dislocation was twice as high when the left hip was operated on, although the clinical implication of this finding remains unclear. In the early stage, additional precautions should be considered when 28 mm short metal heads are used.
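The hazard ratios reported above are of the kind typically obtained from a Cox proportional hazards model, although the paper does not reproduce its statistical code. The following is a minimal, hypothetical sketch of such an analysis in Python with the lifelines library; the file name, column names and coding are all illustrative assumptions, not the register's actual schema.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table; every column name here is a placeholder.
df = pd.read_csv("tha_dislocations.csv")
# Assumed columns: time_days (time from first dislocation to revision or censoring),
# revised (1 = revision, 0 = censored), male, left_side, head_32mm, short_head,
# cemented (all 0/1), early (1 if first dislocation occurred within 90 days of THA).

covariates = ["time_days", "revised", "male", "left_side",
              "head_32mm", "short_head", "cemented"]

for flag, label in [(1, "early"), (0, "delayed")]:
    subset = df.loc[df["early"] == flag, covariates]
    cph = CoxPHFitter()
    cph.fit(subset, duration_col="time_days", event_col="revised")
    print(f"--- {label} dislocations ---")
    cph.print_summary()  # exp(coef) column gives hazard ratios with 95% CIs
```

Splitting the cohort at the 90-day mark and fitting separate models, as sketched here, mirrors the early/delayed comparison in the study design.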
2021-05-01T05:15:16.413Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "9fed63290ee6f9a29084d4e75cea545c98244a2c", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/52350-risk-factors-for-revision-after-early-and-delayed-total-hip-arthroplasty-dislocation-an-analysis-of-lithuanian-arthroplasty-register.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9fed63290ee6f9a29084d4e75cea545c98244a2c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
113829617
pes2o/s2orc
v3-fos-license
Experimental study of a rehabilitation solution that uses GFRP bars to replace the steel bars of reinforced concrete beams Abstract The corrosion of steel reinforcement drastically affects the long-term durability of many reinforced concrete (RC) structures in the world, especially those near the sea. When this problem is detected at an early stage, it is possible and important to repair the structure in order to restore its safety and avoid future hazards and more expensive interventions. The research work described in this paper is inspired by these cases, as it proposes a rehabilitation solution to replace the tension steel reinforcement of an RC beam with GFRP bars, a material immune to corrosion. The experimental study consisted of six full-scale RC beams subjected to a three-point bending test until failure. The specimens had stirrups without the bottom branch and were cast in two phases to simulate the replacement of the corroded and cracked bottom concrete. Two different GFRP reinforcement ratios were tested to assess the behaviour of the repaired beam in its service and ultimate states in comparison with the original beam with steel reinforcement. The results are presented and discussed in terms of flexural capacity, failure modes, deflection, crack pattern, mid-span crack width and reinforcement strains. It was concluded that the presented rehabilitation solution is easy to implement, can be designed according to general FRP design guidelines, and is able to restore the serviceability and ultimate limit states of the original RC beam. Introduction Structural rehabilitation is becoming increasingly important nowadays. The amount of deteriorated structures and the frequency and costs of rehabilitation interventions motivate the introduction of innovative materials and methods to rehabilitate structures. The service behaviour and ultimate performance of reinforced concrete (RC) are curtailed by the corrosion of steel reinforcement [1,2]. Corrosion of the reinforcement induced by chloride environments has a significant effect on the mechanical behaviour, and the loss of cross-sectional area and of bond strength of the reinforcement has a very important effect on the bending capacity [3]. Malumbela et al. [1] concluded that for a maximum mass loss of 1%, the flexural capacity was reduced by 0.7%. Currently, repairing, rehabilitating and strengthening solutions are being developed and tested using different materials and different layouts. Solutions with steel materials can have limited durability. As an alternative, Fibre Reinforced Polymers (FRPs) have been used because of their resistance to corrosion, high strength and light weight. The most common approach is the use of FRP sheets or laminates, externally bonded to restore the structural integrity in cases of theoretical reinforcement mass loss from 5% to 15%. Many experimental studies [4][5][6][7] indicate that, by optimizing the amount and the layout, bonded FRP sheets are suitable for balancing the strength recovery and that it is possible to restore the yield and ultimate capacity with the same or lower deflection than initially. To prevent delamination and debonding problems, Spadea and Bencardino [8] suggested that strengthening for flexure should be accompanied by strengthening for shear. Thus, the best layout of bonded FRP sheets as reinforcement is a combination of a bonded sheet on the tension side anchored by U-shaped sheets.
Several techniques are being developed to prestress FRP plates prior to bonding, which has already been proven to be an efficient solution [9]. However, these solutions may not be effective when applied to damaged beams with more than 50% mass loss of tensile steel, and it is emphasized that additional research is needed for cases where corrosion is severe and part of the reinforcement is missing. Moreover, epoxy-bonded FRPs have limitations when applied at high temperatures, because of the rapid deterioration of the properties of the polymer matrix [10]. The use of cement-based adhesives can be a solution for application on structures located in hot regions or where there is a high danger of fire [11]. Rehabilitation solutions using FRP bars are not so frequent. One of the reasons may be that design with FRP bars as reinforcement is still uncommon, although this material has been available on the market for over 15 years. Several factors, such as novelty, production costs, the low modulus of elasticity, the non-ductile behaviour, the different design philosophies, and the need to validate the behaviour, have been responsible for its low level of application. Several authors [2,12] suggest that the analytical procedures developed for the design of concrete reinforced with steel bars in terms of ultimate loads, deflection and crack width are not applicable to the design of concrete reinforced with FRP bars (FRP RC) due to the differences in mechanical properties. Additionally, the design of FRP RC is generally governed by serviceability. However, the majority of codes and guidelines developed so far [12,13] use the same equations developed for steel reinforced members, modified to account for the differences between the materials. Several authors [12][13][14][15] have studied the ultimate and service behaviour of FRP RC. Since the behaviour of FRP RC beams is bilinear until failure, with reduced stiffness after cracking, most guides and codes recommend flexural design for a compression failure, due to its less catastrophic mode [2]. This forces the design of over-reinforced cross-sections, providing a reduction in service load deflections and crack widths and lower FRP bar stresses. It is suggested that compression failures present better member deformability and a more gradual member failure than FRP rupture [15]. In serviceability, due to the lower modulus of elasticity of FRPs and to the different bonding properties, larger deflections and crack widths are expected than in steel RC beams. Several models and approaches for predicting deflections and crack width have been proposed, but some controversy remains. Several authors [16] reported that the deflections of FRP RC can be predicted with the original ACI 318 [17] formulas developed for steel reinforced concrete. On the other hand, other experimental analyses [18][19][20] pointed out that the modifications proposed in ACI 440.1R-06 [12] relative to ACI 318 are needed, achieving accurate predictions with this approach. Other studies [21] propose different methods. The findings of Yost et al. [22] and Toutanji and Saafi [14] suggest that the effective moment of inertia, used in the ACI 318 formula to predict the deflection, is overestimated, and that it is possible to establish a correlation between the degree of overestimation and the ratio between the reinforcement area and the balanced reinforcement area (ρ_f/ρ_fb): the higher the ratio ρ_f/ρ_fb, the lower the error in the effective moment of inertia value.
They also proposed alternative equations for the effective moment of inertia and for the deflection. The serviceability verification depends on bond and on the modulus of elasticity: a given equation can predict the behaviour well for one type of FRP bar but not for another of a different material or with a different surface [2,13,14]. Among the different fibres used to make FRPs, glass fibres are the most common as they are the least expensive. Furthermore, other studies [2] indicate the use of high strength concrete (HSC) to make better use of the FRPs' properties. Some experimental work on the near-surface-mounted (NSM) reinforcement technique has been done to rehabilitate concrete structures damaged by corrosion [23]. This technique consists in bonding FRP rods with epoxy resins into undamaged areas of the concrete cover. Results indicate that repaired beams can achieve the same ultimate capacity as the control beam, but with different failure modes [24] and a reduction in ductility in comparison with traditional RC beams. However, a significant disadvantage of this technique is that the placing of the NSM rods is highly dependent on the quality of the concrete cover, which is frequently damaged by steel corrosion. If this is the case, this solution cannot be applied. The issues listed in the preceding paragraphs justify the research described in this paper. Additionally, rehabilitation or repairing solutions using FRP sheets or textiles, or even the application of FRP bars with NSM, cannot be applied in many cases, such as when the reinforcement mass loss due to corrosion is high, when the concrete cover is extremely damaged, or when it is not possible to increase the depth of the section. As a consequence, the rehabilitation solution adopted in these cases tends to be the replacement of the corroded steel by new steel reinforcement. However, when the deterioration of the RC structure is due to steel corrosion, the replacement of this material by another that is immune to this problem, such as GFRP, is an additional guarantee of the long-term durability of the rehabilitation solution. The main objective of the present research is to simulate and assess the behaviour of a rehabilitation intervention using GFRP bars on RC beams in cases where the bottom steel reinforcement is so damaged by corrosion that it must be completely removed. An experimental campaign on six full-scale beams was carried out, in which the removal of the corroded steel reinforcement was simulated, and a rehabilitation method was developed and applied using GFRP bars. The rehabilitated beams were subjected to three-point bending tests until failure, and the load-deflection response was analysed and compared with a reference beam and with theoretical predictions. Experimental programme A total of six full-scale RC beams (one reference and five rehabilitated beams) were cast and tested under three-point bending until failure. The beams were designated according to their characteristics as reference or as rehabilitated with steel or GFRP reinforcement. The rehabilitated specimens were concreted in two phases at different dates to simulate the different material layers composed of the original and the rehabilitation concretes with different ages.
Several other procedures were also performed to simulate the real conditions when the corroded tension reinforcement has to be removed, such as: the use of closed stirrups without the bottom branch to simulate its total destruction by corrosion; roughening the tension surface of the beam to enhance the bonding between the different concrete layers (Fig. 1(a)); drilling the intersections with other structural elements (for example columns) to insert the new reinforcement; and filling the holes with resin in the anchorage zone at the ends of the longitudinal reinforcement (Fig. 1(b) and (c)). Concrete The concrete used for all test specimens was a self-compacting concrete (SCC) of class C30/37, and its composition is presented in Table 1. Table 2 shows the concrete properties. The 28-day strength of the concrete was determined by compression tests on three cubic samples with a 0.15 m edge and three cylinders 0.15 m in diameter and 0.30 m high. For additional information about the compressive strength, a compression test was made on each beam's testing day. The modulus of elasticity, E, was also experimentally determined, since it is particularly important in SCC because it varies with the lithological type of its aggregates. Although its value tends to increase with increasing compressive strength, this increase appears to be lower when fly ash and limestone elements are introduced [26]. At 28 days the expected value of the modulus of elasticity of C30/37 was at least 33 GPa [27], but the tested samples had a lower value. Steel reinforcement bars The steel grade of the reinforcement used in the reference beams and in the compression reinforcement of all beams was A500. In order to determine the mean values of the yield stress (f_y), the maximum tensile stress (f_t) and the modulus of elasticity (E), three samples of each diameter used were tested in pure tension according to standard NP EN ISO 6892-1:2012 [28]. Reinforcement bars of 12 mm and 16 mm were used as longitudinal reinforcement, and bars 8 mm in diameter were used as shear reinforcement in the form of stirrups. The results of the pure tension tests are presented in Table 3. GFRP bars The type and shape of the GFRP bars used in this experimental study are shown in Fig. 2. This reinforcement is a straight bar with a helically grooved surface to increase the bonding to the concrete and headed ends to enhance the anchorage capacity. To compare with the property values presented by the producer, three straight bar samples of each diameter, 12 mm and 25 mm, were tested in pure tension to determine the stress-strain relationship and the tensile strength (f_f). The properties of the GFRP bars according to the producer are presented in Table 4. The conic end heads are made of polymeric concrete [30] and are cast at the ends of the straight bars. Their geometry ensures minimal tensile splitting forces at the heads, which allows the bar to be positioned very close to the concrete surface while still developing its full design force. In addition, these heads reduce the required bonding length, l_b, of the straight bars [30]. The results of the pure tension tests are presented in Table 5. These values were in agreement with the values indicated by the producer for the 12 mm bars. For the 25 mm bars, the tensile strength values were lower than expected due to premature failure at the clamped ends.
Filling resin for the anchorage of the reinforcement in the rehabilitated beams The material used for the anchorage of the reinforcement in the rehabilitation solution was a two-component epoxy resin without shrinkage and with a bond strength to concrete higher than 3 MPa [32]. The experimental programme comprised: (1) 1 reference beam with conventional steel reinforcement (REF); (2) 1 rehabilitated beam with steel reinforcement (REHABSTEEL), expected to have similar behaviour to the reference beam; (3) 2 rehabilitated beams with GFRP reinforcement with the same load capacity as the reference beam (REHABGFRP1 A and B); (4) 2 rehabilitated beams with GFRP reinforcement with the same mid-span deflection as the reference beam (REHABGFRP2 A and B). The REF specimen is a non-deteriorated conventional steel RC beam with two bottom longitudinal 16 mm diameter bars (2Ø16), used to determine the reference behaviour and the values of load capacity and mid-span deflection to be reproduced by the other five rehabilitated beams. The REF beam was concreted in a single phase, and its stirrups had the conventional closed shape with four branches. The rehabilitated beams were concreted in two phases and their stirrups had only three branches, reproducing a real situation where the bottom branch had already been corroded. This procedure was adopted to evaluate the influence of the different concrete layers and of the absence of the stirrup bottom branch on the behaviour of the rehabilitated beams. The REHABSTEEL specimen is a rehabilitated beam with the same cross-section geometry and steel reinforcement as the REF beam and should therefore exhibit similar behaviour. The objective of this beam was to determine whether the rehabilitation solution, with the new concrete layer and the absence of the stirrup bottom branch, would affect the performance of the rehabilitated beam. The REHABGFRP1 A and B specimens are identical rehabilitated beams with three bottom GFRP longitudinal 12 mm diameter bars (3Ø12), designed to have the same ultimate load as the REF beam while keeping the cross-section geometry. Due to the lower modulus of elasticity of the GFRP material in comparison with steel, it is not possible to obtain the same deflection in these beams as in the REF beam. In fact, considering that the steel reinforcement area of the REF beam is 18.6% higher and its modulus of elasticity is 3.6 times higher, a significantly higher deflection is expected in the REHABGFRP1 beams. The most efficient way to design a rehabilitated beam with GFRP bars with the same load capacity and deflection as the reference beam is to increase the height of the cross-section. However, there are two main problems with this solution: it must be possible from an architectural point of view and, most importantly, it may introduce deficiencies in shear behaviour because the stirrups' vertical branches become too short, which could only be overcome by shear strengthening, and this would largely complicate the solution. Another possibility within the solution of increasing the height of the beam is to keep the longitudinal bars in the original position at the bottom of the stirrups' vertical branches, meaning that the rehabilitated beam would have a thicker concrete cover. Nevertheless, none of these possibilities were tested, because it was decided in this research work to keep the original geometry of the specimen, which was the easiest and best solution to avoid compromising the shear behaviour of the rehabilitated beam.
Because of this issue, the REHABGFRP2 A and B specimens are identical rehabilitated beams with the same cross-section geometry as the REF beam and five bottom GFRP longitudinal 25 mm diameter bars (5Ø25), designed to have the same service mid-span deflection. A schematic representation of the test set-up and the different cross-sections are shown in Fig. 3. Table 6 presents information about all the reinforcement in all beams, and a detailed description of the test instrumentation used is presented in Section 2.2.2. Due to the three-point bending test configuration and the 4.0 m span, the mid-span bending moment (in kNm) is numerically equal to the applied load (in kN), since M = PL/4 with L = 4.0 m. Test instrumentation The instrumentation used during the loading tests is indicated in Figs. 3 and 4. At each support there were two load cells (Fig. 5(a)) (LC1 and LC4; LC2 and LC3), each with a 200 kN load capacity, to monitor the reaction forces; the sum of their values gives the applied load. Two linear variable differential transducers (LVDT1 and LVDT2) (Fig. 5(b)) with a maximum capacity of 100 mm were installed on each beam at mid-span to measure the deflection. These sensors were placed on both sides and on top of the beams, as shown in Fig. 4, to prevent them from being damaged during the tests. The tension reinforcement bars were instrumented with six glued strain gauges (SG1 to SG6) to monitor the strains at mid-span and at each support (two at each location). According to the manufacturer, the strain gauges used are indicated for general use and are made of a single element, a copper-nickel alloy foil with a length of 5 mm. The maximum strain capacity is of the order of 21 ± 1‰, the gauge factor (GF) is 2.13 ± 1% and the resistance is 120 ± 0.3 Ω [33]. Production of the beam specimens and the rehabilitation procedure To simulate the real situation where the corroded steel tension reinforcement has to be removed, the beam specimens were concreted in two phases. The main steps of the production of the beam specimens and their rehabilitation procedure are presented and summarized in Fig. 6. The beams were produced at LREC in an upside-down position in order to facilitate the operations that simulate the rehabilitation procedure. In the first phase of their production the specimens were concreted from the top down to the tension reinforcement level. The support areas were fully concreted in this phase to simulate the intersection with an existing 0.30 × 0.25 m² column. The main stages of this phase are presented in Fig. 7. The formworks were filled with concrete until it reached the desired level, which was 0.10 m from the top for REHABSTEEL and REHABGFRP1 and 0.15 m for REHABGFRP2. These values were chosen to ensure that the new reinforcement would be properly covered by the rehabilitation concrete layer. Then moulds were positioned at the ends of the beams and these areas were filled with concrete to the top. These negatives were introduced in the specimens to avoid the drilling of the holes for the corroded reinforcement, which would have to be made in a real situation. The moulds were removed after the hardening of the concrete, the holes for the new reinforcement were enlarged, and the surface was pricked with a pneumatic hammer. The tension reinforcement bars were introduced and the holes were sealed with a two-component epoxy resin. After mixing the components, the resin was applied with a silicone-type gun to prevent voids.
After this, the rehabilitation layer was concreted in such a way as to ensure the proper cover of the reinforcement. These stages are presented in Fig. 8. Before testing, the beams were rotated to the correct position (Fig. 6). Test loading history All beam specimens were tested in a three-point bending test with a free span of 4.0 m, and the load was applied using a 1500 kN hydraulic actuator. The actuator, the load cells, the strain gauges and the LVDTs were connected to a data acquisition system to continuously monitor and record the values. The loading was force-controlled, and its history up to failure was divided into 5 kN steps to allow a beam inspection at the predicted load values of the different stages of the beam's behaviour: the cracking load and the mean and design values of the yielding and failure loads. Pictures were taken, and the development of the cracking pattern was visually observed and marked on the side of each beam. In each test, three complete unloadings were performed to assess the deflection recovery ability of the beam. The first was at 10 kN, and full recovery was predicted since at this value the beams were not expected to be cracked. The second and third unloadings were near the predicted values of the cracking and service loads, respectively, of each specimen. The cracking load was also determined from the loss of stiffness in the load-deflection relationships obtained from the tests. The service load of each beam was taken as 30% of its flexural capacity, as suggested by some researchers [2]. Theoretical background Before the tests, theoretical predictions of the cracking moment, failure moment and mid-span deflection values for each beam were made based on ACI 440.1R-06 [12], FIB 40 [13] and EC2 [27]. The equations used for the theoretical predictions are indicated in Table 7. The obtained values are presented in the tables within the experimental results in Section 3.2 to establish a comparison. Deflection at service load An actual beam needing rehabilitation due to corrosion of the steel reinforcement is already cracked, both by this problem and by the service load. However, in the present research work it was assumed that the steel corrosion only occurred at the bottom of the beam, which is usually the side most exposed to the aggressive environment, and therefore the consequent concrete delamination did not occur in the whole element. Since the removed tensile concrete height was 0.10 m or 0.15 m, which corresponds to 25% and 37.5% of the beam's total height respectively, it is supposed that all the cracked concrete is located in this area in the majority of the cases that are worth repairing. Therefore it is assumed that the repaired beam begins its performance in an uncracked state, and the cracking moment is the boundary between this state and the cracked state. Fig. 9 shows the ratio between the experimental and predicted values of the deflection at service load. The results of the deflection at service load are also presented in Table 8. Theoretical predictions of deflection were calculated using Equations (1) to (4), with the effective moment of inertia, I_e, modified by the factor β_d of ACI 440.1R-06 [12]. To predict the deflection of a steel RC beam, the ACI 318 [17] formulation was used.
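As an illustration of this prediction route, the following is a minimal Python sketch of a Branson-type effective moment of inertia with the ACI 440.1R-06 β_d reduction, combined with the elastic mid-span deflection of a simply supported beam under a central point load. All numerical inputs are placeholders, not the actual specimen values.

```python
def effective_inertia(M_a, M_cr, I_g, I_cr, beta_d):
    """Branson-type effective moment of inertia with the ACI 440.1R-06
    reduction: Ie = (Mcr/Ma)^3 * beta_d * Ig + [1 - (Mcr/Ma)^3] * Icr <= Ig."""
    if M_a <= M_cr:
        return I_g  # section still uncracked
    r = (M_cr / M_a) ** 3
    return min(r * beta_d * I_g + (1.0 - r) * I_cr, I_g)

def midspan_deflection_3pb(P, L, E_c, I_e):
    """Mid-span deflection under a central point load: P*L^3 / (48*E_c*I_e)."""
    return P * L ** 3 / (48.0 * E_c * I_e)

# Placeholder values (SI units: N, m, Pa), purely illustrative:
rho_f, rho_fb = 0.009, 0.003                 # reinforcement and balanced ratios
beta_d = min(0.2 * rho_f / rho_fb, 1.0)      # ACI 440.1R-06: (1/5)(rho_f/rho_fb) <= 1
I_e = effective_inertia(M_a=40e3, M_cr=15e3, I_g=1.6e-3, I_cr=4.0e-4, beta_d=beta_d)
delta = midspan_deflection_3pb(P=40e3, L=4.0, E_c=30e9, I_e=I_e)
print(f"I_e = {I_e:.2e} m^4, mid-span deflection = {delta * 1000:.2f} mm")
```

Note that β_d = 0.2·(0.009/0.003) = 0.6 in this sketch, so the uncracked contribution to the effective inertia is reduced accordingly; for a steel RC beam the plain ACI 318 expression (β_d = 1) would be recovered.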
It is possible to conclude that the main objective of the REHABGFRP2 beams was achieved, namely having the same deflection as the reference beam (REF) (5.00 mm), since the REHABGFRP2A beam presented almost the same value (5.15 mm) and the average of both beams was 5.84 mm. As expected, the average deflection of the REHABGFRP1 beams was 9.8 mm, almost twice that of the REF beam. Flexural capacity and failure modes The loading capacity of the rehabilitated beams with GFRP bars was also predicted according to ACI 440.1R-06 [12] and FIB 40 [13] (proposed modifications to EC2), and the results are presented in Table 9. The REHABGFRP beams were designed as over-reinforced, to fail by concrete crushing, whereas the REF and REHABSTEEL beams were designed as under-reinforced, to fail by steel yielding. Failure by concrete crushing is the usual design concept for concrete reinforced with FRP according to ACI 440.1R-06 [12]. Several authors [13,14,34] have studied the bending behaviour and observed that concrete crushing failure provides better energy absorption, better member deformability, a more gradual failure, lower deflections and crack widths, and a relatively more ductile failure. For the theoretical predictions, the reinforcement ratio,

$$\rho_f = \frac{A_f}{b\,d} \quad (5)$$

is first compared with the balanced reinforcement ratio, which in ACI 440.1R-06 [12] is given by

$$\rho_{fb} = 0.85\,\beta_1\,\frac{f'_c}{f_{fu}}\,\frac{E_f\,\varepsilon_{cu}}{E_f\,\varepsilon_{cu} + f_{fu}} \quad (6)$$

where β_1 is a stress-block coefficient obtained from [12]. Following the same principle, Pilakoutas et al. [35] derived a corresponding expression for the balanced reinforcement ratio from EC2 [27]. The balanced failure is the case in which the strains in the concrete and in the GFRP bars reach their limits simultaneously, and the balanced reinforcement ratio is the limit between compression and tension failure. For concrete crushing failure, ACI 440.1R-06 [12] calculates the moment capacity prediction (nominal moment), M_n, of a rectangular concrete cross-section as

$$M_n = \rho_f\,f_f\left(1 - 0.59\,\frac{\rho_f\,f_f}{f'_c}\right) b\,d^2 \quad (7)$$

where f_f is the FRP reinforcement tensile stress [MPa] obtained from

$$f_f = \sqrt{\frac{(E_f\,\varepsilon_{cu})^2}{4} + \frac{0.85\,\beta_1\,f'_c}{\rho_f}\,E_f\,\varepsilon_{cu}} - 0.5\,E_f\,\varepsilon_{cu} \le f_{fu} \quad (8)$$

These expressions, together with those for the cracking moment and deflection, are summarized in Table 7. FIB 40 [13] suggests that the ultimate moment resistance, M_u, can be obtained from Eq. (9), where ε_f is the FRP reinforcement tensile strain. The ratio of experimental to predicted values using FIB 40 [13] was non-conservative for REHABGFRP1B and for REHABGFRP2A (Fig. 10), whereas the ACI 440.1R-06 [12] predictions were conservative. One reason for the differences between the ACI 440.1R-06 [12] and FIB 40 [13] predictions is the fact that the ultimate concrete compression strain is taken as 3.0‰ and 3.5‰, respectively [2]. The ultimate loads from a design point of view, considering the safety factors, are shown in Table 9. In general, it is possible to conclude that the beams had an experimental ultimate capacity ranging from 1.2 to 1.5 times the design ultimate load value of ACI 440.1R-06 [12] and from 1.4 to 1.7 times the design ultimate load value of FIB 40 [13]. The crack propagation of the beams during the tests was marked on one of the faces and is reproduced in Fig. 11. All specimens presented bending failure modes; a detailed description is given in Table 10, complemented with test pictures shown in Figs. 12-17. The reference beam REF failed by the breaking of the bottom steel reinforcement at mid-span (Fig. 12). Several flexural cracks developed along the span (Fig. 11(a)).
Although these cracks were initially vertical, they started to incline towards the load-point as failure approached (Fig. 12(a)). The failure occurred after the two central cracks reached the load-point (Fig. 12(b)). The REHABSTEEL beam had a failure mode similar to the REF beam, with yielding of the bottom steel reinforcement in the mid-span region, but followed by crushing of the top concrete in compression (Fig. 13). The crack pattern was similar to the REF beam, differing in the fact that, close to mid-span, the cracks also developed horizontally at mid-height of the bottom concrete layer, suggesting some slip of the tension reinforcement (Fig. 11(b)). No visible slip or separation between the two concrete layers occurred in this beam (Fig. 13(b)-(d)). The REHABGFRP1A and B beams both had similar behaviour until failure. The cracks appeared along the entire span, propagating from the bottom concrete layer to the upper one in the direction of the load point. Horizontal cracks at the bottom reinforcement level started to appear as the load increased. A partial separation between the two concrete layers was also detected, which started at the mid-span zone and propagated towards the supports (Fig. 11(c) and (d)). In these two cases, the failure was caused by the crushing of the top concrete in compression at the load point, followed by the separation of the concrete layers, which caused some spalling of the bottom concrete layer (Figs. 14 and 15). The REHABGFRP2A and B beams also had a similar crack pattern and behaviour until failure. As the load increased, horizontal cracks at the tension reinforcement level and separation between the two concrete layers were detected. These two phenomena started at the mid-span zone, progressed towards the supports, and contributed to the failure (Fig. 11(e) and (f)). The failure occurred mainly due to the slip of the tension GFRP bars at the support areas. Debonding and spalling of the new concrete were also observed at mid-span and at the supports (Figs. 16 and 17). Despite the global slip between the reinforcement and the concrete at failure, no visible debonding between the GFRP bars and the resin was found in the support areas: the resin stayed bonded to the reinforcement and slipped from the concrete (Fig. 16(d)). A failure due to the slip of the tension reinforcement highlights a lack of anchorage length. However, it is important to mention that although the failure occurred due to the slip of the tension GFRP bars at the supports, it did not compromise the desired behaviour and bending capacity. These two beams supported a load 2.5 times greater than the original beam, which means that, as a rehabilitation solution, they will never be subjected to this loading level, unless this solution is also used as a strengthening solution to increase the bending capacity. Up to the load level corresponding to the reference beam, the support areas were in perfect condition. This is shown in Fig. 11(e) and (f), where the colour scaling shows that the support areas only started to be affected at loads higher than 126 kN. This is also evident in Fig. 18, as the support strains only increased for loads above 150 kN. Tensile reinforcement strain Eq. (13) was used to estimate the bar strains at mid-span. This formula is obtained from the equilibrium equations of the cross-section, considering only the reinforcement contribution.
With a free span length of L = 4.0 m, the mid-span bending moment is numerically equal to the applied load, P, according to M = PL/4 = P (kNm, for P in kN). In order to study the bottom longitudinal reinforcement strains at the supports and at mid-span, two strain gauges were glued to two bars at these positions on each beam. The load-strain relationships at mid-span and at the supports are presented in Figs. 18 and 19. The curves present the mean strains of the two strain gauges at each monitored cross-section. The tensile strains at the mid-span of the steel reinforced beams (REF and REHABSTEEL) developed in three stages until failure: the elastic, the cracked and the yielded stages. In contrast, the GFRP reinforced beams only presented the elastic and the cracked stages. At maximum load capacity, the mean strains at mid-span for REHABGFRP1A and REHABGFRP1B were, respectively, 12.28‰ and 12.48‰; the theoretical strains were 12.51‰ and 13.41‰, respectively. Although for REHABGFRP2A and REHABGFRP2B one strain gauge was glued on the centre rebar and the other on an outside rebar, the strain values were similar, and the mean maximum value at maximum load capacity was 3.75‰ for REHABGFRP2A and 4.35‰ for REHABGFRP2B; the theoretical strains were 3.66‰ and 4.16‰, respectively. Comparing the REHABGFRP1A and B and REHABGFRP2A and B relationships, it is possible to see that increasing the GFRP reinforcement ratio reduces the reinforcement strains. Small differences between the measured and predicted behaviour can be explained by the fact that measured strains are point values from a specific location on the reinforcement: if the strains are measured over a crack, the values are affected by the "localized" increase in the reinforcement strain. At the supports of the steel reinforced beams, the maximum strain was 0.12‰ at maximum load capacity. For REHABGFRP1A and REHABGFRP1B the maximum strain value was 0.15‰. For REHABGFRP2A and REHABGFRP2B, from a load of 150 kN the strain values increased faster, which is noticeable by the change in the slope of the curve. Although this change graphically gives the curve a yielding appearance, it corresponds to the detachment of the concrete near the supports. It is important to state that this load level corresponds to the short-term load (F_head) which, according to the manufacturer, can be anchored by the end heads [30]. The maximum strain value was 3.35‰. The high level of strains at the supports shows an insufficient anchorage length of the bars, resulting in the failure of the support zones of these two beams when the concrete is not able to bond the high force developed at the end of the GFRP bars. The mean reinforcement strain at the supports on the side of the beam that failed is indicated in Table 11. From these values the total anchorage load (F) was calculated. Subtracting from this load the short-term load (F_head) that can be anchored by the end heads, and considering the bond strength given by the manufacturer, the anchorage length needed to prevent the failure was calculated as 0.32 m. Deflection behaviour The load-deflection curves at mid-span are shown in Figs. 20 and 21. Fig. 20 shows a comparison between the four groups. Each curve in Fig. 21 represents the average deflection obtained from the two LVDTs mounted at mid-span of each beam. These curves allow the evaluation of the flexural stiffness at the various stages of the beams until failure during the tests.
The steel reinforced beams, REF and REHABSTEEL, presented similar behaviour, which can be summarized in three stages: an elastic first phase, in which the relationship between the load and the mid-span displacement was linear; a cracked phase, in which the load-displacement relation was approximately linear but with a lower slope; and a third, steel-yielding phase, characterized by a rapid increase in the deflection until failure (Fig. 21(a)). Although the two curves were similar, REHABSTEEL had a slightly lower flexural stiffness after the first loading cycle, a lower yielding point and a 4.5% lower maximum load capacity. This difference can be explained by the existence of a new layer of concrete and a possible slip between the layers. The failure mode is also shown in Fig. 11. The development of the load-displacement curves at mid-span of the REHABGFRP1A and REHABGFRP1B beams was different from the reference beams, with only two distinct and approximately linear phases and no ductile behaviour. In the first, elastic stage, the relationship between the load and the deflection was linear. Then the slope of the curve decreased as a consequence of cracking, but the relationship remained approximately linear until failure. The REHABGFRP2A and REHABGFRP2B beams behaved identically to the REHABGFRP1 beams; the major difference was that the transition point between the two stages was not so easily identified, since this transition was progressive, giving a non-linear aspect to the curves. From Fig. 20, it is possible to notice qualitatively that a change in the reinforcement ratio changes the load-displacement behaviour: the lower the GFRP ratio, the higher the mid-span deflection. The values of the deflections at service load are shown in Table 8. Although the deflection of the REHABGFRP2 beams was higher than that of the REF beam, as expected, the difference between the two values is lower than 10.0%, with similar deflection values for the two groups, suggesting the adequacy of the prediction formulas used in both cases [12]. Although the reinforcement ratio and the ultimate capacity were close between the REF and REHABGFRP1 groups, the GFRP reinforced beams exhibited a 1.86 times higher deflection at service load. These differences are due to the lower modulus of elasticity of GFRP compared to steel (about one third). ACI 440.1R-06 [12] underestimates the mid-span deflections: for REHABGFRP1 the experimental deflections were on average 55.0% higher than predicted, and for REHABGFRP2 34.5% higher. Although this is a short-term deflection, assuming that the long-term deflection is three times this value and comparing it with the service limit of span/240, which corresponds to 16.7 mm, only the reference beams and REHABGFRP2 satisfied the limit. This shows that serviceability can govern over ultimate limit states when designing RC structures with GFRP reinforcement. Deflections at the ultimate load-carrying capacity were also measured (Table 9). REF had the highest ultimate deflection, as expected due to the ductility of the steel reinforcement, followed by the REHABGFRP1 group and then the REHABGFRP2 group, both without ductile behaviour. Crack development In order to analyse the differences in crack spacing, width and length until failure, the crack development was marked on the beams with a different colour after each load step. The crack pattern of all beams until failure is reproduced in Fig. 11.
Deflections at ultimate load-carrying capacity were also measured (Table 9). The REF beam had the highest ultimate deflection, as expected due to the ductile behaviour of the steel reinforcement, followed by the REHABGFRP1 group and then the REHABGFRP2 group, both with no ductile behaviour. Crack development In order to analyse the differences in the distance between cracks and in their width and length until failure, the crack development was marked on the beams with different colours after each load step. The crack pattern of all beams until failure is reproduced in Fig. 11. The beams were initially uncracked before testing. The flexural cracks started to appear after reaching the cracking load. In general, the first cracks were vertical and developed close to the mid-span, where the bending moment had its maximum value. As the load progressively increased, cracks appeared along the entire span, starting vertically or at a slight angle and then turning towards the load point. All cracks increased in width and length until the failure of the beam. Looking at Fig. 11, the increase in the GFRP reinforcement ratio, ρ_f, resulted in a higher number of cracks and reduced the crack spacing. Similar behaviour was also reported by El-Nemr et al. [2]. The mean crack spacing for REF was 0.172 m. For the rehabilitated beams, the distance between the cracks was different in the two concrete layers, with more cracks in the bottom layer (Table 12). REHABGFRP1A, REHABGFRP1B and REHABSTEEL had similar mean crack spacings. The crack spacing in the original concrete layer was approximately two times the spacing in the rehabilitation layer. In REHABGFRP2A and B there was a better propagation of the cracks from the second layer to the first layer, since the crack spacing between layers was small. Another important piece of information that can be extracted from Fig. 11 is the distance between the top compression fibre and the top of the cracks. At failure, due to the high curvature of the mid-span cross-sections, this measure is approximately the position of the neutral axis or, in other words, the height of the compression zone. The results are shown in Table 12 and were determined by measuring the height of all the mid-span flexural cracks and subtracting their mean value from the height of the section. Comparing REF and REHABSTEEL, the existence of two concrete layers caused a reduction in the height of the compression zone; this means that there was a rise of the neutral axis in the rehabilitated beams. REHABSTEEL, REHABGFRP1A and REHABGFRP1B had similar distances. REHABGFRP2A and REHABGFRP2B also had similar values, which are two times higher than those of REHABGFRP1A and REHABGFRP1B. It is also important to mention that these values were similar to the predictions made with the ACI 440.1R-06 [12] formulation, which were 0.050 m for REHABGFRP1 and 0.092 m for REHABGFRP2. Conclusions This study proposed, tested and evaluated an efficient and easy-to-implement rehabilitation procedure that uses GFRP bars to replace the tension steel bars of deteriorated reinforced concrete beams. It is an ideal technique to repair and improve the long-term durability of existing marine steel reinforced concrete structures with corrosion problems. The conclusions are based on the results of an experimental campaign performed with full-scale reinforced concrete beam specimens cast in two phases to simulate the replacement of the corroded and cracked concrete. Two different GFRP reinforcement ratios were tested in order to assess the behaviour of the repaired beam regarding its service and ultimate states in comparison with the original beam with steel reinforcement. The main findings of this research can be summarized as follows: 1-Although a new concrete layer with a more flexible tensile reinforcement was introduced in the rehabilitated specimens, the construction joint was not the cause of failure and did not compromise the serviceability and ultimate limit states of the beams.
2-Good result predictions were obtained with the formulas of EC2 [36]/FIB40 [13] and ACI 440.1R-06 [12], which indicates that these documents can be used to design this solution. 3-The absence of the stirrups' bottom branch, due to possible corrosion, did not compromise the shear behaviour of the rehabilitated beams. 4-The criterion of designing a rehabilitated beam with the same load capacity or with the same deflection at service load as the reference RC beam with conventional steel reinforcement was satisfied. One of the proposed repair solutions was able to keep both the deflection and the ultimate load capacity of the original beam. 5-The rehabilitated beams with GFRP bars exhibited a bilinear load-deflection behaviour until failure, as expected, since the ductile performance of the reference beam with steel reinforcement cannot be replicated due to the linear elastic behaviour of the GFRP material up to failure. 6-The conical heads at the end of the GFRP bars, inserted in the concrete holes filled with epoxy resin, were sufficient to ensure their anchorage at the ends of the beams.
2019-04-15T13:06:58.730Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "2378cc347b1ce7d612bce00a81c13d82948859d9", "oa_license": "CCBY", "oa_url": "https://digituma.uma.pt/bitstream/10400.13/3468/1/Experimental%20study%20of%20a%20rehabilitation%20solution%20that%20uses%20GFRP%20bars%20to%20replace%20the%20steel%20bars%20of%20reinforced%20concrete%20beams.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "a84a1356e6442449587a060309d3135be518ea95", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Engineering" ] }
118321668
pes2o/s2orc
v3-fos-license
The anti-quark--quark potential from Bethe-Salpeter amplitudes on lattice Potentials of quark--anti-quark pairs are studied from the q̄-q Nambu-Bethe-Salpeter (NBS) wave functions in quenched lattice QCD. With the use of a method which has recently been developed in the derivation of nuclear forces from QCD, we derive the q̄-q potentials with finite quark masses from the NBS wave functions. We calculate the q̄-q NBS wave functions in the pseudo-scalar and vector channels for several quark masses. The derived potentials in both channels reveal a linear confinement plus Coulomb form. We also discuss the quark-mass and channel dependence of the q̄-q potentials. Introduction The inter-quark potential is the fundamental interaction in strongly interacting quark-gluon systems, which is governed by the complex dynamics of Quantum Chromodynamics (QCD). It is still difficult to analyze such low-energy phenomena of QCD in analytic ways, because the coupling constant becomes large at low energies and therefore perturbation theory is not applicable. Experimentally, the linear behavior of inter-quark potentials is suggested by the Regge slope [1], which shows the relation J ∝ M², with J the spin and M the mass of hadrons. A naive estimate of the relation between M and J is J = M²/(4σ), with σ the string tension, and the value of σ is about 1.3 GeV/fm from the Regge slope. The Coulomb force of inter-quark potentials is suggested by the analogy between quarkonium and positronium. In fact, a Coulomb plus linear confinement behavior of the inter-quark potential reproduces the quarkonium spectrum well in quark models. However, until now, there has been no rigorous proof of the emergence of the linear confinement potential. Lattice QCD simulation is a powerful tool for numerical investigation of the strong-coupling regime of the strong interaction. The inter-quark potential is one of the most actively studied subjects on the lattice. From the analyses of Wilson loops, the potential for an infinitely heavy quark and anti-quark (Q̄-Q potential) can be obtained. The Q̄-Q potential from lattice QCD simulations reveals the form V(r) = σr − A/r with σ = 0.89 GeV/fm and A = 0.26 [2], and one can take into account corrections coming from finite quark masses order by order with the use of heavy-quark effective field theories such as potential nonrelativistic QCD (pNRQCD) [2,3,4,5]. We study potentials between light quarks and anti-quarks (q̄-q potentials) in the pseudo-scalar and vector channels from lattice QCD simulations. In order to explore the q̄-q potentials, we apply the systematic method which utilizes the equal-time Nambu-Bethe-Salpeter (NBS) amplitudes to extract hadronic potentials [6,7,8,9,10,11,12,13] to systems containing relatively light quarks and anti-quarks. Due to the absence of asymptotic fields of quarks, the reduction formula cannot be applied directly. Therefore, we assume that the equal-time NBS amplitudes for the q̄-q systems satisfy the Bethe-Salpeter (BS) equation with constant quark masses, which could be considered as the constituent quark masses. By using the derivation of the relativistic three-dimensional formalism from the BS equation developed by Lévy, Klein and Macke (the LKM formalism) [14,15], we shall obtain the q̄-q potentials without an expansion in terms of the quark masses. The paper is organized as follows. In Sec. 2, we briefly present our method to extract the q̄-q potentials, together with the lattice QCD setup. We then show our results in Sec. 3.
The q̄-q potentials are discussed and summarized in Sec. 4. Method and lattice QCD setup Following the basic formulation to extract the nuclear force [6,7], we briefly show below how to extract the q̄-q potentials on the lattice. We start with the effective Schrödinger equation for the equal-time Nambu-Bethe-Salpeter (NBS) wave function φ(r⃗), (−∇²/2µ)φ(r⃗) + ∫d³r′ U(r⃗, r⃗′)φ(r⃗′) = Eφ(r⃗), (2.1) where µ (= m_q/2) and E denote the reduced mass of the q̄-q system and the non-relativistic energy, respectively. For the two-nucleon case, it is proved that the effective Schrödinger equation is derived by using the reduction formula [7]. Due to the absence of asymptotic fields for confined quarks, we suppose that the q̄-q systems satisfy the BS equation with constant quark masses, which could be considered as the constituent quark masses. In this study, the constant quark masses m_q are determined as half of the vector meson masses m_V, i.e., m_q = m_V/2. Then, one finds Eq. (2.1) by applying the LKM method to the BS equation. The non-local potential U(r⃗, r⃗′) can be expanded in powers of the relative velocity v⃗ = ∇/µ of the q̄-q system at low energies, where the N^nLO term is of order O(v^n). At the leading order, one finds V(r⃗) = E + (1/2µ)∇²φ(r⃗)/φ(r⃗). (2.3) In order to obtain the NBS wave functions of the q̄-q systems on the lattice, let us consider the equal-time NBS amplitudes F_Γ(r⃗, t − t₀) = Σ_x⃗ ⟨0| q̄(x⃗ + r⃗, t)Γq(x⃗, t) J_q̄q(t₀; J^π) |0⟩. (2.4) Here Γ represents the Dirac γ-matrices, and J_q̄q(t₀; J^π) denotes a source term which creates the q̄-q system with spin-parity J^π on the lattice. The NBS amplitude in Eq. (2.4) is dominated by the lowest-mass meson state, with mass M₀, at large time separation (t ≫ t₀), F_Γ(r⃗, t − t₀) → (1/V)φ_Γ(r⃗)e^(−M₀(t−t₀)), (2.5) with V being the box volume. Thus, the q̄-q NBS wave function φ_Γ(r⃗) is defined by the spatial correlation of the NBS amplitudes. The NBS wave functions in S-wave states are obtained under the projection onto zero angular momentum, φ(r) ≡ P^(l=0)φ_Γ(r⃗) = (1/24)Σ_(g∈O) φ_Γ(gr⃗), (2.7) where g ∈ O represent the 24 elements of the cubic rotational group, and the summation is taken over all these elements. Using Eqs. (2.3) and (2.7), we can find the q̄-q potentials and NBS wave functions from lattice QCD. The simulation setup is as follows. We employ quenched QCD with the standard plaquette gauge action. The lattice size is 32³ × 48 at β ≡ 6/g² = 6.0, which corresponds to the physical volume V = (3.2 fm)³ and the lattice spacing a = 0.10 fm. We measure the q̄-q NBS wave functions for four different quark masses with hopping parameters κ = 0.1520, 0.1480, 0.1420, 0.1320: the corresponding pseudo-scalar (PS) meson masses m_PS in the calculation are 0.94, 1.27, 1.77, and 2.53 GeV, and the vector (V) meson masses are m_V = 1.04, 1.35, 1.81, and 2.55 GeV, respectively. The number of configurations is 100 for each quark mass. For the source operator of the mesons, we use a wall source. We fix the gauge, because the q and q̄ operators are spatially separated at the time slices of the source and sink, and we adopt the Coulomb gauge in the calculation. Numerical results for the q̄-q potentials First, we show the numerical results for the NBS wave functions in Fig. 1. Figures 1(a) and 1(b) show the NBS wave functions for each quark mass in the PS and V channels, respectively, at the time slice t = 20. The NBS wave functions mostly vanish at r = 1.5 fm for all quark masses in both channels. This indicates that the spatial volume V = (3.2 fm)³ is sufficient for the present calculations. The size of a wave function with a lighter quark mass becomes smaller than that with a heavier one. Comparing the results in the PS and V channels, little channel dependence is found, although the quark-mass dependence of the wave functions is a bit larger for the V channel.
In Fig. 2, we show the Laplacian parts of the q̄-q potentials in Eq. (2.3), ∇²φ(r)/φ(r), for each quark mass and channel. Figure 2(a) shows ∇²φ(r)/φ(r) = 2µ(V(r) − E) in the PS channel for each quark mass at the time slice t = 20. As shown in Fig. 2, one can see that the potential form is similar to that obtained from the Wilson loop, namely, the potential looks like a linear plus Coulomb form, although the derivation of the potentials is largely different between these two methods. Figure 2(b) shows ∇²φ(r)/φ(r) in the V channel for each quark mass at the same time slice t = 20. The basic properties are the same as in the PS channel, although the quark-mass dependence is a bit larger for the V channel. Figures 3(a) and 3(b) are plots of the potentials with arbitrary energy shifts E, i.e., V(r) − E = ∇²φ(r)/(2µφ(r)), in the PS and V channels, respectively, for each quark mass at the time slice t = 20. Note that the quark mass m_q (= 2µ) is determined as half of the vector meson mass, m_q = m_V/2, as mentioned in the previous section. We fit an analytic function to the data in Figs. 3(a) and 3(b). In the present study, we choose the linear + Coulomb (+ constant) form, i.e., V(r) = −A/r + σr + C, as the analytic function. The fitting results are listed in Table 1. As shown in Table 1, we find that the string tension σ moderately increases as the quark mass increases in both channels. The quark-mass dependence of the string tension in the PS channel is larger than that in the V channel. The string tension at the heaviest quark mass, m_PS = 2.53 GeV, is 950 (1011) MeV/fm in the PS (V) channel. These values are roughly consistent with the value in the heavy-quark limit predicted from the Wilson loop. On the other hand, the Coulomb coefficient has a significantly large quark-mass dependence in both channels and is larger in the PS channel than in the V channel.
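As an illustration of this fitting step, the sketch below fits the linear + Coulomb (+ constant) form to synthetic (r, V) points with scipy; the sample data, noise level, and starting values are placeholders, not the lattice results:

    import numpy as np
    from scipy.optimize import curve_fit

    def cornell(r, A, sigma, C):
        # V(r) = -A/r + sigma*r + C, the fit form used above
        return -A / r + sigma * r + C

    r = np.linspace(0.2, 1.4, 13)                            # separation (fm), assumed grid
    v = cornell(r, 0.3, 0.9, -0.5)                           # mock "data" (GeV), assumed
    v += np.random.default_rng(0).normal(0.0, 0.02, r.size)  # mock statistical noise

    (A_fit, sigma_fit, C_fit), _ = curve_fit(cornell, r, v, p0=(0.2, 1.0, 0.0))
    print(f"A = {A_fit:.3f}, sigma = {sigma_fit:.3f} GeV/fm, C = {C_fit:.3f} GeV")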
Discussion and summary We have studied the anti-quark-quark (q̄-q) potentials from the q̄-q Nambu-Bethe-Salpeter (NBS) wave functions. For this purpose, we have utilized the method which has recently been developed for the calculation of the nuclear force from QCD [6,7]. We have calculated the NBS wave functions for q̄-q systems with four different quark masses in the pseudo-scalar and vector channels and obtained the q̄-q potentials with finite quark masses through the effective Schrödinger equation. As a result, we find a Coulomb + linear form of the q̄-q potentials, like the infinitely heavy Q̄-Q potential obtained from the Wilson loop. By fitting the results with V(r) = −A/r + σr + C, we have obtained the string tension and Coulomb coefficient, and found the quark-mass dependence of these coefficients. We have found that the string tension moderately depends on the quark mass. On the other hand, the Coulomb coefficient decreases as the quark mass increases. We have also checked the volume and cutoff dependence of the NBS wave functions and the q̄-q potentials, and found that the results shown here do not change quantitatively, although we do not show these checks here. This is a first step in the study of the q̄-q potentials from NBS wave functions, and the main purpose of the present study is to show that the method is applicable to the q̄-q potentials. We find that the obtained q̄-q potential has the basic properties of that obtained from the Wilson loop. Therefore, the method can be used for the study of the q̄-q potentials with finite quark masses. Since the efficiency of this method has been confirmed, there are many possible extensions, such as dynamical calculations of the q̄-q potentials, the q̄-q potentials at finite temperature, the 3q potential with finite quark masses, color non-singlet q-q potentials, and so on. The results of these extensions of the inter-quark potentials will be reported elsewhere.
2010-11-12T10:02:54.000Z
2010-11-12T00:00:00.000
{ "year": 2010, "sha1": "9728d58309a7e4654c5cac10bb4cdd6a9ed78524", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9728d58309a7e4654c5cac10bb4cdd6a9ed78524", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
139989729
pes2o/s2orc
v3-fos-license
Diffusion Bonding and Brazing of Advanced Materials Advanced materials generally require the development of novel joining techniques, as this is crucial to integrate them into functional structures and to widen their application field. Additionally, joining constitutes a technology which influences all industrial sectors, playing a key role in the economic and social development of a country. Diffusion bonding and brazing are two straightforward techniques for producing sound and reliable joints, since these processes are capable of joining a wide range of materials of interest in the aerospace industry, as well as in many other industrial applications, offering remarkable advantages over conventional fusion welding processes. Production of dissimilar joints is also crucial for the application of these materials. For instance, the combination of advanced ceramics with lightweight alloys, such as titanium or aluminum alloys, is quite attractive, combining the extraordinary properties of the two materials and extending the potential applications, particularly into components for the automotive and aerospace industries. The major challenge in the production of these joints is to overcome the enormous differences in mechanical behavior, as well as in thermal expansion coefficients, and so new approaches need to be developed to produce dissimilar joints successfully. Introduction and Scope Advanced materials generally require the development of novel joining techniques, as this is crucial to integrate them into functional structures and to widen their application field. Additionally, joining constitutes a technology which influences all industrial sectors, playing a key role in the economic and social development of a country. Diffusion bonding and brazing are two straightforward techniques for producing sound and reliable joints, since these processes are capable of joining a wide range of materials of interest in the aerospace industry, as well as in many other industrial applications, offering remarkable advantages over conventional fusion welding processes. Production of dissimilar joints is also crucial for the application of these materials. For instance, the combination of advanced ceramics with lightweight alloys, such as titanium or aluminum alloys, is quite attractive, combining the extraordinary properties of the two materials and extending the potential applications, particularly into components for the automotive and aerospace industries. The major challenge in the production of these joints is to overcome the enormous differences in mechanical behavior, as well as in thermal expansion coefficients, and so new approaches need to be developed to produce dissimilar joints successfully. Contributions The current special issue is composed of papers that present recent progress in the joining technologies of advanced materials, with particular attention to the microstructure-mechanical property relationships of the joints.
The successful production of dissimilar joints, such as advanced ceramics with lightweight alloys, titanium, or aluminum alloys, is quite attractive, combining the extraordinary properties of the two materials and extending their potential applications, particularly into components for the automotive and aerospace industries. The review paper of this special issue [1] provides an excellent description of the recent progress in the production of these dissimilar joints by diffusion bonding and brazing processes. However, on the basis of this review, it is clear that more research work is needed to promote the process improvements that will make them feasible in these industrial sectors. The other outstanding papers that are part of this special issue were selected because they are examples of crucial work that brings industry closer to achieving this goal. Although the alternative methods presented produce sound joints using advanced materials, they all share a similar purpose: developing new approaches in order to produce joints with the expected mechanical properties under less demanding processing conditions compared to conventional processes. The successful joining of titanium alloys is crucial for the fabrication of highly loaded aerospace components. Brazing and diffusion bonding are the most suitable processes for this purpose. These joining processes present numerous challenges, such as the formation of brittle intermetallic interphases, the achievement of good mechanical properties without compromising the base materials, and the use of less demanding processing conditions. Gussone et al. [2] present a study of the interphase formation in brazed joints consisting of different titanium alloys (Ti-CP2, Ti-CP4, Ti-6Al-4V, Ti-6Al-2Mo-4Zr-2Sn) using Ag28Cu. The results of this study demonstrate that, besides the already explored approaches for dissimilar titanium alloy joints, i.e., modification of the brazing solders (e.g., by Sn or In) and application of interlayers (e.g., with Ag or Pd), the composition of the base material can play a remarkable role. Diffusion bonding is the other process reported for dissimilar joints. However, the processing conditions normally involved in these processes make them less attractive economically. Considerable effort has been made in the development of new brazing alloys or interlayers that make brazing and diffusion bonding processes more suitable for industrial implementation. Reactive multilayer thin films are an alternative interlayer for reducing the temperature and/or the pressure needed for diffusion bonding. Simões et al. [3] investigated the effectiveness of using Ni/Al reactive nanolayers for dissimilar joints between titanium alloys. The dissimilar joining of titanium alloys assisted by nanolayers is achieved by the reaction of the multilayers, which provides good-quality, defect-free interfaces under less demanding processing conditions.
In order to widen the applications of these alloys, dissimilar joints of titanium alloys with other materials, like Ni-based superalloys, are also very interesting. The joining of a lightweight alloy like TiAl to other high-temperature materials can allow the production of extraordinarily complex components. Brazing and diffusion bonding are attractive options to produce dissimilar joints. The principal challenge in this field is to promote the formation of the joints under less demanding processing conditions, with an interface whose mechanical properties are similar to those of the base materials. Simões et al. [4] demonstrated that the brazing filler has a crucial influence on overcoming these challenges. The development of new brazing fillers can contribute to less demanding bonding conditions, as well as to the formation of an interface with a microstructure that will not compromise the mechanical properties of the joints. Other lightweight materials are also extremely interesting for various industrial sectors. An example is magnesium and magnesium alloys (especially AZ31), since they are increasingly being used as a substitute for many traditional alloys. Transient liquid phase (TLP) bonding is one of the most reported bonding processes for the production of Mg-based joints. However, the interlayers used have to be well selected in order to promote quality bonding of these lightweight alloys. AlHazza et al. [5] investigated the use of Cu coatings, and Cu coatings with a Sn interlayer, in the TLP bonding of an AZ31 alloy. The presence of the Sn interlayer promotes an improvement in the joint strength, revealing an excellent approach for the implementation of this joining process. The 7075 aluminum alloy is also used in the aircraft, automotive, and electronics industries. Joining technologies play a role in the production of components for these industries. However, welding processes have proven to be very challenging for aluminum and aluminum alloys. Although TLP bonding is an interesting joining process for these alloys, there are some limitations that make the implementation of this process difficult. Meengam et al. [6] investigated the possibility of using a ZA27 zinc alloy interlayer to bond 7075 aluminum alloy by TLP. The authors revealed that it was possible to obtain a sound dissimilar joint, but the bonding time and temperature have a crucial influence on the interface microstructure, which is intimately linked to the mechanical properties of the joints. The joining of aluminum alloys to other materials such as steel can also be extremely interesting. Producing these dissimilar joints through fusion welding presents some problems, such as hot cracking. Muhamed et al. [7] have shown that the brazing of 7075 aluminum alloy to steel can be conducted using an Al-Si-Zn base filler metal. Aluminum foam sandwich (AFS) panels are multifunctional, stiff, and offer excellent corrosion resistance for many industrial applications, including automotive, marine, aerospace, construction, and railway. AFS panels are made of thin rigid Al-alloy sheets (facing sheets) joined to a porous, lightweight Al-alloy foam (core). Bangash et al.
[8] investigated the production of AFS by brazing using metallic glasses. Sound joints were achieved using two different Al-based metallic glasses. However, the formation of hard and brittle intermetallic phases was observed, which compromises the mechanical properties of the joints. Although the production of AFS needs improvement and optimization, this work shows a good approach to produce AFS that can be usefully applied at operational temperatures up to 520 °C. Advanced ceramics have attractive properties, such as high wear resistance, high thermal stability, as well as high thermal and electrical conductivities. It is known that some advanced ceramics, like alumina, silicon nitride, and zirconia, are also well established in the electronics, aerospace, nuclear, and automotive industries. However, their inherent brittleness, high cost, and high hardness limit the production of large and complex-shaped components. The successful application of these advanced ceramics depends strongly on the joining processes. Li et al. [9] present an approach for the joining of porous Si₃N₄ to dense Si₃N₄ using glass fillers. The brazing of this ceramic can be conducted successfully at 1550, 1600, and 1650 °C. The bonding temperature has a strong influence on the infiltration of the glass filler into the base material, which can determine whether a strong or a weak bond forms between the base materials.
2019-04-30T13:08:36.728Z
2018-11-16T00:00:00.000
{ "year": 2018, "sha1": "c7658a412bbdd5ee1530695164cee55617650329", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/8/11/959/pdf?version=1542772230", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "12897131168c60b44130c6237c3df5f6510598dd", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
219710470
pes2o/s2orc
v3-fos-license
Pediatric COVID-19: what disease is this? The coronavirus disease 2019 (COVID-19) pandemic spares no nations or cities, causing escalating incidence and mortality. Royalty, prime ministers, celebrities and high government officials alike have been affected by the disease. For peculiar reasons, children and infants have generally been spared in Hong Kong, China until recently, when students returning from affected cities brought in the virus, largely presenting with mild symptoms. In fact, several countries have reported on pediatric COVID-19. According to the data gathered by the Centre for Health Protection, as of May 22, there have been 111 confirmed pediatric cases of COVID-19 in Hong Kong, China, consisting of 62 males and 49 females, aged between 0 and 18 years. All cases have been reported to be either mild or asymptomatic, with no pediatric intensive care unit (PICU) admissions and fortunately no deaths [1]. Most of the pediatric cases were imported cases (90%), and the remainder were mostly epidemiologically linked with local/possible local cases (7.2%), followed by those epidemiologically linked with imported cases (1.8%) and local cases (1%). The mean age of the imported cases is much higher than that of the non-imported cases (15.1 versus 6.5 years, P < 0.05). When comparing the local proportion of COVID-19 infections in the 0–19 years age group in Hong Kong, China with other countries (most of these countries use 19 years as their upper age limit), the percentage (14.3%) is very high (Table 1). This can be explained by the aforementioned group of overseas students that have been imported to our city. Most of the local imported cases were travelers returning from the UK and the USA [1]. With over 1064 confirmed cases and four deaths, 10.4% of the infected patients were children (≤ 18 years old). The infection is generally very mild in children, and 39.6% were asymptomatic. This phenomenon is consistent with our experience with SARS 17 years ago, when most of the infected children also had mild clinical manifestations [10, 11]. The Chinese mainland has also reported mortality and morbidity of pediatric COVID-19 cases and has concluded that the disease was generally mild [10, 12]. Mortality is very low in children, and most of the known cases were teenagers [13–16]. Similarly, low mortality and morbidity among children infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Middle East respiratory syndrome coronavirus, or SARS-CoV had been observed in the literature [12, 17]. Hence, it is reassuring that children are less likely to be adversely affected by COVID-19. In contrast, mortality appears to be higher in the local adult population at approximately 0.4%, and even higher (3.7%) in the US [1, 5, 6]. Reports of children with confirmed COVID-19 in mainland China have described mild cold-like symptoms with/without gastrointestinal symptoms and suggest that severe complications (e.g., acute respiratory distress syndrome, septic shock) appear to be uncommon. However, as with other respiratory illnesses, certain populations of children with underlying health conditions may be at increased risk of severe infection. One report stated that the detection of human-CoV alone or in co-infection with rhinovirus-C was independently associated with pediatric intensive care unit admission in young children hospitalized for lower respiratory infection [18]. The virus does not pass from pregnant women to fetuses during pregnancy.
It appears that transmission does not include vertical routes, such as amniotic fluid, cord blood, or breast milk [19]. Approved or clinically proven antiviral drugs recommended for COVID-19 in children do not exist. Clinical management includes prompt implementation of recommended infection prevention and control measures in healthcare settings and supportive management of complications [12]. Children should engage in the usual preventive actions to avoid infection, including cleaning hands often using soap and water or alcohol-based hand sanitizer, avoiding contact with others who are sick, and staying up to date on vaccinations, including influenza vaccine. It is still unclear why coronavirus disease is milder in the pediatric population, similar to other respiratory viral illnesses. Mortality and morbidity of coronavirus disease are postulated to be due to the exaggerated cytokine storm that results in self-destruction of the lung parenchyma and other organ systems [20,21].
Similar to other respiratory viral diseases, such as seasonal influenza, two demographic groups seem to have a higher propensity to die from the disease, namely frail elderly people with chronic disease and seemingly healthy adults with exacerbated autoinflammatory responses and cytokine storm syndromes [10,21,22]. In contrast, two groups of patients seem to survive epidemics of coronavirus infections with very mild symptoms, namely children and infants [17]. Our pediatric experience concurs with global data and allows us to reassure anxious parents of the benign nature of coronavirus among children and young people. Nevertheless, from a public health perspective, our current imperative is to contain these imported cases and to prevent onward transmissions, especially from children and young people to the elderly and vulnerable patients with co-morbidities. Coronavirus in mild or asymptomatic adolescent returners, like soldiers in the Trojan Horse, has to be contained. Universal masking, vigilant contact tracing, surveillance programs for testing suspected cases, and social distancing are proven, effective non-pharmaceutical interventions that are indispensable for containing the epidemic. The global battle against the coronavirus continues. The latest enigma associated with pediatric COVID-19 is a novel multisystem inflammatory syndrome (MIS) of hyperinflammation resembling toxic shock syndrome, atypical Kawasaki disease (KD) or the Kawasaki disease shock syndrome (KDSS) [23][24][25][26][27]. Another novel acronym, PIMS-TS, has been coined, which stands for pediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 [26]. Although controversial, common respiratory viruses including adenovirus, enterovirus, rhinovirus, coronavirus and respiratory syncytial virus have long been reported to be associated with KD. We postulate that SARS-CoV-2 may behave like any respiratory virus that can occasionally cause MIS, KDSS or the multi-organ dysfunction syndrome so familiar to intensivists. Perhaps we do not need another acronym. Author contributions Both authors contributed to the drafting and opinions in this viewpoint article, and approved the final version of the manuscript. Funding None. Compliance with ethical standards Ethical approval None for this viewpoint article.
2020-06-17T14:21:01.142Z
2020-06-17T00:00:00.000
{ "year": 2020, "sha1": "d7b03e5ba1e6339d0cf7e906e196a244fb023365", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s12519-020-00375-z.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d7b03e5ba1e6339d0cf7e906e196a244fb023365", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
21286573
pes2o/s2orc
v3-fos-license
The imbalance of procoagulant and anticoagulant factors in patients with chronic liver diseases in North India TO THE EDITOR: Patients with chronic liver diseases (CLD) tend to experience severe hemostatic anomalies because of reduced levels of most of the coagulant proteins and of anticoagulant factors such as protein C, protein S, and antithrombin. In contrast, it has been observed that the levels of certain procoagulant factors such as factor VIII and von Willebrand factor (vWF) may be increased [1]. Various mechanisms, such as increased levels of vWF antigen and reduced synthesis of the ADAMTS13 cleavage protease, have been described to explain elevated factor VIII levels in these patients [2]. Additionally, the fact that factor VIII is an acute-phase reactant could partly explain these findings [3]. Not only increased procoagulant factor levels but also a reduction in the anticoagulant factors may lead to a prothrombotic tendency in these patients. A concurrent reduction in protein C and factor VIII may also result in a procoagulant imbalance.
It is important to distinguish the mechanism of increased factor VIII levels in CLD patients, since sustained elevations may provoke thrombosis. This study aimed to compare the levels of factor VIII and protein C in CLD patients with a superimposed acute insult [acute-on-chronic liver failure (ACLF)] and in patients with compensated cirrhosis (CC), and to detect correlations between these factors and disease activity using Model for End-Stage Liver Disease (MELD) scores in the respective groups. Furthermore, the ratio of factor VIII to protein C levels was evaluated as an indicator of the severity of liver disease in both groups. This prospective study, comprising 2 groups of patients with underlying CLD in a tertiary care center in North India, was approved by the Institutional Review Board, with written informed consent obtained from all participants. Group 1 included 58 patients with ACLF (Asian Pacific Association for the Study of the Liver criteria [4]), and group 2 included 58 patients with biopsy-proven CC. Blood samples for the coagulation study were collected from both groups using vacutainers containing buffered sodium citrate (0.109 M, 3.2%). The samples were processed within 30 minutes of collection. The citrated tubes were centrifuged at 3,000 g for 10 minutes to obtain plasma, which was analyzed for factor VIII and protein C on a fully automated coagulometer. The factor VIII and protein C values of the 2 groups were compared using the Mann-Whitney test, and within each group their correlation with MELD scores was analyzed using Pearson's correlation. P-values of <0.05 were considered statistically significant. Patient characteristics are summarized in Table 1. The mean age in group 1 was 44.46±11.3 years, with 89.7% being men, while in group 2 the mean age was 50.32±10.45 years, with 94.8% being men. The median [interquartile range (IQR)] factor VIII and protein C levels in group 1 were 232.55% (150.0-331.5%) and 10.5% (10.25-22.10%), respectively, with a mean MELD score of 26.06±8.19. In group 2, the median (IQR) factor VIII and protein C levels were 178.20% (105.60-261.45%) and 36.8% (25.3-45.07%), respectively, with a mean MELD score of 16.19±3.91. The differences in factor VIII (P=0.04) and protein C (P<0.001) levels between the 2 groups were statistically significant. The factor VIII levels in group 2 showed a significant positive correlation with the MELD score, while those in group 1 did not. A weak negative correlation of protein C with MELD scores was seen in both groups, but it did not reach statistical significance. In addition to the above parameters, the ratio of factor VIII to protein C levels was calculated as an index of the procoagulant tendency in both groups. A statistically significant difference in the ratios between the 2 groups (P<0.001) was observed. The factor VIII to protein C ratio in group 1 showed a weak positive correlation with the MELD scores that was statistically insignificant, while the ratio in group 2 showed a weak positive but significant (P<0.001) correlation with MELD scores (Table 2).
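A minimal sketch of the two statistical steps described above (the Mann-Whitney comparison and the Pearson correlation of the factor VIII/protein C ratio with MELD), using scipy with hypothetical values rather than the study data:

    import numpy as np
    from scipy import stats

    # Hypothetical factor VIII levels (%) for the two groups, not the study data.
    fviii_aclf = np.array([232.5, 150.0, 331.5, 280.0, 195.0])
    fviii_cc = np.array([178.2, 105.6, 261.4, 160.0, 140.0])

    # Between-group comparison (Mann-Whitney U test, two-sided).
    u, p = stats.mannwhitneyu(fviii_aclf, fviii_cc, alternative="two-sided")
    print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

    # Correlation of the factor VIII / protein C ratio with MELD (assumed values).
    protein_c = np.array([10.5, 22.1, 15.0, 12.0, 18.0])
    meld = np.array([26, 18, 30, 24, 21])
    r, p_corr = stats.pearsonr(fviii_aclf / protein_c, meld)
    print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")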
Patients with CLD experience not only bleeding complications but also thrombotic events. The main procoagulant drivers in CLD include elevated factor VIII and vWF and reduced protein C levels. Factor VIII elevations can arise from increased vWF levels, decreased expression of the low-density lipoprotein receptor, and an acute-phase response to inflammation [2]. Of the 2 study groups included in our study, the ACLF group had patients with increased levels of C-reactive protein (44.0±29.3 mg/L) and procalcitonin (mean >2.10 ng/mL), whereas the CC group had patients with no elevations of C-reactive protein (3±0.5 mg/L) or procalcitonin (mean <0.05 ng/mL). The factor VIII levels in both groups were elevated, but the elevation was significantly higher in the ACLF group, which can be attributed to additional acute insults. High factor VIII levels are a major risk factor for venous thrombosis [5] and may lead to thrombosis in CLD, especially in ACLF. Treatment of the acute-phase response in these patients might reduce the thrombotic tendency. Protein C levels are known to decrease in CLD, as the liver is the major site of protein C synthesis. Our study has shown a significant decrease in protein C levels in patients with ACLF (compared with the patients with CC), which may lead to an exacerbation of thrombotic tendencies in these patients. A negative correlation of protein C with the MELD score was observed in both groups, although the values were not statistically significant (Table 2). Based on the fact that factor VIII is one of the most important components of thrombin generation and protein C is one of its most important inhibitors [6], the ratio of the two was considered an indicator of prothrombotic tendency. We found values in patients with CC similar to those of Tripodi et al. [7], but patients with ACLF had significantly higher ratios (Table 2). The ratio in the patients with CC had a direct and significant correlation with the MELD score, in contrast to the ACLF group, in which the coagulopathic defects were more serious. In patients with ACLF, causes of hemostatic defects other than CLD, resulting in more complex and heterogeneous coagulopathies, might disrupt the correlation with MELD scores seen in patients with CC [4,8]. To conclude, patients with ACLF have higher factor VIII and lower protein C than those with CC. The factor VIII levels and the ratio of factor VIII to protein C may be used as predictive markers for the severity of liver disease in patients with CC. Priyanka Saxena 1 , Chhagan Bihari 2 , Roshni Mirza The first case of paroxysmal nocturnal hemoglobinuria and Budd-Chiari syndrome treated with complement inhibitor eculizumab in Korea TO THE EDITOR: Budd-Chiari syndrome (BCS) is a rare and potentially life-threatening disorder characterized by hepatic venous outflow obstruction [1]. BCS is associated with thrombogenic conditions such as myeloproliferative neoplasms or inherited deficiencies of protein C, protein S, and antithrombin in at least 75% of patients [2]. However, paroxysmal nocturnal hemoglobinuria (PNH) is another well-recognized cause of BCS [3]. PNH is an acquired disorder of hematopoietic stem cells, characterized by chronic intravascular hemolysis, thromboembolic episodes, and varying degrees of bone marrow failure caused by uncontrolled complement activation [4]. Patients with BCS in whom no other etiological factor has been identified after a thorough clinical and laboratory investigation are required to be tested by routine flow cytometry screening for PNH in Western countries [5]. Eculizumab, a humanized monoclonal antibody that blocks the activation of terminal complement C5 components, is currently used in the treatment of PNH.
Treatment with eculizumab reduces transfusion requirements, ameliorates anemia, decreases the risk of thrombosis, and improves quality of life by resolving the constitutional symptoms associated with chronic intravascular hemolysis [6,7]. Long-term treatment with eculizumab in patients with concomitant BCS and PNH has shown a favorable safety profile [8][9][10]. To the best of our knowledge, this is the first report of eculizumab treatment in a patient with BCS and PNH in Korea. CASE A 39-year-old man was admitted to our hospital with newly developed abdominal pain, fatigue, pancytopenia, abdominal distension, and jaundice. He had a history of liver cirrhosis secondary to BCS and had undergone splenectomy and inferior vena cava (IVC) stent insertion 15 years earlier. The laboratory results on admission were as follows: white blood cell count, 3.2×10⁹/L; hemoglobin, 5.3 g/dL; platelets, 41×10⁹/L; reticulocyte count, 10.3%; haptoglobin, <100 mg/L (lower limit of reference range, 300 mg/L); lactate dehydrogenase (LDH), 5,005 IU/L (upper limit of reference
2018-04-03T04:17:03.080Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "ef36e540f635b70ac2a7e6e8c79e80adff8865c1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5045/br.2017.52.2.143", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef36e540f635b70ac2a7e6e8c79e80adff8865c1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254367793
pes2o/s2orc
v3-fos-license
A deep learning model based on concatenation approach to predict the time to extract a mandibular third molar tooth Background Assessing the time required for tooth extraction is the most important factor to consider before surgeries. The purpose of this study was to create a practical predictive model for assessing the time to extract the mandibular third molar tooth using deep learning. The accuracy of the model was evaluated by comparing the extraction time predicted by deep learning with the actual time required for extraction. Methods A total of 724 panoramic X-ray images and clinical data were used for artificial intelligence (AI) prediction of extraction time. Clinical data such as age, sex, maximum mouth opening, body weight, height, the time from the start of incision to the start of suture, and surgeon's experience were recorded. Data augmentation and weight balancing were used to improve the learning ability of the AI models. The extraction time predicted by the concatenated AI model was compared with the actual extraction time. Results The final combined model (CNN + MLP) achieved an R value of 0.8315, an R-squared value of 0.6839, a p-value of less than 0.0001, and a mean absolute error (MAE) of 2.95 min with the test dataset. Conclusions Our proposed model for predicting the time to extract the mandibular third molar tooth performs well, with high accuracy, in clinical practice. Background Extracting an impacted mandibular third molar tooth is one of the most routine surgeries performed by oral and maxillofacial surgeons. Predicting the difficulty and time of tooth extraction is the most important factor to consider before performing a surgery [1]. In most previous studies on the difficulty of third molar extraction, difficulty assessment was based on radiological characteristics such as the anatomical position, angulation, and adjacent anatomical structures of the mandibular third molar. MacGregor was the first to develop a model to predict operative difficulty using radiographs [2]. The Pell & Gregory and Winter classifications and the Pederson index are the methods mainly used to predict the difficulty of third molar extraction. Based on these studies, many comparative studies and additional suggestions have been made [3,4]. However, in many cases, the classification method does not match well with the actual clinical situation. Recently, a study using a convolutional neural network (CNN) to predict the difficulty of extraction of third molars from radiographic characteristics was published [5]. However, it had the limitation that only radiological characteristics were considered when predicting the difficulty of tooth extraction using deep learning. By using anatomical elements of panoramic X-rays together with clinical data as variables in a deep learning study, it is possible to develop a better model for predicting the difficulty of extraction of the mandibular third molar. The time taken for tooth extraction is one of the variables most closely related to the surgical difficulty of the extraction, and many dentists estimate the expected extraction time before surgery [6]. Thus, in the present deep learning study, as one way to predict the surgical difficulty of mandibular third molar extraction, the time taken for the extraction was predicted by considering both radiological and clinical factors such as gender, age, body mass index (BMI), and surgeon's skill.
Furthermore, the actual time taken for tooth extraction was compared with the time estimated by the deep learning model. The purpose of this study was to create a practical model for predicting the time to extract the mandibular third molar tooth using deep learning based on a concatenation approach. Patients In this study, panoramic images and clinical data were collected from 724 patients aged 15 to 90 years who visited the Department of Oral and Maxillofacial Surgery, Samsung Medical Center from March 2020 to September 2020. Inclusion criteria were: (1) patient age between 15 and 90 years; (2) no relevant systemic diseases (American Society of Anesthesiologists' classification ASA I and ASA II); and (3) no congenital or acquired deformity in the craniomaxillofacial area. Exclusion criteria were: (1) patients with systemic diseases (≥ ASA III); and (2) congenital or acquired deformity in the craniomaxillofacial area. Patients who were judged by the clinician to require general anesthesia or sedation were also excluded. The Institutional Review Board (IRB) of Samsung Medical Center approved this study (IRB number: 2021-11-109). All patients signed an informed consent agreement. Surgical technique Extraction of mandibular third molars was performed by four operators (JYP, CSK, JMA, MKY) with 30, 24, 10 and 2 years of professional experience, respectively, as oral and maxillofacial surgeons, using similar surgical techniques with the same instruments and high-speed and low-speed drills. Local anesthesia was administered with epinephrine 1:100,000 (Shinhung, Seoul, Korea) for the inferior alveolar nerve block and the gingiva around the third molar area. A 4-0 Vicryl suture (Ethicon Inc., Somerville, NJ, USA) was used to close the wound. Study variables In this study, operation time was used as the target variable to measure the difficulty of extraction of the mandibular third molar. The set of predictor variables was divided into three groups: patient, operator, and radiologic variables. Patient variables included age, gender, BMI, and maximum mouth opening. Radiologic variables included the angulation, depth, bone density, and morphology of the third molar, and the space between the mandibular ramus and the mandibular second molar. Operator variables included years of experience as an oral and maxillofacial surgeon and the class of each operator. The patient's age, sex, body weight, height, and maximum mouth opening were recorded prior to extraction surgery. The time (minutes) from the start of the incision to the start of suturing was recorded. The patient's body mass index (BMI) was calculated from their weight and height. Dataset The region of interest (ROI) around the mandibular third molar was manually cropped into a 300-400-pixel square shape with surrounding structures, including the ramus of the mandible, the distal part of the mandibular second molar, and the inferior alveolar nerve. Consistency of ROI cropping was achieved by having two oral and maxillofacial surgeons, with 10 and 25 years of experience, reach consensus through discussion. Images of right third molars were transformed into left-side images through horizontal flipping to facilitate the deep learning process (Fig. 1). A total of 724 cases were randomly split into 644 training and 80 test cases. To test the accuracy of the AI model, validation was performed using data from an additional 60 patients. One case was excluded because the panoramic X-ray showed a supernumerary tooth hiding behind the wisdom tooth.
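A minimal sketch of this ROI preparation step follows; the box coordinates, file names, and output size are hypothetical, not taken from the paper:

    from PIL import Image

    def prepare_roi(path, box, is_right_side, size=(300, 300)):
        """Crop the square ROI (left, upper, right, lower) and normalize side."""
        img = Image.open(path).convert("L")     # grayscale panoramic X-ray
        roi = img.crop(box)                     # manually chosen 300-400 px square
        if is_right_side:                       # mirror right-side molars to "left"
            roi = roi.transpose(Image.FLIP_LEFT_RIGHT)
        return roi.resize(size)

    # Hypothetical usage:
    # roi = prepare_roi("panorama_0001.png", box=(1200, 600, 1550, 950),
    #                   is_right_side=True)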
Deep learning model The deep learning model was constructed by concatenating a Multilayer Perceptron (MLP) and a Convolutional Neural Network (CNN) to handle two input data types, panoramic X-ray images and clinical data. Figure 2 gives a clear description of the model, which consists of the CNN part and the MLP part. The CNN part learns image features through convolutional layers with 3 × 3 filters, max-pooling, flatten, and fully connected layers. On the other side, the MLP part learns the patients' clinical data through fully connected layers. The outputs from the MLP part and the CNN part were concatenated. Finally, there is an output layer that infers the extraction time. The Rectified Linear Unit (ReLU) was used as the activation function; it has the advantage of training the model quickly and solving the vanishing gradient problem [7]. L2 regularization is one of the weight decay methods, which keeps the trained weights of the model from growing too large [8]. Although the training data were increased through data augmentation, concerns about overfitting remained. Methods such as ridge regression (L2 regularization), dropout, early stopping, and batch normalization were applied to the model for regularization. The Adagrad optimizer with default Keras settings for 200 epochs was found to work well for convergence. Grad-CAM Deep learning models based on valid algorithms show great performance, sometimes seeming like a clear mathematical function; however, there is an opaque and complex process between their input and output. To address this opacity, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize which part of the panoramic X-ray was referenced by the CNN model of this study when making a prediction (Figs. 3a, b). With Shapley Additive exPlanations (SHAP) for calculating the contribution of each feature of the clinical data, we could visualize the effect of each feature's value on the predictions made by the MLP (Fig. 4). Data augmentation Data augmentation was applied to both the images and the clinical data classified as training sets. The changes applied in data augmentation included rotation (randomly within 10 degrees), horizontal shifting (range = [-20, 20]), vertical shifting (range = 0.05), and brightness adjustment (range = [0.8, 1.2]). No vertical or horizontal flipping was performed, to prevent topographical changes in the left third molar. In addition, augmentation was applied to the clinical data matching each image by randomly changing values within a certain range (0 to 10% of each feature's mean). As a result, the size of the training dataset was increased from 644 to 8602. Distribution of patient, dental, and surgical variables A total of 724 patients were included in the study, with a mean age of 28.6 ± 11.2 years (range 15-90 years) (Fig. 5). To evaluate the prediction model, the predicted extraction time and the actual time spent on extraction were compared through correlation analysis. The correlation coefficient was 0.8315 (p-value < 0.05). The paired t-test between the two groups showed a p-value of more than 0.05. The mean absolute error (MAE) was 2.95 min (Fig. 6). Table 1 shows the performance metrics of the CNN, the MLP, and the combined model (CNN + MLP). The combined model was better in all measures of R value, R-squared, p-value, and mean absolute error (MAE) than when the CNN and the MLP were used separately. When the CNN and the MLP were used alone, extraction time was predicted based only on images or clinical data, respectively. However, in clinical practice, both panoramic images and clinical data are required to properly predict the extraction time. This means that models trained with both images and clinical data are expected to show better performance than CNNs or MLPs used separately.
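A minimal Keras sketch of the concatenated architecture described above; only the overall structure (3 × 3 convolutions, max-pooling, flatten, fully connected layers, concatenation, a single regression output, ReLU, L2, dropout, batch normalization, Adagrad) follows the text, while the layer widths, input sizes, and hyperparameter values are illustrative assumptions:

    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    # CNN branch: cropped panoramic ROI (input size assumed).
    img_in = keras.Input(shape=(128, 128, 1))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)

    # MLP branch: clinical features (age, sex, BMI, mouth opening, operator, ...).
    clin_in = keras.Input(shape=(7,))
    y = layers.Dense(32, activation="relu")(clin_in)
    y = layers.Dense(16, activation="relu")(y)

    # Concatenate both branches and regress the extraction time (minutes).
    z = layers.concatenate([x, y])
    z = layers.Dropout(0.3)(z)
    out = layers.Dense(1)(z)

    model = keras.Model([img_in, clin_in], out)
    model.compile(optimizer=keras.optimizers.Adagrad(), loss="mae")
    # model.fit([images, clinical], minutes, epochs=200,
    #           callbacks=[keras.callbacks.EarlyStopping(patience=20)])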
However, in clinical practice, both panoramic images and clinical data are needed to properly predict the extraction time, so a model trained with both was expected to outperform the CNN and the MLP used separately. Performance evaluation When external validation was performed on the 60 additional patients, the mean absolute error (MAE) was 4.66 min with a standard deviation of ± 3.72 min. Discussion Extracting the impacted mandibular third molar is one of the routine procedures performed by oral and maxillofacial surgeons [1]. However, the difficulty of extraction varies from simple extraction to cases requiring general anesthesia. Categorizing the difficulty of extractions, or estimating the time required, has long been of interest to oral and maxillofacial surgeons. MacGregor made the first attempt to establish a model for assessing surgical difficulty [2], and Winter's and Pell & Gregory's classifications are classic methods [2,9,10]. Many studies have found that radiological factors are related to surgical difficulty for both upper and lower third molars, with depth of impaction, available distal space, molar angulation, and root morphology being the main variables contributing to difficulty [11][12][13][14]. However, these methods have recently been found to be inadequate for judging the difficulty of surgery. The Pederson index showed low sensitivity and specificity in predicting the difficulty of surgery for impacted mandibular third molars [15]. Furthermore, radiographic classification of impacted third molars has been reported to be unreliable irrespective of the training or experience of the evaluator [15][16][17]. Other important variables not captured by the Pederson index include maximum mouth opening, age, bone density, body mass index (BMI, kg/m2), and the proficiency of the surgeon. Most previous deep learning approaches to third molar difficulty prediction used panoramic radiographs only. De Tobel et al. [18] developed an automated method to assess the degree of development of mandibular third molars on panoramic radiographs. Another study that predicted the difficulty of wisdom tooth extraction using a CNN considered only radiological characteristics [5]. Furthermore, most previous AI-based difficulty prediction studies built difficulty classifiers or wisdom tooth detectors rather than predicting the actual extraction time [5]. Another study used AI-driven molar angulation measurement to predict third molar eruption potential [19]. Zhu et al. [20] developed a deep learning-based detection model for assessing the contact relationship between the wisdom tooth and the inferior alveolar nerve on panoramic X-rays. Unlike other deep learning studies using only panoramic X-rays or cone beam computed tomography (CBCT), this study is significant in that it is the first to create a practical model for predicting the time to extract the mandibular third molar by considering both the panoramic X-ray and clinical data. Our proposed deep learning model predicts the time taken for extraction much as clinicians estimate it, by considering both radiographic and clinical information. A statistically significant positive correlation between the predicted and the real extraction time was observed. In this study, the predictor variables were divided into three groups: patient, operator, and radiologic variables.
Patient variables included age, gender, BMI, and maximum mouth opening. Operator variables included years of practice as an oral and maxillofacial surgeon and the class of each operator. Radiologic variables included the angulation, depth, bone density, and morphology of the third molar, and the space between the mandibular ramus and the mandibular second molar. These radiological factors were not encoded as numerical or nominal variables; instead, the radiographs were imported as image data into the CNN model. Shapley Additive exPlanations (SHAP) was used in this study to calculate the contribution of each feature of the clinical data. The goal of SHAP is to explain the prediction for an instance by computing the contribution of each feature to that prediction, and the Python SHAP package allows feature attributions to be visualized as "forces" based on Shapley values [21]. In this AI model, the feature with the largest absolute Shapley value was the operator's clinical experience (years), followed by the maximum mouth opening (MMO) (Fig. 4). In Table 1, when the CNN and the MLP were used alone, the MAE (min) was 4.1213 and 5.3923, respectively; by these values the image-only CNN gave the lower error of the two single-input models. The combined model was better still in all measures than the CNN and the MLP used separately. However, it was difficult to demonstrate clearly how much each factor contributes in the combined CNN + MLP model. Looking at the SHAP result for the MLP model, it can be interpreted that the operator's clinical experience (years) had a great influence. Although the predictors used in this study have not been used in other AI-based predictive studies of wisdom tooth extraction, most clinicians understand that these variables are essential for predicting extraction time and difficulty. Ridge regression (L2 regularization), dropout, early stopping, and batch normalization were applied to the model for regularization. Dropout helps prevent overfitting by excluding a random subset of weights from each training step. Early stopping terminates training when the model's performance on the validation set no longer improves across epochs. Batch normalization trains a model faster and more stably by normalizing the input distribution of each layer. Data augmentation is commonly used to overcome the limitations of the small datasets typical of the medical field; here, the training dataset was increased from 644 to 8602 samples. Renton reported that age, patient weight, and ethnicity are associated with extraction times [3]. When patients were divided by age, those over 30 years were at significantly higher risk of difficult extraction than younger patients, and the difficulty increased further as the patient's age exceeded 50 years [3]. However, in the present study, no significant differences in the difficulty of wisdom tooth extraction were found according to age. This might be because the age distribution of the study population was biased toward younger patients; a majority of the patients were in their 20s (Fig. 5). To the best of our knowledge, this is the first study to predict the time to extract the wisdom tooth through artificial intelligence using both panoramic images and clinical data. This study has some limitations. First, the study population was skewed toward the younger age group.
The number of subjects older than 50 years in the study population was limited. Such age inhomogeneity might have led the model to give less weight to the age factor when predicting the difficulty of tooth extraction. Most of the other variables showed a normal distribution in this study. Although the age distribution of the patients was skewed, no further action was taken because the AI model was already showing good results. If the number of subjects is increased through future studies, with age groups uniformly represented, a more sophisticated extraction difficulty prediction model can be expected. In addition, a panoramic image was used as the image variable in this study; if CBCT data are used with the same prediction model in the future, a more sophisticated model might be obtained. Experienced dentists and oral and maxillofacial surgeons predict and prepare for the difficulty of tooth extraction by considering various factors such as the panoramic radiograph, CBCT, and the patient's age, gender, and morphology. Novice dentists, in contrast, often fail to predict the difficulty of extraction, leading to very long extraction times or increased patient discomfort after surgery. No clinician judges the difficulty of extraction from panoramic X-ray or cone beam CT images alone; the prediction is made comprehensively from information such as the patient's age, BMI, morphology, and gender. The AI model of this study likewise predicts the time to extract the wisdom tooth by considering both radiographic images and the patient's clinical data, rather than relying on a panoramic image alone. This can be of great help to novice dentists when estimating the difficulty and time before wisdom tooth extraction, or when deciding whether to refer the patient to a specialist rather than extracting. Conclusions We proposed a concatenated model combining a convolutional neural network (CNN) using an X-ray image (panoramic view) and a multilayer perceptron (MLP) using the patient's clinical data to predict the time to extract a mandibular third molar. This concatenated model accurately predicted the extraction time of the third molar in actual clinical practice.
The Roles of Several Residues of Escherichia coli DNA Photolyase in the Highly Efficient Photo-Repair of Cyclobutane Pyrimidine Dimers Escherichia coli DNA photolyase is an enzyme that repairs the major kind of UV-induced lesion in DNA, the cyclobutane pyrimidine dimer (CPD), utilizing 350–450 nm light as an energy source. The enzyme has very high photo-repair efficiency (the quantum yield of the reaction is ~0.85), significantly higher than that of many model compounds that mimic photolyase. This suggests that some residues of the protein play important roles in the photo-repair of CPD. In this paper, we focus on several such residues and discuss their roles in catalysis by reviewing the existing literature and some hypotheses. Introduction The sun gives warmth and light to the living beings on the earth. However, the ultraviolet (UV) radiation in sunlight induces lesions in DNA. These UV-induced lesions block replication and transcription in living cells and cause growth delay, mutagenesis, or lethal effects in organisms [1]. In order to survive under sunlight, organisms have evolved several repair mechanisms to resist the harmfulness of UV, and direct reversal by DNA photolyases is one of them. There are two types of DNA photolyases, CPD photolyases and (6-4) photolyases, which respectively reverse the two major UV-induced lesions in DNA, cyclobutane pyrimidine dimers (CPDs) and (6-4) photoproducts, utilizing blue or near-UV light (350-450 nm) as an energy source [1][2][3][4]. CPD photolyases can be further categorized into two subclasses, class I (microbial) and class II (animal and plant), based on their amino acid sequence similarity [3,5,6]. Flavin adenine dinucleotide (FAD) is the catalytic cofactor of all photolyases [3], and a second cofactor, usually a derivative of folate, deazaflavin, or flavin, acts as a photoantenna to increase the repair efficiency of the enzymes under limiting light conditions [3,[7][8][9]. The repair reactions are proposed to proceed through a photon-induced electron transfer mechanism, which is supported by many model compound studies. However, the quantum yields (Φ = 0.7-0.98) for the repair of pyrimidine dimers by DNA photolyases are significantly higher [3,[10][11][12][13] than those of model compounds (Φ = 0.016-0.4) [14][15][16]. These results indicate that some amino acid residues of photolyases play important roles in the repair reactions, and Escherichia coli DNA photolyase is a representative example. By reviewing the existing literature and some hypotheses, we discuss the roles of some residues of E. coli photolyase in this highly efficient catalysis. This paper aims to provide further insight into the catalytic mechanism of the enzyme. Escherichia coli DNA Photolyase Escherichia coli DNA photolyase is a class I CPD photolyase [17], containing 471 amino acids [18,19] and two cofactors, FAD [20] and a folate derivative, 5,10-methenyltetrahydropteroylpolyglutamate (MTHF) [7]. The enzyme was found in the 1950s by Rupert et al. [21]. Its gene was first cloned by Sancar et al. [18,19], which solved the problem that low expression of the gene in cells had prevented high yields of pure enzyme for research. During the following years, the enzyme has been extensively studied. The physiological form of the enzyme contains a fully reduced FAD (FADH−) that is required for its activity both in vivo and in vitro [22].
It binds a CPD in DNA independently of light [17] and flips the dimer out of the double helix into the active-site cavity to make a stable enzyme-substrate complex [23][24][25][26]. The light-dependent catalytic reaction is proposed to proceed through these steps: FADH− is excited directly by a photon or by the photoexcited MTHF cofactor and then transfers an electron to the CPD to generate a charge-separated radical pair (FADH• + CPD•−); the CPD radical anion then cleaves, and the excess electron returns to FADH• to restore the reduced form and close the catalytic photocycle [3,11,22,[27][28][29]. By techniques such as time-resolved spectroscopy, laser flash photolysis [30][31][32][33][34][35], and transient electron paramagnetic resonance [36,37], this photon-induced electron transfer mechanism has been substantiated. However, the roles of the amino acid residues in the steps of this highly efficient enzymatic reaction, such as substrate docking and splitting, electron transfer, and intermediate stabilization, need further investigation. Trp277: A Residue for CPD Docking and Splitting E. coli DNA photolyase contains 15 tryptophan residues. Trp277 lies in a highly conserved region, Trp277-Tyr281, which is considered important for DNA binding [38]. Site-directed mutagenesis studies found that when Trp277 was replaced with arginine or glutamate, the binding affinity for the CPD substrate was lower by 300- or 1000-fold, respectively, although the photochemical properties and the quantum yields for catalysis (at irradiation wavelengths of 366 nm and 384 nm) of the mutants were indistinguishable from those of the wild-type enzyme [38]. Later, it was discovered that Trp277 can also directly and efficiently repair CPD under 280 nm light [39]. These results revealed that Trp277 is crucial for substrate binding and, under certain conditions, also acts as a catalytic residue. The crystal structure of E. coli photolyase (Protein Data Bank entry 1DNP) shows a positively charged groove on the surface of the protein, which might interact with the DNA backbone, and a hydrophobic cavity at the center. The cavity has the right dimensions to hold a cis,syn CPD, and Trp277 forms one of its side walls [40] (Figure 1(a)). It is proposed that photolyase binds a DNA chain containing a CPD, flips the dimer out into the cavity, and that Trp277 stacks with the 5′ side of the CPD through π-π interaction [23][24][25][26][38]. This is confirmed by the crystal structure of the complex of a CPD-like lesion in DNA with the photolyase from Anacystis nidulans (Synechococcus sp.) (Protein Data Bank entry 1TEZ) [41]. There is another tryptophan, Trp384, in the cavity, forming a wedge with Trp277 (Figure 1(a)) [40]. From examination of the cocrystal structure of Thermus thermophilus photolyase with a thymine, it was proposed that the CPD might be sandwiched by these two tryptophans [42], with its 3′ side stacking with Trp384 in a similar manner. However, from the structure 1TEZ, it is concluded that the 3′ side stacks with a methionine residue, Met345, rather than Trp384. Met345 is discussed in the next section. Met345: A Residue Discriminating CPD Photolyases within the Photolyase-Cryptochrome Superfamily A methionine residue in the active cavity of Saccharomyces cerevisiae photolyase, which corresponds to Met345 of E. coli photolyase, was predicted to interact with the CPD [24].
From the structure of the complex of Anacystis nidulans photolyase with a CPD-like lesion, it was confirmed that Met345 stacks with the 3′ side of the CPD [41] (Figure 1(a)). Methionine is a sulfur-containing amino acid. From studies of the crystal structures of many proteins, Morgan and coworkers proposed that sulfur atoms might interact with aromatic rings through the so-called sulfur-π interaction [43][44][45][46][47][48][49][50][51]. Although the mechanism of this interaction is still controversial, it does exist. For example, in the structure of the flavodoxin of Clostridium beijerinckii (Protein Data Bank entry 5ULL), a methionine residue is located near the xylene-ring side of the flavin cofactor [52]; the conformations are much like those of Met345 and the 3′ side of the CPD (Figure 1(b)). The sulfur-π stack might also contribute to substrate-binding affinity. In addition, this interaction, together with that between Trp277 and the 5′ side of the CPD, might have some effect on substrate splitting. The electron transfer from excited FADH− to the CPD is now considered to proceed through a direct pathway [33,53,54]. However, a theoretical calculation based on the CPD-photolyase complex structure shows that indirect electron transfer via protein mediators is as important as direct electron transfer [55]. Electron-tunneling pathway analysis suggested two typical electron-tunneling routes for the electron transfer of photolyase: one an adenine route, and the other through a methionine corresponding to Met345 of E. coli photolyase [55]. It is widely accepted that the electron transfers from FADH− to the 5′ side of the CPD first, then to the 3′ side [3,32]. The pathway by which the excess electron transfers back to FADH• remains unclear. Considering that Met345 is adjacent to the 3′ side of the CPD, we speculate that it might serve as a pathway for the back electron transfer. It is intriguing that Met345 is proposed to be a residue that discriminates CPD photolyases from the rest of the photolyase-cryptochrome superfamily [55]. The superfamily contains CPD photolyases, (6-4) photolyases, and cryptochromes, all of which are flavoproteins. (6-4) photolyases repair the (6-4) photoproduct but not CPD [4]. Cryptochromes play roles in photomorphogenesis in plants and entrain the circadian biological clocks in animals [2,3,56]. Recently, it has been found that some cryptochromes in insects and birds might function as light-activated magnetoreceptors [57][58][59][60][61]. Although these proteins are functionally diverse, they have a relatively high degree of homology. Met345 is conserved in all CPD photolyases, whereas in (6-4) photolyases it is replaced by a histidine, which is also important for catalysis [62]. In cryptochromes, it is replaced by histidine, valine, glutamine, and so forth. It might thus be one of the residues responsible for the functional divergence within the superfamily. Asn378: A Stabilizer of the Neutral FAD Radical Although E. coli photolyase contains reduced FAD in vivo, it is usually purified with FAD in the blue neutral radical form (FADH•). E. coli photolyase is one of the unusual flavoproteins in which this radical is extremely stable [63]. The purified enzyme can hold its radical flavin cofactor unoxidized under aerobic conditions for several days, whereas the free radical hardly exists in aqueous solution because its dismutation is favored. These results indicate that the protein environment strongly stabilizes the radical.
Clostridium beijerinckii flavodoxin is another example that holds a stable radical flavin cofactor. It was proposed that a hydrogen bond between the flavin N(5)H group and the backbone carbonyl oxygen of Gly57 in the flavodoxin is important for modulating the redox potentials of the cofactor and stabilizing the radical form (Figure 1(b)) [64,65]. Interestingly, there is a similar hydrogen bond in the photolyase between the flavin N(5)H group and the side-chain carbonyl of the Asn378 residue (Figure 1(a)). We replaced this asparagine residue with serine and found that the mutant has no stable radical state [66]. Moreover, the catalytic activity of the mutant was lost [66]. These experiments show that Asn378 is crucial both for stabilizing the neutral flavin radical cofactor and for catalysis. This is plausible because the catalytic reaction of CPD splitting proceeds through a radical mechanism: FADH− gives an electron to the CPD and becomes FADH•. If the transient radical intermediate is well stabilized, there is enough time for cleavage of the cyclobutane ring, giving high repair efficiency [33,67]. When the stabilizing effect is disrupted, unwanted back electron transfer might be accelerated, leading to low repair efficiency. Asn378 is a highly conserved residue in photolyases. In most class I CPD photolyases and (6-4) photolyases, the residue is unchanged or replaced with aspartate [6,68]. The residue is also conserved in mammalian cryptochromes [69]. In the plant cryptochrome Arabidopsis thaliana CRY1, it is replaced with aspartate, which is proposed to be responsible for the downshift of the flavin redox potentials that distinguishes the cryptochrome from photolyases [70]. Meanwhile, some insect cryptochromes have replaced the residue with cysteine, which may be responsible for a red anionic radical (FAD•−) state rather than the blue neutral radical state seen in photolyases [69,71]. However, in many class II CPD photolyases, the asparagine residue is not conserved [6,72]. There is evidence that class II CPD photolyases have photochemical properties and FAD-binding environments similar to those of class I CPD and (6-4) photolyases [72]. Thus, a stabilizer near the flavin N(5)H group should also be required in a class II enzyme. By examining a model structure of a class II enzyme (Oryza sativa CPD photolyase) calculated by comparative modeling (http://modbase.compbio.ucsf.edu/modbase-cgi/index.cgi) [72,73], we find that another asparagine residue, Asn421, which is also highly conserved in class II CPD photolyases, might be an alternative candidate for this stabilizing function. From the model structure of Oryza sativa CPD photolyase, it is of interest that the residue corresponding to Trp277 of E. coli photolyase, considered crucial for substrate binding (vide supra), is not conserved. This indicates that class II CPD photolyases might use a substrate-binding mechanism different from that of the class I group. However, the residues corresponding to Met345 and Trp384 of E. coli photolyase appear to be conserved in the substrate-binding cavity, emphasizing their important roles in substrate binding and/or catalysis. Therefore, to fully uncover the details of substrate binding and FAD usage, a real crystal structure of a class II photolyase is highly awaited [72]. A mutagenesis study on Anacystis nidulans photolyase shows that two residues (corresponding to Trp384 and Gly381 of E.
coli photolyase) are crucial for the kinetic stability of the neutral flavin radical in the enzyme (Figure 1(a)) [74]. In Synechocystis sp. PCC6803 CRY-DASH, these residues are replaced with tyrosine and asparagine, respectively [74]. This difference might also be a reason for the diverse functions of photolyases and cryptochromes. Perspectives Although the first photolyase was discovered more than 50 years ago [21], photolyases still occupy a unique position in biochemistry [1,2]. Photolyases bind the substrate in a light-independent manner, but catalysis is absolutely dependent on light, which makes it possible to analyze the binding and catalysis steps of the enzymes independently [2,3]. Furthermore, the level of substrate in the cell can be controlled easily by simply changing the UV dose, and repair of the bound substrate can be triggered ultrafast by a single light flash. These characteristics make photolyases useful tools for biochemical research, especially in vivo enzymology [2]. Cryptochromes are homologues of photolyases and are now of wide interest for their functions in the circadian clocks of animals, the photomorphogenesis of plants, and the migration of birds and insects, where they may act as light-activated magnetoreceptors [2,3,[56][57][58][59][60][61]. The action mechanisms of cryptochromes are still unclear at present. However, as in photolyases, the flavin radical is also proposed to be crucial for their functions [57-61, 69-71, 74-76]. Thus, further exploration of the photolyase system might give new insights into research on both photolyases and cryptochromes. Summary In this paper, we have discussed some of the important residues of E. coli DNA photolyase. Evidence suggests that they play significant roles in substrate docking and splitting, electron transfer, and intermediate stabilization. Of course, given the limits of our knowledge, there must be many other amino acid residues, not discussed here, that might be even more important in the catalysis of the enzyme. Together, these amino acid residues and the cofactors of the enzyme build a highly efficient system for the photo-repair of cyclobutane pyrimidine dimers in DNA. Further research on this system will not only contribute to understanding its efficient catalytic mechanism but also give new insights for other biochemical research in the future.
Synthesis, Spectroscopic and Biological Investigation of a New Ca(II) Complex of Meloxicam as Potential COX-2 Inhibitor Drug development on the basis of coordination compounds provides versatile structural and functional properties compared to other organic compounds. In the present study, a new Ca(II) complex of meloxicam was synthesized and characterized by elemental analysis, FT-IR, UV–Vis, 13C NMR, SEM–EDX, powder XRD and thermal analysis (TGA). The Ca(II) complex was investigated for its in vitro and in vivo biological activities and by in silico docking analysis against COX-1 and COX-2. The spectral analysis indicates that meloxicam acts as a deprotonated bidentate ligand in the complex (coordinated to the metal atom through the amide oxygen and the nitrogen atom of the thiazolyl ring). SEM–EDX and powder XRD analysis depicted a crystalline morphology of the Ca(II) complex with a crystallite size of 32.86 nm. The in vitro biological activities were evaluated by five different antioxidant methods and a COX inhibition assay, while in vivo activities were evaluated by carrageenan-, histamine- and PGE2-induced paw edema methods and the acetic acid-induced writhing test. The Ca(II) complex showed prominent antioxidant activities and was found to be more selective toward COX-2 (selectivity index 43.77) than COX-1, compared to meloxicam. It exhibited lower toxicity (LD50 = 1000 mg/Kg) and significantly inhibited carrageenan- and PGE2-induced inflammation at 10 mg/Kg (P < 0.05), but no significant effect was observed on histamine-induced inflammation. Moreover, the Ca(II) complex significantly reduced the number of writhes induced by acetic acid (P < 0.05). The in silico molecular docking data revealed that the Ca(II) complex obstructed COX-2 (dock score 6438) more effectively than COX-1 (dock score 5732), compared to meloxicam alone. Introduction In recent times, inflammatory diseases such as autoimmune and infectious diseases have escalated sharply. The management of continually transmitted infectious diseases and various microbial and viral infections, such as multisystem inflammatory syndrome and Coronavirus Disease 2019 (COVID-19), requires effective anti-inflammatory and analgesic drugs to cure post-infection inflammatory events [1,2]. Likewise, severe and prevalent autoimmune diseases such as allergic asthma and rheumatoid arthritis manifest chronic and acute inflammation characterized by pain, edema and redness, and are usually treated with non-steroidal anti-inflammatory drugs (NSAIDs), glucocorticoids and disease-modifying antirheumatic drugs (DMARDs) [3][4][5]. The NSAIDs are widely prescribed, and also used over the counter, to alleviate these conditions by inhibiting the prostaglandin endoperoxide synthase enzyme, also called cyclooxygenase (COX) [6]. Of the two isoforms of the enzyme, designated COX-1 and COX-2, COX-1 performs housekeeping actions and is responsible for maintaining the protective lining of the gastrointestinal tract (GIT), while COX-2 is responsible for the synthesis and migration of pro-inflammatory cytokines to the site of inflammation [7,8]. The NSAIDs are designed to suppress inflammation by obstructing the enzyme COX-2 through receptor analogy, thus preventing the synthesis and accumulation of pro-inflammatory cytokines and macrophages at the site of inflammation [9].
Free radicals triggered by neutrophils and macrophages play a key role in inflammatory reactions, as they enhance the inflammatory response by increasing the production of cytokines and chemokines (in a positive feedback mechanism) and also damage local cells through oxidation and nitration [10,11]. Reactive oxygen species (ROS) production typically increases during the inflammatory process, and it has been shown that some NSAIDs have the potential to interact with reactive species, thereby preventing oxidative damage [12]. Meloxicam (4-hydroxy-2-methyl-N-(5-methyl-1,3-thiazol-2-yl)-1,1-dioxo-1λ6,2-benzothiazine-3-carboxamide, C14H13N3O4S2, Fig. 1) is an NSAID of the oxicam family, often used in inflammatory diseases such as rheumatoid arthritis or osteoarthritis [13]. It is designed to selectively inhibit COX-2; however, the presence of acidic groups in its structure still hinders proper management of inflammation [14]. Beyond its anti-inflammatory and analgesic properties, meloxicam has been reported to reduce the formation of singlet oxygen and other reactive oxidants [15]. Given that inflammation has a complex pathogenesis, complexation of meloxicam with a suitable metal ion may increase its anti-radical effectiveness and the efficiency of anti-inflammatory treatment. To date, numerous metal complexes of meloxicam have been synthesized to alleviate inflammation as well as oxidative stress without causing harm to the body. Metal ions perform remarkable roles in biological regulation [16], and their presence can also affect the bioavailability of drugs [17]. Metal complexes have a greater impact on the target tissues than the sole drugs, owing to their synergic action [18]. However, most of the reported metal complexes have greater toxicity, plausibly due to the poisonous nature of the metals used [19]. Therefore, bio-metals are considered more desirable for preparing such drugs, as these metals would not pose toxicity in the body [20]. The present work was designed to synthesize and characterize a new calcium complex of meloxicam through different spectroscopic techniques. The Ca(II) complex was assessed in vitro by antioxidant and COX inhibition assays and in vivo for anti-inflammatory activity through carrageenan-, histamine- and prostaglandin E2-induced edema, while analgesic efficacy was evaluated by acetic acid-induced writhing. The inhibitory effect on COX-2 was also investigated by in silico molecular docking to rationalize the in vivo anti-inflammatory and analgesic efficiency of the new Ca(II) complex of meloxicam. Synthesis of Ca(II) Complex of Meloxicam The new calcium complex of meloxicam was synthesized following a previously reported protocol with few modifications [21]. Briefly, a 0.5 mM ethanolic solution of anhydrous CaCl2 was treated with a 1 mM ethanolic solution of meloxicam at pH 7. The mixture was stirred overnight and then placed in a water bath at 60°C until the precipitates condensed. The precipitates were filtered, washed with hot ethanol and dried at ambient temperature. Overnight stirring was needed to obtain a good yield of the calcium complex, which is difficult to achieve owing to the inert nature of calcium compared with the reactive metals we reported in our previous work [22]. Characterization UV-visible spectroscopy was performed on a PG Instruments T90+ UV/VIS spectrometer within the range of 200-450 nm using DMSO as the blank. Melting and decomposition temperatures were measured on a Gallenkamp melting point apparatus.
FTIR of the dry sample was performed using a PerkinElmer Spectrum-100 (KBr crystal). Elemental analysis was performed using a Euro EA elemental analyzer, and 13C NMR spectra were recorded on a Bruker Avance 400. Metal contents were obtained through atomic absorption spectrometry and flame photometry, performed with a PerkinElmer AAnalyst-100 and a Sherwood 410 flame photometer, respectively. Field emission scanning electron microscopy with energy-dispersive spectroscopy analysis (FESEM-EDS) of the sample was done with a Nova NanoSEM 450 field emission scanning electron microscope (FESEM). Powder X-ray diffraction (XRD) was carried out with Cu Kα radiation between 2θ = 5° and 50°, with a step size of 0.04°, using a Bruker D8 Discover XRD system. Thermogravimetric analysis (TGA) was performed on an SDT Q600 thermogravimetric analyzer to determine the thermal stability of the metal complex. The sample (0.05 g) was heated from 25 to 1000°C under an N2 atmosphere at a rate of 10°C per minute, and the temperature Tmax (°C) at each degradation step was recorded. The Coats-Redfern equation was used to determine the activation energy from the TGA results [23]. In Vitro Cyclooxygenase Inhibition Assay The COX Inhibitor Screening Assay Kit (Catalog No. 560131, Cayman Chemical, Ann Arbor, MI, USA) was used to measure the cyclooxygenase inhibitory potential of the synthesized Ca(II) meloxicam complex. The assay of ovine COX-1 and human recombinant COX-2 activity directly measures PGF2α produced by SnCl2 reduction of COX-derived PGH2. The prostanoid product is quantified via enzyme immunoassay (EIA) using a broadly specific antibody that binds to all the major prostaglandin compounds. Briefly, a control value was obtained in the absence of the compound. The COX enzyme was mixed with different concentrations of the tested compound and heme and incubated for 10 min at 37°C. The reaction was initiated by adding arachidonic acid, and all tubes were incubated for another 2 min at 37°C. The efficacy of the compound was determined as the concentration causing 50% enzyme inhibition (IC50). The selectivity index (SI) was calculated as IC50(COX-1)/IC50(COX-2) [25,26]. In Vivo Assays Animals All in vivo experiments with the synthesized Ca(II) complex of meloxicam were performed using Sprague-Dawley (SD) rats, 10-12 weeks old (150-200 g), maintained at controlled temperature (25 ± 5°C) and humidity (50 ± 10%) in the institutional animal house. The animals were exposed to a 12 h light/dark cycle and had 24 h free access to autoclaved tap water and pathogen-free feed. International ethical guidelines were followed for the care of laboratory animals to provide them with a healthy and clean environment. Experiments were approved by the Institutional Ethical Committee, University of the Punjab, Lahore (Approval No. D/025/2018, March 07, 2018). Acute Toxicity of Ca(II) Complex of Meloxicam Acute toxicity of the Ca(II) complex was tested, and the safe dose calculated, as per the Organisation for Economic Co-operation and Development (OECD) test guideline 425 [27]. Rats of either sex were randomly divided into eight groups (n = 5) and treated orally with 5, 25, 50, 100, 250, 500, 1000 and 2000 mg/Kg doses of the Ca(II) complex. The control group received CMC (0.5%) at a dose of 10 mL/kg. All animals were observed for any signs of toxicity during the first 4 h, and the number of dead animals was counted after 24 h. The LD50 was calculated by a previously used method [28,29].
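The IC50 and selectivity-index calculation described above can be reproduced from a concentration-inhibition series by fitting a standard four-parameter logistic curve. The sketch below uses SciPy; the dilution series and inhibition values are hypothetical placeholders, not data from the kit:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: % inhibition rising from bottom to top."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

def fit_ic50(conc_uM: np.ndarray, inhibition_pct: np.ndarray) -> float:
    """Fit % inhibition vs concentration; return IC50 in the units of conc."""
    p0 = [0.0, 100.0, np.median(conc_uM), 1.0]  # rough starting guesses
    popt, _ = curve_fit(four_pl, conc_uM, inhibition_pct, p0=p0, maxfev=10000)
    return popt[2]

# Hypothetical eight-point dilution series for each isoform.
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)
cox1_inh = np.array([1, 2, 4, 8, 15, 25, 40, 55], dtype=float)      # placeholder
cox2_inh = np.array([10, 25, 45, 65, 80, 90, 95, 97], dtype=float)  # placeholder

ic50_cox1 = fit_ic50(conc, cox1_inh)
ic50_cox2 = fit_ic50(conc, cox2_inh)
si = ic50_cox1 / ic50_cox2  # SI = IC50(COX-1) / IC50(COX-2)
print(f"IC50 COX-1 = {ic50_cox1:.1f} uM, IC50 COX-2 = {ic50_cox2:.2f} uM, SI = {si:.1f}")
```

As a sanity check, the IC50 values reported later in this study (211.95 μM for COX-1 and 4.84 μM for COX-2) give a ratio of about 43.8, matching the quoted selectivity index of 43.77.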
In Vivo Anti-Inflammatory Activity The carrageenan-induced paw edema method was followed to evaluate the anti-inflammatory effect of meloxicam and its Ca(II) complex [30]. The rats were randomly divided into five groups (n = 6): the carrageenan group; low (5 mg/Kg), medium (10 mg/Kg) and high (20 mg/Kg) dose Ca(II) complex groups; and a standard meloxicam (10 mg/Kg) group [31]. Rats in the carrageenan control group received an oral gavage of 0.5% CMC, while rats in the standard meloxicam and the low, medium and high dose Ca(II) complex groups received the respective compound dosage. All groups then received a 1% carrageenan solution in the sub-plantar region of the right paw. Paw thickness measured by water displacement after 5 min (t0 = 0) gave the initial paw volume, and paw volume was measured again after 1, 2, 3, 4 and 5 h. Histamine- and PGE2-Induced Paw Edema The anti-inflammatory mechanism of the synthesized Ca(II) complex was evaluated using the histamine- and prostaglandin E2 (PGE2)-induced paw edema assays. The SD rats were randomly divided into five groups (n = 6): the histamine or PGE2 group; low (5 mg/Kg), medium (10 mg/Kg) and high (20 mg/Kg) dose Ca(II) complex groups; and a standard meloxicam (10 mg/Kg) group [31]. Rats in the histamine or PGE2 group received 1 mL oral gavage of 0.5% CMC and served as the inflammatory control group, while rats in the standard meloxicam and the low, medium and high dose Ca(II) complex groups received the respective compound dosage. After 1 h, paw edema was induced by sub-plantar injection of 0.1 mL of histamine (1 mg/mL) or prostaglandin E2 (0.01 μg/mL). The paw volume of each rat was measured immediately before, and then 1, 2, 3 and 4 h after, the sub-plantar administration of the inflammatory agents [32]. Acetic Acid-Induced Analgesic Potential The acetic acid-induced writhing test was performed to evaluate the analgesic potential of meloxicam and its Ca(II) complex. The rats were randomly divided into three groups (n = 6): an acetic acid control group (0.5% CMC) and Ca(II) complex (10 mg/Kg)- and meloxicam (10 mg/Kg)-treated groups. Rats in both the meloxicam and the complex group (10 mg/Kg body weight) were dosed orally, as a suspension in 0.5% CMC, after a 16 h fast. One hour after treatment, 0.6% acetic acid (10 mL/Kg) was injected intraperitoneally to induce the characteristic writhing in the rats. The number of writhes occurring between 5 and 25 min after the acetic acid injection was recorded in control and treated animals. The anti-nociceptive activity of the drugs was calculated by the following formula: percentage inhibition of writhing = ((mean writhes of control − mean writhes of test)/mean writhes of control) × 100. In Silico Molecular Docking Study For the in silico molecular docking, the three-dimensional structures of the drugs meloxicam, diclofenac sodium and the calcium complex were prepared and optimized in the Avogadro software [33]. The protein receptors COX-1 and COX-2 were downloaded from the RCSB PDB (IDs: 1CQE and 6COX, respectively). The binding pockets of COX-2 and the amino acids lining the cavity were identified using the DeepSite-PlayMolecule software [34]. The receptor and ligand files were subjected to PatchDock Beta 1.3 for docking [35,36]. The docking scores, the approximate protein-ligand interface areas and the atomic contact energy (ACE) values (kJ/mol) were recorded for the proposed inhibitor ligands. The enzyme-inhibition docking mode was selected, and the root-mean-square deviation of atomic positions was 1.5 [37].
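The percent-inhibition formula above reduces to a one-line computation; here is a minimal sketch with hypothetical writhing counts (the group sizes match the n = 6 design, but the numbers themselves are invented for illustration):

```python
import numpy as np

def pct_inhibition(control_counts, treated_counts) -> float:
    """Percentage inhibition of writhing relative to the control group mean."""
    control_mean = np.mean(control_counts)
    treated_mean = np.mean(treated_counts)
    return (control_mean - treated_mean) / control_mean * 100.0

control = [52, 48, 55, 50, 47, 53]   # hypothetical writhes, acetic acid control
ca_complex = [8, 6, 9, 7, 8, 7]      # hypothetical writhes, Ca(II) complex 10 mg/Kg
print(f"inhibition = {pct_inhibition(control, ca_complex):.1f}%")
```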
The results were analyzed using UCSF Chimera 1.14 [38]. Statistical Analysis Statistical analyses were performed using one-way ANOVA with Tukey's test in GraphPad Prism (version 7.03). Data for the in vivo study are presented as mean ± SEM values. A P value < 0.05 was accepted as statistically significant. Synthesis of Ca(II) Complex of Meloxicam The synthesis of the calcium complex of meloxicam was accomplished by mixing ethanolic solutions of meloxicam and CaCl2 at pH 7 (Fig. 2). The inert nature of calcium required overnight stirring of the mixture; afterward, pale white precipitates were obtained by heating the mixture at 60°C. The UV-visible absorbance peaks shifted from 365 and 275 nm for the ligand to 375 and 280 nm for the Ca(II) complex, respectively. This shift shows the involvement of meloxicam in complexation with calcium (Fig. 3a). The UV-Vis spectra of the complex did not change when the DMSO solutions were kept for 48 h, and no precipitation, turbidity or decomposition was observed even after long storage at room temperature (at least 3 months after preparation). This clearly indicates the stability of the Ca(II) complex. The 13C NMR spectra in DMSO-d6 (δ, ppm) showed suppression of peaks in the C=O and C-OH regions (Fig. 3b). The 13C NMR of meloxicam shows definitive peaks in the region above 150 ppm, as reported in the literature [39]. The diminishment of the peaks in the 156 and 160 ppm ranges reported for meloxicam indicates formation of the complex through these groups [40]. The FTIR spectrum of meloxicam depicted prominent absorption bands at 3282 (sh, s) cm−1, 1614 (sh, s) cm−1, 1550 (s) cm−1, 1342 (s) cm−1 and 1182 (s) cm−1, which can be attributed to the stretching vibrations of ν(N-H)amide, ν(C=O)amide, ν(C=N)thiazolyl ring, ν(SO2)asym and ν(SO2)sym, respectively (Fig. 3c). The FTIR results for the synthesized Ca(II) complex showed no absorption peaks in the 3200-3300 cm−1 region relating to the N-H stretching mode, because the N-H group of meloxicam is involved in a strong intramolecular hydrogen bond to the enolate oxygen. The stretching bands of ν(C=O)amide and ν(C=N)thiazolyl ring are shifted to lower wavenumber in the spectra of the Ca(II) complex, which indicates the coordination of H2mel through these two groups [21,41]. In comparison with the meloxicam spectrum, the two stretching bands of the SO2 group (νas and νs) also shift slightly to lower frequencies (Fig. 3d). Flame emission spectroscopy showed the complete absence of sodium, while atomic absorption spectroscopy showed the incorporation of calcium in the synthesized Ca(II) complex. FESEM and EDS Analysis FESEM was employed to study the surface and structural morphology of the synthesized Ca(II) complex of meloxicam. The SEM images of the Ca(II) complex revealed a perfectly crystalline morphology (Fig. 4a) with rectangular-shaped crystals, while uncoordinated meloxicam has a rock-like morphology with irregularly shaped crystals of various sizes [42]. SEM analysis depicted significant changes in the shape and surface morphology of the Ca(II) complex arising from complexation, which might improve the properties of meloxicam. The cavities in the crystal system could be used for electrostatic drug loading [22]. EDS is used to determine the percentage levels of the elements present in metal complexes, such as C, O, N, S and the respective metal [43]. This result confirms the presence of calcium along with the other elements in the synthesized complex (Fig. 4b).
The data revealed are in good agreement with those of the elemental analysis. Powder XRD Analysis The crystallinity of the Ca(II) complex of meloxicam was evaluated by powder XRD measurement. The X-ray diffractogram of the Ca(II) complex exhibited several sharp peaks at different angles (2θ = 8.04, 9.08, 12.56, 15.04, 16.72, 17.08, 18.28, 19.52, 20.4, 21.96 and 28.4°), suggesting that the synthesized Ca(II) complex exists as a crystalline material (Fig. 5), while the X-ray diffractogram of pure meloxicam exhibits sharp peaks at 2θ = 13.0, 15.0, 18.5 and 26.0° [44]. The diffractograms show that complexation of meloxicam with calcium enhanced its crystallinity. The average crystallite size of the Ca(II) complex (dXRD) was calculated using the Debye-Scherrer equation, D = Kλ/(β cos θ), where D is the particle size, K is a dimensionless shape factor, λ is the X-ray wavelength (0.15406 nm), β is the full width at half maximum (FWHM) of the diffraction peak, and θ is the diffraction angle [45]. The synthesized Ca(II) complex has a crystallite size of 32.86 nm, suggesting that the complex is in a nanocrystalline phase. Thermogravimetric Analysis Thermogravimetric analysis (TGA) is used to obtain information about the thermal stability of new complexes and suggests a general scheme for their thermal decomposition [46]. The TGA results showed that the Ca(II) complex degrades in three steps. A mass loss of 7.34% for water molecules was found in the first degradation step at Tmax = 190°C. In the second step, 48.91% of the mass was lost as ethene and nitrous oxides of the organic ligand at Tmax = 261 and 288°C. The third degradation step was recorded at Tmax = 566°C, owing to the decomposition of the second ligand moiety; CaCO3 was the final decomposition product, with residual carbon of about 0.024 g (Fig. 6). The activation energy (Eα = 323.84 kJ/mol) for the Ca(II) complex was calculated by the Coats-Redfern equation, indicating the high thermal stability of the complex [23]. Antioxidant Assays The free radical scavenging, metal chelating and reducing abilities of meloxicam and its synthesized Ca(II) complex were determined using five different in vitro assays, since more than one method should be used to evaluate the antioxidant capacity/activity of a sample [47]. The radical scavenging, metal chelating and reducing capabilities of the compounds are summarized as IC50 values (Table 1). The DPPH and ABTS radical scavenging assays have been widely used to test the ability of compounds to act as free radical scavengers or hydrogen donors and thus to evaluate antioxidant activity [48]. The ABTS assay is preferable to DPPH, as ABTS is soluble in both water and organic solvents and reacts relatively rapidly with the tested compounds compared to DPPH [49]. The interaction of the tested compounds with the stable free radical DPPH shows their radical scavenging ability. This interaction was found to be concentration- and time-dependent, but the IC50 values of meloxicam and its Ca(II) complex were high (> 2000 μM), while in the ABTS radical scavenging assay the Ca(II) complex showed prominent radical scavenging potential in a concentration-dependent manner compared to meloxicam and the ascorbic acid standard. The IC50 values for ABTS radical scavenging by the Ca(II) complex, meloxicam and standard ascorbic acid showed a decreasing trend with time, and the lowest IC50 values were recorded at 120 min: 44.8, 82.22 and 104.12 μM, respectively.
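The Scherrer calculation above is easy to sanity-check numerically. The sketch below computes D for one reflection; since the paper reports only the average size and not per-peak FWHM values, the β used here is a hypothetical input chosen for illustration:

```python
import numpy as np

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta * cos(theta)), beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

# Example: the strong reflection at 2-theta = 16.72 deg with an assumed
# FWHM of 0.24 deg gives a size close to the reported ~33 nm.
print(f"D = {scherrer_size(16.72, 0.24):.1f} nm")
```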
Meloxicam itself has anti-radical properties and has been reported to reduce the formation of reactive oxidants [50], but complexation with calcium enhanced its ABTS radical scavenging properties. The iron chelating activity is based on measuring the absorbance of the iron(II)-phenanthroline complex, which produces a red chromophore with maximum absorbance at 512 nm [51]. The tested compounds act as chelating agents and capture the ferrous ion (Fe2+) before phenanthroline does. This method was used to determine the extent of Fe2+ chelation by meloxicam and its Ca(II) complex. The IC50 values of meloxicam were much lower than those of its Ca(II) complex, showing its greater Fe2+ chelating ability. The higher IC50 value (> 2000 μM at 120 min) of the Ca(II) complex corresponds to its lower chelating ability, which might be expected because meloxicam is already complexed with calcium. It is possible that the meloxicam moieties responsible for chelating Fe2+ are now involved in complexation with the calcium metal and therefore exhibit lower iron chelating activity. In the FeCl3 reducing power assay, meloxicam and its Ca(II) complex reduced potassium ferricyanide (Fe3+) to potassium ferrocyanide (Fe2+), which then reacted with ferric chloride to form a ferric-ferrous complex [52]. In this assay, the IC50 of the Ca(II) complex was lower than that of uncoordinated meloxicam at all time intervals. Lower IC50 values indicate a higher antioxidant power of the Ca(II) complex to reduce Fe3+ to Fe2+. Thus, complexation of meloxicam with calcium enhanced its iron reducing capability but lowered its iron chelating capacity. Similarly, the phospho-molybdenum (PM) assay is based on the reduction of Mo(VI) to Mo(V) by the test compounds, giving a direct approximation of their reducing capacity [53]. Similar results were observed in the PM assay, where the IC50 of the Ca(II) complex was lower than those of meloxicam and ascorbic acid, indicating the higher reducing activity of meloxicam in complexed form. From the overall results, it is concluded that the Ca(II) complex of meloxicam exhibited more potent radical scavenging and reducing abilities in the ABTS, FeCl3 reducing power and phospho-molybdenum assays than uncoordinated meloxicam. The Ca(II) complex did not show promising iron chelating ability, which is itself an indication of complexation between meloxicam and calcium. In Vitro Cyclooxygenase Inhibitor Assay The COX-1/2 inhibitory activities of the synthesized Ca(II) complex were evaluated and compared with free meloxicam using the enzyme immunoassay (EIA) method against ovine COX-1 and human recombinant COX-2. The half-maximal inhibitory concentration (IC50) values calculated from the experimental data are shown in Table 2. The selectivity index was calculated as the ratio IC50(COX-1)/IC50(COX-2). The results obtained for the standard NSAID meloxicam are similar to those described in previous studies [26,54]. The tested Ca(II) complex showed half-maximal inhibition of COX-1 at a concentration of 211.95 μM, while for COX-2 half-maximal inhibition was achieved at concentrations as low as 4.84 μM. The Ca(II) complex and meloxicam both have a potent COX-2 inhibitory effect, with IC50 < 30 μM. Considering the selectivity index as well as the inhibitory potency, the Ca(II) complex proved more selective toward COX-2 than meloxicam.
According to Warner et al., selectivity indices ranging from 5 to 50 are associated with compounds that inhibit the COX-2 isoform markedly more than COX-1 and represent the ideal range for the development of safe and selective COX-2 inhibitors [55]. This Ca(II) complex can be considered a selective COX-2 inhibitor, as it showed a selectivity index (SI) of 43.77 toward COX-2, while meloxicam, with an SI of 10.39, is considered a preferential COX-2 inhibitor [56]. Acute Toxicity of Ca(II) Complex of Meloxicam Before the in vivo evaluation, we assessed the toxicity of the Ca(II) complex to demonstrate its safety and determine a safe dose [27]. The results suggested that the oral LD50 of the Ca(II) complex is 1000 mg/Kg (Table 3, rats 10). The toxicity of the Ca(II) complex appeared to be much lower than that of the reference drug meloxicam, whose oral LD50 threshold is 470 mg/Kg [57,58]. According to the OECD guidelines for acute oral toxicity, an LD50 of > 300-2000 mg/Kg falls in category 4, and hence the drug is considered safe. The extended safety margin of the Ca(II) complex of meloxicam is probably related to the complexation of calcium with meloxicam through the amide oxygen and the thiazolyl ring nitrogen. In Vivo Anti-Inflammatory Activity To evaluate the in vivo anti-inflammatory potential of the synthesized Ca(II) complex at different doses (5, 10, 20 mg/Kg), the carrageenan-induced paw edema method was employed (Fig. 7). Meloxicam and its Ca(II) complex showed a significant anti-inflammatory effect by reducing the paw volume after 2 h, while the paw volume of the carrageenan group rats peaked after 3 h. The Ca(II) complex at 10 mg/Kg showed a significant reduction in paw volume (*P < 0.05), followed by the 20 mg/Kg dose and standard meloxicam, while the Ca(II) complex at 5 mg/Kg exhibited the smallest response. The development of carrageenan-induced edema is biphasic: the first phase occurs within 1 h of carrageenan injection and is attributed to the release of the neurotransmitter molecules histamine and serotonin; the second phase (beyond 1 h) is mediated by an increased release of prostaglandins in the inflamed area, with kinins providing continuity between the two phases [59]. After 3 h, the carrageenan group rats exhibited maximum swelling and redness in the inflamed paw compared to the other treated groups; severity was scored from 0 to 4 (Fig. 8): 0 was assigned to normal; 1 represented minimal inflammation and edema of the injected paw; 2 denoted mild paw inflammation and edema; 3 denoted moderate inflammation and edema; and 4 was assigned to severe inflammation and redness of the paw. Histamine-Induced Paw Edema Histamine-induced inflammation is well established as a valid model for studying paw edema, because histamine evokes the release of neuropeptides and prostaglandins from endothelial cells, leading to hyperalgesia [60,61]. Rats of the histamine-induced paw edema group showed a rise in paw volumes 1, 2 and 3 h after the sub-plantar injection of histamine (Fig. 9a). Rats pretreated with the oral Ca(II) complex or meloxicam showed a slight decrease in paw volumes after 2 h. There was no significant difference in paw volumes among the treated groups, but the Ca(II) complex at 10 mg/Kg showed a prominent response at 2, 3 and 4 h (Fig. 9a). After 2 h, the histamine group rats exhibited maximum swelling and redness in the inflamed paw compared to the other treated groups (Fig. 9b).
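The significance calls quoted above (one-way ANOVA followed by Tukey's test at P < 0.05) can be reproduced outside GraphPad Prism. Here is a minimal sketch with hypothetical paw-volume measurements for three of the groups; the values are invented for illustration, only the n = 6 group size follows the study design:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical paw volumes (mL) at the 3 h time point, n = 6 per group.
carrageenan = [1.62, 1.55, 1.70, 1.58, 1.65, 1.60]
meloxicam   = [1.21, 1.18, 1.25, 1.15, 1.22, 1.19]
ca_complex  = [1.05, 1.10, 1.02, 1.08, 1.06, 1.04]  # Ca(II) complex, 10 mg/Kg

# Global test: is there any difference among the group means?
f_stat, p = f_oneway(carrageenan, meloxicam, ca_complex)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3g}")

# Post hoc: which pairs differ? (Tukey HSD at alpha = 0.05)
values = np.concatenate([carrageenan, meloxicam, ca_complex])
groups = ["carrageenan"] * 6 + ["meloxicam"] * 6 + ["ca_complex"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```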
Inflammation generally proceeds in three stages: an increase in vascular permeability, leukocyte migration, and proliferation of connective tissue. Swelling is the first stage of the inflammatory process. Although the second stage of inflammation does not depend on the first, the therapeutic effects of agents acting on the second stage may be influenced by the first [61]. PGE2-Induced Paw Edema PGE2 is a very important mediator of all types of inflammation and is responsible for increased prostaglandin production in inflamed tissue [62]. Rats of the PGE2-induced paw edema group showed a rise in paw volumes 1 and 2 h after the sub-plantar injection of PGE2, compared with the paw volume of the untreated control group (Fig. 10a). Rats pretreated with the oral Ca(II) complex at 10 mg/Kg showed a significant decrease in paw volumes after 1 h (P < 0.05). All treated groups showed decreased paw volume, but the Ca(II) complex at 10 mg/Kg showed a significant response at 1, 2, 3 and 4 h (Fig. 10a). After 2 h, the PGE2 group rats exhibited maximum swelling and redness in the inflamed paw compared to the other treated groups (Fig. 10b). Our results indicate that the Ca(II) complex inhibited PGE2-induced paw edema more potently than meloxicam at a similar dose (10 mg/Kg). In the prostaglandin (PGE2) biosynthesis pathway, COX-2 is the key enzyme that catalyzes the conversion of arachidonic acid to PGE2 [63], and this finding was confirmed by our observation of in vitro COX-2 inhibition by the Ca(II) complex. These results indicate that the inhibitory effect of the Ca(II) complex on carrageenan edema is probably due to PGE2 reduction, since its effect on carrageenan edema was more pronounced than that produced on histamine edema. Acetic Acid-Induced Analgesic Potential The acetic acid-induced writhing test was used to evaluate the analgesic potential of meloxicam and its Ca(II) complex at 10 mg/Kg. This test is a model of peripheral pain that is useful for anti-nociceptive drug development [64]. The Ca(II) complex showed a more significant response (*P < 0.05) and reduced the number of characteristic writhes in rats compared to meloxicam (Fig. 11). Consequently, the highest percentage inhibition of the writhing response (85.26%) was produced by the Ca(II) complex at 10 mg/Kg, while meloxicam showed 67.89% inhibition (Table 4). NSAIDs act by reducing the sensitization of pain receptors caused by prostaglandins at the inflammation site [65]. In Silico Molecular Docking Study The molecular docking study was performed to investigate the binding interactions of the Ca(II) complex at the binding/active sites of the COX-1 and COX-2 proteins (PDB codes: 1CQE and 6COX, respectively) [66,67]. The DeepSite-PlayMolecule software helped to assess the binding sites in the three-dimensional structures of the enzymes (Fig. 12). The molecular docking was performed according to the literature for the COX binding pocket containing THR118, ARG120, GLN192, VAL349, TYR355, GLU364, PHE518, VAL523 (ILE523 in COX-1), ALA527 and MET535. Structural differences between the binding sites of COX-1 and COX-2 provide valuable guidelines for the design of selective COX-2 inhibitors [68,69].
The main difference consists in the existence of a second pocket inside the COX binding site, which is more accessible in COX-2 because ILE523 of COX-1 is replaced by the smaller side-chain residue VAL523; this is linked to conformational changes at TYR355 that open the hydrophobic channel of the additional pocket, which includes LEU352, SER353, TYR355, PHE518, and VAL523 [70]. Meloxicam showed close interactions with VAL116, LEU359, LEU352, SER353, GLU524, TRP387, LEU384, TYR385, GLY526, and ALA527. The two hydrophobic-channel residues LEU352 and SER353 interacted with meloxicam and thus account for its selectivity for COX-2. The docking score, approximate interface area, and ACE of meloxicam were 5150, 643.2, and −281.06 kJ/mol for COX-2, and 4350, 565.1, and −280.03 kJ/mol for COX-1, respectively. For diclofenac sodium the corresponding values were 5054, 621.10, and −121.24 kJ/mol for COX-2 and 4336, 474.7, and −113.5 kJ/mol for COX-1. The Ca(II) complex showed close hydrophobic interactions with LEU80, LEU81, LEU82, LYS83, VAL89, ARG120, TYR122, SER471, LEU472, LYS473, SER119, VAL523, and GLU524, with a docking score, approximate interface area, and ACE of 6438, 857.6, and −289.87 kJ/mol for COX-2 (Fig. 13a and b) and 5732, 767.2, and −193.77 kJ/mol for COX-1 (Fig. 13c and d). The highly negative ACE values indicate a greater potential for formation of the enzyme-inhibitor complex owing to an exothermic energy change. The Ca(II) complex fitted well into the COX-2 binding site, occupying a similar but larger region than meloxicam. This pose might benefit from additional interaction energy due to the relative proximity of VAL116, which can generate an additive effect determining the selectivity for COX-2 [71]. The docking of the Ca(II) complex revealed intricate interactions with the COX-2 channel, including hydrogen bonds and hydrophobic interactions, with the highest docking/binding score compared with meloxicam. Furthermore, it has been observed previously that compounds with a higher selectivity index (SI) for COX-2 than for COX-1 in vitro also show stronger binding interactions (docking scores) with COX-2 than with COX-1 in molecular docking studies [26,32,72-75]. On this basis, compounds with higher docking scores for COX-2 are recognized as selective COX-2 inhibitors.

Conclusion

A new Ca(II) complex of meloxicam was synthesized and investigated by various spectroscopic and biological techniques. The spectral studies showed that the meloxicam ligand is bidentate, coordinating with the Ca(II) metal ion through the oxygen of the amide group and the nitrogen of the thiazolyl ring. Correlation of the experimental data allows an octahedral geometry to be assigned to the Ca(II) complex. SEM and XRD analyses showed the crystalline morphology of the complex and are in good agreement with each other. TGA data revealed that the complex decomposes in three steps, yielding a metal carbonate as the final decomposition product. The new Ca(II) complex showed prominent in vitro antioxidant activity and higher selectivity toward COX-2 than uncoordinated meloxicam. It showed lower toxicity, with an LD50 of 1000 mg/kg, and acts as a potent anti-inflammatory and analgesic agent. It inhibits PGE2-induced inflammation more strongly than histamine-induced inflammation, which supports the anti-inflammatory and analgesic action of the complex.
The molecular docking data provide new insight into COX-2 inhibition by the new Ca(II) complex, which has a higher binding score for COX-2 than for COX-1 and may therefore be considered a potent COX-2 inhibitor. Moreover, the Ca(II)-meloxicam complex can be further explored for its beneficial effects at the molecular level.
Multi-granularity feasibility evaluation method of the partial destructive disassembly for an end-of-life product

Partial destructive disassembly (PDD) is essential for end-of-life products to improve their automatic disassembly efficiency and reduce disassembly cost. A feasibility evaluation of the PDD is the key step in judging whether the PDD can be implemented; however, to our knowledge it has not been studied previously. To deal with this problem, a multi-granularity feasibility evaluation method is proposed. A multi-granularity feasibility evaluation model of the PDD was constructed based on the complex product's hierarchical structure, which not only describes the evaluation indices from the product level to the component level but also presents methods and rules to quantify them. Disassembly entropy was introduced into the coarse-granularity evaluation of the target group, and the fine-grained PDD feasibility index for the component layer was constructed based on the product's failure characteristics. The fine-grained index was calculated with a fuzzy triangular function, and its weighting was obtained with the structure entropy weight method. The results of the evaluation were then fed back to guide the PDD process. Finally, a Passat engine case study illustrates the feasibility and effectiveness of the method.

Introduction

Remanufacturing is an effective method to recover and reuse the residual value of end-of-life (EOL) products [1]. Disassembly is one of the key steps in remanufacturing. According to the depth of disassembly, the dismounting technology, and the degree of automation, disassembly can be divided into several categories, such as serial, parallel, partial destructive, and normal disassembly [2-6]. A reasonable remanufacturing disassembly mode can improve mass disassembly efficiency and reduce remanufacturing cost. Disassembly is mainly divided into normal disassembly, destructive disassembly, and partial destructive disassembly (PDD). Normal disassembly means obtaining the target component without destroying any component. Destructive disassembly means material recovery through violent destruction of the dismantled product. PDD lies between normal and destructive disassembly and aims to dismantle components that cannot be disassembled because of serious failure by destroying connectors or low-value parts. At present, the study of PDD is attracting widespread attention. In PDD, the separation of components is realized by cutting certain connectors, which mainly applies to non-detachable connections such as riveting and welding [7]. Normal disassembly must bypass such connections, whereas partial destructive disassembly combines the advantages of completely destructive and normal disassembly. Partial destructive disassembly therefore has important research significance for improving remanufacturing disassembly efficiency. PDD is an effective and efficient disassembly mode, which is essential for automatic disassembly systems performing batch disassembly. The remanufacturing core often has non-disassemblable connections (e.g., riveting or welding) and structures with severe failures (e.g., corrosion or fractures). However, owing to the limitations of cost and disassembly time, the feasibility of the partial destructive disassembly becomes an important problem to be solved.
Much work has been done on the evaluation of disassemblability, which can be divided into two categories: product-level evaluation and component-level evaluation. At the product level, Du et al. [8] evaluated the feasibility and effectiveness of machine tool disassembly from the technical, economic, and environmental perspectives. Suga et al. [9] used disassembly entropy to evaluate the overall disassemblability of products. Sabaghi et al. [10] used disassembly accessibility, contact surface, coupling means, and number of connections to evaluate the overall disassemblability of products. At the component level, Hander et al. [11] extracted the geometric constraint information of the product and the interference in the disassembly process, established the interference and correlation matrices, and evaluated disassemblability using the interference information. Achillas et al. [12] took the residual life, quantity, quality, disassembly convenience, and environmental impact of parts as evaluation indexes, obtained a global multi-criteria index by weighted evaluation, and determined the disassembly feasibility of parts. Chen et al. [13] calculated the disassembly efficiency of end-of-life products with a fuzzy analytic hierarchy process evaluation method. On this basis, Sun et al. [14] introduced the failure rate of parts and established a comprehensive evaluation model of failure rate and disassembly time.

Evaluation of product disassemblability in partial destructive mode (PDM)

Compared with the feasibility evaluation of product disassemblability in the normal disassembly mode (NDM), the feasibility evaluation in PDM has the advantages of guiding the actual disassembly process and lowering disassembly time and energy consumption, and it has gradually become an active research topic. For the feasibility evaluation of product disassemblability in PDM, the main indices have been constructed from economy and disassembly efficiency. Song et al. [15] compared the cost of disassembly sequence schemes between the partial destructive mode and the normal one. Zhou [16] introduced time, cost, tools, noise, environment, and other indicators to evaluate partial destructive disassembly sequence planning schemes and screen out the optimal disassembly sequence. Zeng et al. [17] evaluated the balance problem of a partial destructive disassembly line in terms of profit and energy consumption. Wang et al. [18] destroyed inexpensive parts and then evaluated a partial destructive disassembly line by the number of stations, smoothness, energy consumption, and disassembly profit. These studies approached the disassembly feasibility evaluation from different perspectives; however, there is no systematic study of a feasibility evaluation system for the partial destructive disassembly of EOL products, and the existing indicator systems are incomplete. Some scholars have proposed combining whole-product and individual-component views when constructing normal-disassembly feasibility indicators. For example, Zhang et al. [19] constructed a multi-granularity hierarchical disassemblability evaluation model at the product level and the design-unit level based on the complex product's hierarchical structure, and Zhu et al. [20] constructed a two-level evaluation index system covering the product and component levels.
The component-level indices reflected the features of the current operations, while the product-level indices reflected the previous disassembly process. Finally, the indices were quantified with a look-up table.

Research motivation

To sum up, there are two main problems in the partial destructive disassembly evaluation of end-of-life products.

1. Disassembly feasibility evaluation studies focus on the evaluation of individual components and neglect the evaluation of the whole product. However, the number of product components is large, which makes the overall evaluation difficult.
2. Most studies focus on an idealized disassembly feasibility analysis rather than considering the influence of failure characteristics on disassembly feasibility. In fact, the serious failure characteristics of components also have an important impact on their disassemblability.

In addition, the current weighting methods are mainly based on the analytic hierarchy process (AHP), so the evaluation results suffer from subjectivity and randomness. To address these problems, we propose a multi-granularity evaluation method for the disassembly feasibility of EOL products in partial destructive mode. The remainder of this article is organized as follows. Section 3 presents the multi-granularity feasibility evaluation model of the partial destructive disassembly for EOL products. Section 4 introduces the feasibility evaluation method of the partial destructive disassembly. In Section 5, the proposed model and method are validated with a case study. Concluding remarks are provided in Section 6.

3 Multi-granularity feasibility evaluation model of partial destructive disassembly

PDD aims to improve disassembly efficiency through partial destructive disassembly operations under the premise of ensuring the integrity of the target component. PDD encompasses two different disassembly methods: normal disassembly and destructive disassembly. The disassembly direction, time, and tools for EOL products differ between these methods; however, these factors alone cannot fully reflect the PDD feasibility of a product. Based on the complex product's hierarchical structure, this study presents a multi-granularity feasibility evaluation model (MGFEM) of PDD for EOL products. The evaluation objects of the MGFEM are the target group and the components, and several evaluation indexes are constructed for the target-group layer and the component layer, as shown in Fig. 1.

Construction of target group

Owing to the large number of components of a complex product, it is difficult to evaluate all of them. Therefore, the target group was defined according to the component failure characteristics as the set of parts with a high failure probability together with high-value parts [20], which reduces the complexity of the whole evaluation.

Expression and quantification of the failure characteristics of components

Uncertain changes take place in the external characteristics and internal materials of EOL products, as shown in Table 1. A failure type matrix M1 was established to describe the failure type of complex product part v_i, which can be expressed by Eq. (1). To facilitate the overall feasibility evaluation of a product's PDD, products are simply classified according to the failure characteristics of complex products. Based on expert evaluations of the various failure characteristics of the parts, the failure characteristics are quantified by Eqs.
(1) and (2), and the result can be expressed by Eq. (3), where e_ij represents the characteristic value of the component failure and r_ij represents the failure type of the component.

Quantization of the coarse-grained index of the target group based on disassembly entropy

Three major factors affecting the PDD feasibility of the target group were selected: the failure rate, the number of joints, and the cost of PDM. In this study, the disassembly entropy introduced by Suga et al. [21] is extended, and the PDD feasibility of the target group is quantitatively assessed using disassembly entropy. The smaller the disassembly entropy, the better the PDD feasibility of the target group.

(1) The failure rate is the proportion of high-failure-probability components in the target group. The difficulty of PDD is related to the failure rate of parts: the higher the failure rate, the more difficult normal disassembly becomes and the greater the feasibility of PDD. The disassembly entropy of the failure rate is given by Eq. (4), where N_i indicates the total number of sub-assemblies in the target group and N_s is the total number of sub-assemblies with a high failure probability in the target group.

(2) In PDD, joints are mainly removed by destruction because joints generally have low value. The larger the number of joints, the more choices there are for destructively disassembling components, and the higher the feasibility of PDD. The disassembly entropy of the number of joints is given by Eq. (5), where N_i indicates the total number of sub-assemblies in the target group and N_k is the total number of connectors of type k (such as screws, bolted joints, welding, riveting, and other non-removable connections) in the target group.

(3) The cost of PDM is the ratio of the sum of the normal disassembly cost and the destructive disassembly cost to the normal disassembly cost. The smaller this cost is, the lower the cost of PDD and the higher the feasibility of PDD. The disassembly entropy of the cost of PDM is given by Eq. (6), where C_n indicates the cost of normal disassembly and C_m is the cost of destructive disassembly.

For complex products, the coarse-grained evaluation of the target group is conducted, and the total disassembly entropy of the target group is then obtained from Eq. (7) as the weighted sum of the three entropies, where k_i (i = 1, 2, 3; k_1 + k_2 + k_3 = 1) is the weight of each disassembly entropy, whose value is determined by its degree of influence on the PDD feasibility of the target group. The total disassembly entropy of the target group comprehensively reflects the overall PDD feasibility evaluation of an EOL product and is the foundation for constructing the MGFEM of an EOL product.
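A minimal sketch of the coarse-grained aggregation and threshold test follows. The three component entropies are treated as precomputed inputs because the closed forms of Eqs. (4)-(6) are not reproduced in this excerpt; the equal weights are an illustrative assumption, while the 0.6 threshold matches the later case study:

```python
# Sketch of the coarse-grained PDD feasibility test (Eq. (7)):
# S = k1*S_failure + k2*S_joints + k3*S_cost, with k1 + k2 + k3 = 1.
# The individual entropies are assumed to be precomputed via Eqs. (4)-(6);
# the input values and equal weights below are illustrative only.

def total_disassembly_entropy(s_failure, s_joints, s_cost,
                              weights=(1/3, 1/3, 1/3)):
    k1, k2, k3 = weights
    assert abs(k1 + k2 + k3 - 1.0) < 1e-9, "weights must sum to 1"
    return k1 * s_failure + k2 * s_joints + k3 * s_cost

def pdd_feasible(total_entropy, threshold=0.6):
    # Smaller entropy means better PDD feasibility; proceed to the
    # fine-grained component evaluation only when S is below the threshold.
    return total_entropy < threshold

S = total_disassembly_entropy(0.5, 0.45, 0.5)
print(S, "-> fine-grained evaluation" if pdd_feasible(S)
      else "-> normal disassembly")
```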
Construction of the fine-grained index of the component level

(Fig. 4: Overall structure and disassembly of a Passat engine. a Overall structure of the Passat engine. b The engine after disassembly.)

The fine-grained evaluation object is the component. In the actual disassembly process, seriously failed components cannot be disassembled normally, and the disassembly efficiency is improved by destroying some components. Therefore, PDD feasibility is related to the failure degree and the disassembly process of the components. The PDD process for a general component is as follows:

(1) Recognition of the positions of PDD and normal-disassembly components. These positions can be evaluated by the index of recognition of PDD components, which covers the recognition of the components' connection types and their failure characteristics. The more serious the failure characteristics are, the easier the components are to recognize, and the stronger the feasibility of PDD is.

(2) Replacement of disassembly tools and alignment with the connector position. The disassembly efficiency of PDD is influenced by switching between tools for the different disassembly methods and by the relative positioning accuracy with respect to the connectors. If different disassembly tools must be exchanged frequently and high positioning accuracy is required, the cost of PDD increases and more energy is consumed. Therefore, the disassembly tool and the positioning accuracy were used to evaluate the disassembly feasibility of PDD.

(3) Destruction or removal of the corresponding connectors. The destruction of connectors involves the disassembly direction, disassembly time, pushing force, and accessibility. When the failure degree of a component is serious, destroying the connection changes the previous disassembly direction, and frequent changes of the disassembly direction reduce the disassembly efficiency and inconvenience the PDD. Serious failure of components also increases the required disassembly force and makes tool access more difficult, which seriously affects the disassembly efficiency of PDD. Therefore, the disassembly direction, disassembly time, disassembly force, and accessibility were used to evaluate the disassembly feasibility of PDD. Removing the corresponding connector involves the component's structural size: when the failure degree is serious and the structure is large, the tool is difficult to grasp and it is difficult to remove the corresponding connectors and the connected components, which affects the disassembly efficiency.

To sum up, eight fine-grained indexes, namely recognition of PDD components, disassembly tool, disassembly direction, disassembly time, pushing force, accessibility, positioning accuracy, and component structural size, were determined based on the failure characteristics, as shown in Fig. 1.

Quantization of the fine-grained index of the component level based on the failure characteristics

The above indexes are related to the component failure degree, so the component fine-grained indexes were quantified based on the failure characteristics, with the expert evaluation method used to quantify the failure characteristics of the components. Because the influence of the component failure characteristics on the fine-grained evaluation indexes is fuzzy, a fuzzy triangular membership function was adopted. An assessment scale set was used to grade the effect of the component failure degree on the fine-grained indexes; the evaluation grades and values are shown in Table 2. Different failure grades influence the PDD feasibility differently. The membership function of each index and failure grade was established according to the experts' experience and the literature [22,23], as shown in Fig. 2. Taking the index of recognition of PDD components as an example (Fig. 2), the membership of each failure-characteristic interval is obtained from the range of the failure characteristic value (Table 2), and the index of recognition of PDD components can then be quantified accordingly.
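As a minimal sketch of this fuzzy quantization step, the snippet below implements a standard triangular membership function and evaluates a failure-characteristic value against grade breakpoints; the breakpoints are hypothetical placeholders, since the intervals of Table 2 are not reproduced in this excerpt:

```python
# Fuzzy triangular quantization of a fine-grained index.
# triangular(x; a, b, c) rises linearly from a to the peak b, then falls to c.
# The grade breakpoints below stand in for the Table 2 intervals, which are
# not reproduced here.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: 0 outside [a, c], peak value 1 at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical membership functions for five failure grades on [0, 1].
GRADES = {
    "very slight":  (0.0, 0.0, 0.25),   # degenerate left shoulder
    "slight":       (0.0, 0.25, 0.5),
    "moderate":     (0.25, 0.5, 0.75),
    "serious":      (0.5, 0.75, 1.0),
    "very serious": (0.75, 1.0, 1.0),   # degenerate right shoulder
}

e = 0.62  # failure characteristic value of a component
memberships = {g: triangular(e, *abc) for g, abc in GRADES.items()}
print(max(memberships, key=memberships.get), memberships)
```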
Weight calculation based on the structure entropy weight method

To avoid distortion of the evaluation results caused by outlying values, we used the structural entropy weight method [24] to determine the weights. The specific steps are as follows.

Step 1. Certain experts are invited to rank the importance of each fine-grained indicator.

Step 2. The fine-grained indicator set is established, the expert opinions are collected, and the expert opinion matrix is constructed, where a_ij (i = 1, 2, ..., k; j = 1, 2, ..., n) indicates the importance ranking given by the ith expert to the jth indicator. The membership of a_ij is b_ij, calculated with the membership transformation formula of [24], where m is the number of transformation parameters; in this paper, m = n + 2. The average awareness b_j of the k experts on index u_j is then determined, followed by the blindness Q_j [24] and the overall awareness x_j of the k experts on index u_j, which together give the evaluation vector of the k experts for the whole indicator set U.

Step 3. According to Eq. (12), the weight of index u_j is obtained from the overall awareness values.

Comprehensive evaluation of fine-grained indicators

For the components, a fine-grained evaluation of the component layer is performed, and the fine-grained comprehensive evaluation result T of the component layer is calculated as the weighted sum T = Σ_{j=1}^{8} α_j u_j (Eq. (15)), where α_j (j = 1, 2, ..., 8) is the weight of each fine-grained index calculated by the structural entropy weight method and u_j is the fine-grained index.

The comprehensive evaluation method

In Section 2, the target group was used as the representative of the whole product level. If the coarse-grained disassembly entropy is less than a given threshold, the PDD feasibility of the product is high, and the fine-grained evaluation of the component layer is then needed to obtain the exact destructively disassemblable components. The overall process is shown in Fig. 3, and the specific steps are as follows.

Step 1. Data (such as product failure information) are obtained from the literature and practical experience.
Step 2. The product failure characteristic matrix is constructed, and the expert evaluation method is used to quantify the failure characteristics and calculate the eigenvalues.
Step 3. The target group is selected according to the failure characteristics, as described in Section 2.
Step 4. The disassembly entropies of the failure rate, the number of joints, and the cost are calculated according to Eqs. (4)-(6), and the total disassembly entropy of the target group is calculated according to Eq. (7).
Step 5. Determine whether the total disassembly entropy S is greater than the user-defined threshold; if it is greater, go to Step 6; otherwise, go to Step 7.
Step 6. PDD is not feasible, so normal disassembly is adopted.
Step 7. The fine-grained indexes are determined based on the failure characteristics.
Step 8. Fuzzy triangular quantization of the fine-grained indexes is performed based on the failure characteristics. The structural entropy weight method is used to calculate the weight of each index, and the fine-grained comprehensive evaluation value T of each component is calculated using Eq. (15).
Step 9. The PDD components are sorted by comprehensive evaluation value in descending order. The disassembly priority of the components is thereby determined, and the results of the evaluation are fed back to guide the PDD process.
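As a minimal sketch of Steps 8-9, assuming the weights α_j from the structural entropy weight method and the quantified indexes u_j are already in hand, the ranking step reduces to a weighted sum followed by a sort; the component names, weights, and index values below are illustrative, not the paper's data:

```python
# Fine-grained comprehensive evaluation (Eq. (15)):
# T = sum(alpha_j * u_j for j = 1..8), followed by descending-order ranking.
# Weights and index values are illustrative placeholders.

ALPHA = [0.14, 0.12, 0.13, 0.12, 0.12, 0.13, 0.12, 0.12]  # sums to 1.0

components = {
    "train valve cover": [0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8],
    "oil pan":           [0.6, 0.5, 0.6, 0.5, 0.6, 0.5, 0.6, 0.5],
    "intake manifold":   [0.4, 0.5, 0.4, 0.5, 0.4, 0.5, 0.4, 0.5],
}

def comprehensive_score(u, alpha=ALPHA):
    assert len(u) == len(alpha) == 8
    return sum(a * x for a, x in zip(alpha, u))

ranking = sorted(components, key=lambda c: comprehensive_score(components[c]),
                 reverse=True)
for c in ranking:  # highest score -> disassemble first
    print(f"{c}: T = {comprehensive_score(components[c]):.3f}")
```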
Case study

To verify the feasibility and effectiveness of this method, we used a Passat engine (Fig. 4) as an example. Assuming that its failure information is known, the failure characteristic matrix of the product can be constructed according to Section 2 through human-computer interaction. Table 3 lists the main components of the engine and the related information. The eigenvalues of the engine part failures are quantified according to Eq. (3), as shown in Eq. (16), and the target group is constructed from Eq. (16); the information on the target group of the Passat engine is shown in Table 4 (target-group component numbers: 1, 4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 24, 27, and 29). According to Table 4, letting N_i = 43, N_s = 34, N_k = 26, C_n = 17.498, and C_s = 0.43, the disassembly entropies of the coarse-grained indexes of the target group are as shown in Table 5. From Table 5, the total disassembly entropy of the target group is S = 0.482; assuming a user-defined threshold of 0.6, S is below the threshold. The smaller the disassembly entropy, the better the PDD feasibility of the EOL product, so the fine-grained evaluation of the component level is then needed to identify the specific components for destructive disassembly. According to Fig. 2, the fine-grained indexes of PDD were quantified based on the component failure characteristics, as shown in Table 6. Furthermore, the weights of the fine-grained indexes were determined with the structural entropy weight method; the steps are shown in Table 7. According to Eq. (15), the fine-grained comprehensive evaluation results of the engine components are shown in Table 8. As Table 8 shows, PDD of the engine is feasible, and the comprehensive score of the train valve cover is the highest, so it should be disassembled preferentially. The train valve cover covers and seals the cylinder head and isolates it from external pollutants such as dirt and humidity; it therefore sits in a harsh environment for a long time, which can result in serious failure. In engineering practice, its destructive disassembly is preferred owing to its low remanufacturing value and the possibility of serious failure, which is consistent with the above evaluation result.

Conclusions

This paper proposes a multi-granularity feasibility evaluation method for EOL products to judge PDD feasibility from the product level to the component level based on disassembly entropies and the product's failure characteristics. The highlights of the proposed method are as follows.

(1) The multi-granularity feasibility evaluation method proposed in this paper is an effective process. The coarse-grained evaluation of the whole product is performed first; if the total disassembly entropy of the target group is greater than the user-defined threshold, the subsequent fine-grained evaluation is unnecessary, which reduces the blindness of the evaluation and improves the evaluation efficiency.

(2) The target group is constructed based on the failure characteristics of the product components to reduce the difficulty of the overall PDD evaluation, which can provide technical support for the PDD feasibility evaluation of large, complex products.

(3) The fine-grained indexes of PDD were determined and quantified based on the failure characteristics and the fuzzy membership functions, which resolves the fuzzy effect of product failure on PDD and greatly improves the efficiency of the PDD feasibility evaluation process.

However, environmental factors, which have an important impact on the feasibility evaluation of partial destructive disassembly, are difficult to quantify and were not considered in the construction of the indicators in this paper; they will be considered in subsequent studies.
Moreover, in the construction of the target group, only high-failure-probability and key components were considered, and the failure characteristic information of the components was difficult to obtain. Software should therefore be developed to extract the failure characteristics of an EOL product automatically and thereby increase the evaluation efficiency in follow-up work.
Hydrophobic nanostructured wood membrane for thermally efficient distillation

Derived from whole wood without structural change, the nanowood membrane provides a new material for efficient water desalination.

This PDF file includes:
Section S1. The hydrophobic silane treatment mechanism
Section S2. The nanowood membrane before and after hydrophobic treatment
Section S3. The natural wood membranes
Section S4. Comparison of the wood membranes and common papers
Section S5. Anisotropic thermal insulation property of the nanowood membrane and the potential benefits
Section S6. Commercial hydrophobic membranes
Section S7. Pore size distribution of the commercial membranes
Section S8. Morphology and pore structure of the commercial membranes
Section S9. Surface hydrophobicity/hydrophilicity
Section S10. DCMD reactors and configurations
Section S11. Water flux of commercial membranes
Section S12. Theoretical thermal conductivity estimation
Section S13. Thermal insulation of commercial membranes
Section S14. Experimental thermal conductivity and membrane permeability
Section S15. Theoretical permeability coefficient and intrinsic permeability
Section S16. Wood membrane durability
Section S17. Wood membrane application and fouling
Fig. S1. Schematics of hydrophobic treatment of wood membranes using a silane coupling agent (50).
Fig. S2. Surface morphologies and pore size distribution of the nanowood membrane before and after hydrophobic treatment.
Fig. S8. Water contact angles of the commercial and hydrophobic natural wood membranes.
Fig. S9. Schematics, images, and control interface of the apparatus for direct contact membrane distillation (DCMD).
Fig. S10. Water flux of the commercial polymeric membranes in DCMD with feed [NaCl (1 g liter−1)] temperature continuously varying between 40° and 60°C and distillate (DI water) temperature of 20°C.
Fig. S11. IR thermographs of the commercial membranes with a heat source temperature of 60°C.
Fig. S12. Temperature plots of anisotropic nanowood and isotropic commercial membranes from a point heat source.
Fig. S13. Comparison of experimentally measured intrinsic (thickness-normalized) membrane permeability of the wood and commercial membranes.
Fig. S14. Water flux of the hydrophobic wood membranes in DCMD with feed [NaCl (1 g liter−1)] and distillate (DI water) temperatures controlled at 60° and 20°C, respectively.
Fig. S15. Water flux of the hydrophobic nanowood membrane in DCMD with feed [NaCl (35 g liter−1) and synthetic wastewater] and distillate (DI water) temperatures controlled at 60° and 20°C, respectively.
Table S1. Comparison between nanowood and common paper.

Section S1. The hydrophobic silane treatment mechanism

Fig. S1. Schematics of hydrophobic treatment of wood membranes using a silane coupling agent (50).

Section S2. The nanowood membrane before and after hydrophobic treatment

Paper is one of the most widely used cellulosic products. During its manufacture, wood chips are separated into cellulose fibers by removing the lignin from the wood, and the degraded cellulose fibers are then randomly mixed together to form an isotropic structure. This differs from the preparation of the nanowood membrane, during which lignin and hemicellulose were removed via in situ chemical treatment and freeze-drying, preserving the anisotropic microstructure and hierarchical alignment (Figure S4A, Figure 2A, Figure S2A). The different processing resulted in distinct properties of the paper and the nanowood membrane.
Owing to the large porosity of the hierarchical structure, the nanowood membrane has a ~10 times lower density (0.13 g cm−3) than paper (1.20 g cm−3). Moreover, owing to the alignment of the cellulose fibers (Table 1 and Section S12), the anisotropic thermal conductivities in the x-, y-, and z-directions were 0.060, 0.030, and 0.030 W m−1 K−1, respectively (39). The modeled dimensions were 40 mm (width) × 40 mm (height) × 0.502 mm (thickness). Software: ANSYS 19.2 Academic.

Section S5. Anisotropic thermal insulation property of the nanowood membrane and the potential benefits

Anisotropic thermal insulation property of the nanowood membrane: When exposed to a local hot spot (60 °C), isotropic thermal properties lead to an evenly distributed, circular temperature contour profile on the front surface (Figure S5A). Owing to the hierarchical structure with aligned nanocellulose fibers (Figure 1), the nanowood membrane exhibits anisotropic thermal properties, with thermal conductivities in the x-, y-, and z-directions of 0.060, 0.030, and 0.030 W m−1 K−1, respectively (39). Heat is easily transported in the x-direction (the water flow direction in MD) because of the directionally high thermal conductivity, which yields an elliptical temperature distribution on the front surface (Figure S5B). The front-side temperature profiles were similar to the back-side profiles. However, the low thermal conductivity in the transverse direction (the vapor transport direction), together with heat dissipation along the aligned fibril direction of higher thermal conductivity, prevented conductive heat loss across the nanowood membrane.

The potential benefits: During MD, heat is continuously lost to the feed side through the membrane via convective or conductive heat transfer, which creates a temperature gradient on the membrane surface in the water flow direction. Since monitoring the membrane surface temperature directly was difficult, we fed our experimentally monitored temperatures (of the feed influent and effluent and of the distillate influent and effluent), together with the obtained thermal conductivity and permeability coefficient, back into a modified Schofield model (Section S14) to estimate the membrane surface temperatures at the feed inlet and outlet (22). Our modeling showed that at 60 °C, the temperature difference between the feed inlet and outlet points was ~0.8 °C for the nanowood membrane, indicating a temperature gradient along the membrane surface on the feed side. Although a gradient of 0.8 °C is small in our experiment, it can be significant in a membrane module with a long flow channel. The anisotropic property of our nanowood membrane, with improved thermal conductivity along the membrane surface (parallel to the fiber growth direction), presumably facilitates heat transfer along the membrane, thereby helping to maintain the temperature gradient across the membrane and promote flux.

Section S12. Theoretical thermal conductivity estimation

The theoretical axial thermal conductivity of a membrane sample was estimated using a simple theoretical model (59), where φ is the porosity of the membrane sample, κg is the thermal conductivity of the gas, and κm is the thermal conductivity of the membrane material. For the single-phase commercial membranes, κm is the thermal conductivity of the material: 0.17 and 0.25 W m−1 K−1 for PP and PTFE, respectively.
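As a rough numerical sketch of the Section S12 estimate, the snippet below combines a porosity-weighted mixing rule with the Knudsen-type confined-gas correction spelled out in the continuation of this section just below. Both closed forms are our assumptions, since the equations themselves are not reproduced in this excerpt, and the example inputs are placeholders:

```python
# Sketch of the theoretical thermal-conductivity estimate. Two assumptions,
# consistent with the variables defined in the text but not reproduced as
# equations here: (i) a porosity-weighted mixing rule
# k_eff = phi*k_g + (1 - phi)*k_m, and (ii) the Knudsen-type confinement
# correction k_g = k_g0 / (1 + 2*alpha*l/D).

K_G0 = 0.026      # W m^-1 K^-1, gas conductivity in free space (from text)
ALPHA = 2.0       # ~2 for air (from text)
MFP_AIR = 70e-9   # m, mean free path of air at ambient conditions (from text)

def confined_gas_conductivity(pore_diameter_m: float) -> float:
    return K_G0 / (1.0 + 2.0 * ALPHA * MFP_AIR / pore_diameter_m)

def effective_conductivity(porosity: float, k_material: float,
                           pore_diameter_m: float) -> float:
    k_g = confined_gas_conductivity(pore_diameter_m)
    return porosity * k_g + (1.0 - porosity) * k_material

# Illustrative call: cellulose-dominated material (k_m ~ 0.23 W/m/K),
# 80% porosity, 500 nm mean pore size -- all placeholder numbers.
print(f"{effective_conductivity(0.80, 0.23, 500e-9):.4f} W m^-1 K^-1")
```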
Since the wood membranes contain multiple phases, including cellulose (0.23 W m−1 K−1), hemicellulose (0.34 W m−1 K−1), and lignin (0.39 W m−1 K−1), the material thermal conductivity was estimated according to their proportions in the wood (39, 47). The thermal conductivity of the gas confined within the nanofibril spaces can be estimated as in (39), where κg,0 = 0.026 W m−1 K−1 is the thermal conductivity of the gas in free space, α ≈ 2 for air, l is the mean free path of the gas, and D is the mean pore size (Table 1). The mean free path of air is ~70 nm at ambient conditions.

Section S13. Thermal insulation of commercial membranes

Because of their small thickness (<200 μm), the thermal conductivity of the commercial membranes cannot be measured directly using LFA. We therefore compared the insulating performance of the commercial membranes under a contact heat source at 60 °C (Figure S12). The backside temperatures of the PP and PTFE membranes were comparable, indicating similar thermal insulation performance, which agreed with the theoretical calculations (Section S12 and Table 1). With the theoretical thermal conductivities thus verified, we compared the insulating performance of the nanowood and commercial membranes at the same thickness (502 μm) using ANSYS with a point heat source at 60 °C (Figure S13). The backside maximum temperatures of the PP and PTFE membranes were comparable, but ~4 °C higher than that of the nanowood membrane, indicating that our nanowood membrane possesses much better thermal insulation, owing to its anisotropy, with reduced conductive heat transfer in the transverse direction (the vapor transfer direction) and heat dissipation along the fiber growth direction (the water flow direction).

Section S14. Experimental thermal conductivity and membrane permeability

In this work, we used a modified Schofield model to estimate the experimental thermal conductivity (22), from which we also determined the water vapor permeability and intrinsic permeability (23, 24) of the wood and commercial membranes (Figure S15) via Eqs. S.3-S.7, where ΔT is the bulk temperature difference, Jv the water flux, β the heat of vaporization of water (slightly dependent on temperature), Bw the membrane permeability coefficient, dp/dT the temperature derivative of the Antoine equation evaluated at the mean membrane temperature (Tm), h the total boundary-layer heat transfer coefficient, ηth the thermal efficiency, p0 the vapor pressure of pure water, R the gas constant (8.314 m3 Pa K−1 mol−1), Am the membrane area, md,in the mass flow rate of the distillate stream entering the distillate flow channel, Cp,d,in the heat capacity of the distillate influent, Cp,d,out the heat capacity of the distillate effluent, Td,out the temperature of the distillate effluent, and Td,in the temperature of the distillate influent. Bw is obtained by fitting the experimental data to Eq. S.3.

Section S15. Theoretical permeability coefficient and intrinsic permeability

Transport of water vapor molecules from the bulk feed solution to the bulk permeate solution may be simulated using the widely adopted "Dusty Gas Model" (DGM), which considers four potential transport mechanisms through the membrane: surface diffusion, viscous diffusion, molecular diffusion, and Knudsen diffusion (23).
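Before the regime discussion that follows, here is a minimal sketch of how the dominant DGM mechanism is typically selected from the ratio of pore radius to mean free path; the cutoffs encoded below are the ones quoted in the following paragraph, and the classification itself is an illustrative simplification:

```python
# Select the dominant DGM transport regime from the ratio of pore radius r to
# the mean free path lambda, using the cutoffs quoted in the next paragraph
# (r < lambda: Knudsen-dominated; r > 100*lambda: ordinary molecular
# diffusion; otherwise a combined/transition regime). The 70 nm mean free
# path of air at ambient conditions is taken from the text.

MFP_AIR = 70e-9  # m

def dgm_regime(pore_radius_m: float, mfp_m: float = MFP_AIR) -> str:
    if pore_radius_m < mfp_m:
        return "Knudsen diffusion dominates"
    if pore_radius_m > 100 * mfp_m:
        return "ordinary (molecular) diffusion dominates"
    return "transition regime: Knudsen + molecular (+ viscous) contributions"

for r in (20e-9, 250e-9, 10e-6):  # illustrative pore radii
    print(f"r = {r:.1e} m -> {dgm_regime(r)}")
```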
Since MD membranes are hydrophobic, the surface-diffusion resistance of water along the membrane pores is high, and surface diffusion is therefore considered negligible (24). Assuming a uniform pore size for a given MD membrane, a modified DGM equation considering viscous, molecular, and Knudsen diffusion is therefore employed (Eq. S.13). With smaller pore sizes (r < λ), Knudsen diffusion dominates (Eq. S.14), while at larger pore sizes (r > 100λ), ordinary molecular diffusion dominates. The membrane permeability can be normalized by the membrane thickness to obtain the intrinsic membrane permeability (kg m−1 s−1 Pa−1), where MH2O is the molecular weight of water (kg mol−1).

Fig. S13. Comparison of experimentally measured intrinsic (thickness-normalized) membrane permeability of the wood and commercial membranes. Membrane permeability modeled using the DGM is typically lower than the experimentally measured permeability. Modeling conditions: 1 g L−1 NaCl as feed solution (60 °C), DI water as distillate (20 °C).

Section S16. Wood membrane durability

The fabricated nanowood membranes demonstrated good stability of water flux and salt rejection over a 6-hour DCMD experiment (Figure S16). After ~6 hours, the water flux increased and the salt rejection started to decrease, which is most likely caused by membrane wetting: the wood material is made of nanocellulose, which is extremely hydrophilic, and water vapor may be adsorbed by the non-fluorinated cellulose fibers during long-term exposure, leading to wetting. However, the nanowood membrane can be fully restored by thorough rinsing with DI water and ethanol followed by drying at 120 °C for 4 h in a vacuum oven (−80 kPa). Interestingly, the natural wood membrane showed better stability than the nanowood membrane under the experimental conditions, which might be attributed to its lower vapor flux (and thus lower vapor exposure) as well as its better wetting resistance arising from its smaller pore size and higher liquid entry pressure (Table 1). We believe that an improved silane treatment that does not affect the wood structure, such as adding silica nanoparticles, is needed to extend the membrane's longevity.

Section S17. Wood membrane application and fouling

MD is an emerging thermally driven process for desalination (10). Since the hydrophobic membrane blocks the passage of liquid water and its performance is little affected by the feed ionic strength, MD shows great potential for treating highly contaminated and/or high-salinity streams, such as wastewater and seawater (5, 6). However, fouling may occur during the treatment of contaminated water containing foulants, which can block the membrane pores, restrict the passage of targeted resources, degrade the membrane material, and require frequent process interruptions for physical and chemical cleaning of the membrane. We therefore investigated the fouling behavior of our nanowood membrane when treating synthetic seawater and wastewater (Figure S17). The synthetic seawater contained 35 g L−1 NaCl; the synthetic wastewater contained 100 mg L−1 alginate, 75 mg L−1 humic acid, 25 mg L−1 BSA, 1.5 mM CaCl2, and 6.0 mM NaCl, based on secondary effluent quality in California. We operated the experiments for 5 hours and found that the nanowood membrane showed stable flux when treating the synthetic seawater.
However, the flux declined gradually when treating the synthetic wastewater, most likely because of membrane fouling caused by strong hydrophobic-hydrophobic interactions between the membrane surface and the hydrophobic domains of natural organic matter (Figure 4A). Such organic fouling can be exacerbated by divalent ions (e.g., Ca2+ and Mg2+), which promote the coagulation of organic matter on the membrane surface. Because membrane fouling is affected by many factors, such as solution chemistry, temperature, and flux, comprehensive studies should be carried out. Wetted or fouled hydrophobic membranes can usually be restored by rinsing or backwashing with water or chemicals, followed by drying; however, it is unclear how restoring the membrane performance would affect the nanocellulose. Future work should explore the limits of nanowood durability under more extreme operating conditions, especially during chemical cleaning.
Improved electrochemical conversion of CO2 to multicarbon products by using molecular doping

The conversion of CO2 into desirable multicarbon products via the electrochemical reduction reaction holds promise for achieving a circular carbon economy. Here, we report a strategy in which we modify the surface of a bimetallic silver-copper catalyst with aromatic heterocycles such as thiadiazole and triazole derivatives to increase the conversion of CO2 into hydrocarbon molecules. By combining operando Raman and X-ray absorption spectroscopy with electrocatalytic measurements and analysis of the reaction products, we identified that the electron-withdrawing nature of the functional groups orients the reaction pathway towards the production of C2+ species (ethanol and ethylene) and enhances the reaction rate on the surface of the catalyst by adjusting the electronic state of the surface copper atoms. As a result, we achieve a high Faradaic efficiency for C2+ formation of ≈80% and a full-cell energy efficiency of 20.3% with a specific current density of 261.4 mA cm−2 for C2+ products.

The rapid increase in atmospheric carbon dioxide (CO2) levels has motivated the development of carbon capture, utilization, and storage (CCUS) technologies. In this context, the electrochemical reduction of CO2 to hydrocarbons using renewable energy is regarded as an effective way to close the carbon cycle by converting CO2 into chemical precursors or fuels [1,2]. The electrochemical CO2 reduction reaction (CO2RR) toward single-carbon products has achieved enormous progress [3], especially for the production of C1 molecules such as carbon monoxide (CO) or methane (CH4) [4-7]. Copper (Cu) is one of the few transition metals that can efficiently catalyze the electrolysis of CO2 to multicarbon products such as ethylene, ethanol, acetate, and propanol [8]. Because multicarbon products possess higher market value and are more energy dense [1], intensive efforts have been devoted to improving the reaction selectivity towards C2 and C2+ molecules. Strategies for optimizing the Faradaic efficiency towards C2+ species include alloying [9-12], surface doping [13,14], ligand modification [15,16], and interface engineering [17-20]. Designing Cu-based catalysts by adapting concepts from molecular catalysis, in order to finely tailor the behavior of the active sites of metallic surfaces, is a longstanding goal in the controlled design of novel electrocatalytic materials. Increasing the oxidation state of copper has been suggested to improve the CO2RR performance, notably the formation of C2+ species [14,21,22]. Various strategies are being explored to prepare Cuδ+, including controlled oxidation via plasma treatments and doping with boron or halides [14,23-25]. Alternatively, molecular engineering of either the electrolyte or the catalyst surface has recently been proposed for orienting the selectivity of the reaction by stabilizing intermediates, inhibiting proton diffusion, or acting as redox mediators during the electrochemical CO2 reduction reaction (CO2RR) [26-30]. Organic species such as N-aryl pyridinium salts [31,32], imidazole [33-35], thiol [36,37], and cysteamine [38] have been reported as effective levers to tune the reaction selectivity toward the formation of specific products by stabilizing key reaction intermediates.
Functionalization with alkyl chains can also improve CO2RR performance by suppressing the competitive hydrogen evolution reaction (HER) through the creation of hydrophobic regions on the catalyst surface [37,39,40]. Here we present an effective strategy to control the surface oxidation state of bimetallic Ag-Cu electrodes by using functionalization to tune the oxidation state of Cuδ+. By combining Auger and X-ray absorption spectroscopies (XAS), we identified that grafting aromatic heterocyclic functional groups can efficiently dope the Cu surface by withdrawing electrons from the metal, leading to the formation of Cuδ+ species. Compared with pristine non-functionalized and alkyl-functionalized electrodes, the modified electrodes display a clear improvement in reaction rates and in Faradaic efficiency towards C2+ products. Operando Raman spectroscopy and XAS suggest that the presence of Cuδ+ with 0 < δ < 1, due to the p-doping of the Cu surface, favors the formation of adsorbed CO in the atop conformation, a known key intermediate involved in the C-C coupling step associated with the formation of multicarbon products. When assembled in a membrane electrode-assembly electrolyzer, the catalyst delivers a Faradaic efficiency (FE) for C2+ products of 80 ± 1% and a total full-cell C2+ energy efficiency (EE) of 20.3%.

Results

Catalyst design and characterization. We fabricated the functionalized bimetallic catalyst using a two-step strategy based on the controlled electrodeposition of Ag and Cu followed by modification of the catalyst surface via functionalization (Fig. 1a). The Ag-Cu electrodes were prepared by first depositing Ag on gas diffusion electrodes (GDEs) using pulsed electrodeposition. The silver grows as a dendritic fish-bone structure with sharp Ag nanoneedles (Supplementary Fig. 1). The Ag layer was then used as a scaffold for the deposition of copper. The final catalyst structure on the GDE is porous, with Cu preferentially deposited on Ag (Fig. 1b, c and Supplementary Fig. 2). The catalytic performance of pure Cu and Ag-Cu electrodes was systematically investigated (Supplementary Figs. 2 and 3); our results indicated that an appropriate Ag loading enhances the formation of CO, which may further facilitate C2+ production on copper, and that the optimum composition is 15 at.% Ag in Ag-Cu (labeled 15 at.% Ag-Cu). To control the oxidation state of Cu, we functionalized the catalyst with thiol molecules via dip coating. We selected thiadiazole (N2SN) and triazole (N3N) derivatives as electron-deficient functional molecules to react with the catalyst surface [41-44] (Supplementary Fig. 4). For comparison, the bimetallic electrodes were also modified with 1-propanethiol (C3) and cysteamine (C2N) as model short alkyl and alkylamine functional groups (Supplementary Figs. 4 and 5c, d). The modification of the electrode is clearly visible from the change in the water contact angle, which varies between 86° and 129° depending on the nature of the functional group, compared with 84° for the pristine catalyst (Supplementary Fig. 6). To verify the presence of the functional groups, we performed energy-dispersive X-ray spectroscopy (EDS) analyses in an SEM.
The corresponding elemental maps at low magnification show a uniform distribution of S, N, and C on the Ag-Cu electrode. A thin amorphous layer with a thickness of ≈2.5 nm is observed at the catalyst surface under high-resolution TEM, coinciding with an increase of the S signal in the corresponding EDS elemental map (Fig. 1d-f and Supplementary Fig. 7). The existence of an organic layer on the Ag-Cu electrodes is further confirmed by high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) and electron energy-loss spectroscopy (EELS) mapping of carbon. Remarkably, the EELS spectrum at the C-K edge displays fine structural features of carbon linked to heteroatoms at ≈292 eV (Fig. 1g, h and Supplementary Fig. 8). Raman and Fourier-transform infrared (FTIR) spectroscopies were also used to confirm the successful attachment of the functional groups to the catalyst surface (Fig. 1i and Supplementary Fig. 9). The Raman signatures of the different grafted molecules were detected on the surface of the Ag-Cu electrodes, while strong FTIR bands at 1303, 1584, and 1622 cm−1 are present only on the N2SN-, N3N-, and C2N-functionalized Ag-Cu electrodes and are attributed to C-C or C-N stretching, the NH2 scissor mode, and C-N stretching, respectively [45-47] (Supplementary Fig. 9). The successful functionalization with thiadiazole and triazole is further confirmed by deconvolution of the X-ray photoelectron spectra in the S2p and N1s regions (Supplementary Figs. 10b, c). The S2p peak was deconvoluted into three doublets at 162.75, 164.23, and 168.31 eV for the S2p3/2, corresponding to the S-H and S-C bonds of both thiadiazole and triazole [48]. Analogously, the N1s spectrum (Supplementary Fig. 10c) can be divided into three components at 398.24, 399.63, and 400.70 eV, reflecting the existence of N-N, C-N, and N-H bonds on the surface of the functionalized electrodes. The presence of crystalline Ag and Cu on the gas diffusion electrode was further confirmed by the X-ray diffraction patterns; the presence of distinct peaks from the Ag and Cu facets agrees with the absence of an alloyed structure in the bimetallic catalyst (Supplementary Fig. 11). To clarify the orientation of the aromatic heterocycles on the catalyst surface, we carried out density functional theory (DFT) calculations to estimate the total energy and the binding energy of thiadiazole on Cu using a five-layer Cu(111) slab model (Supplementary Figs. 12 and 13). Among the different configurations tested, the adsorption of thiadiazole is most stable when the N2-N3 nitrogen atoms of the diazole sit on Cu(111), with a binding energy estimated at −1.08 eV, at least 0.37 eV lower than for the other configurations (Supplementary Table 1).

CO2 electroreduction performance in H cell. The functionalized electrodes were electrochemically tested in an H-cell reactor using argon- and CO2-saturated 0.5 M KHCO3 electrolyte solutions. Figure 2a shows that the thiadiazole (N2SN)- and triazole (N3N)-functionalized electrodes exhibit the highest current density and lowest onset potential in CO2-saturated solution. We then evaluated the Faradaic efficiency (FE) using nuclear magnetic resonance (NMR) and gas chromatography (GC) (see details in the Methods section). H2, CO, formate, CH4, and C2+ products were formed on the bimetallic electrode (Supplementary Fig. 14).
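For context, the Faradaic efficiencies discussed below follow from standard electroanalytical bookkeeping. A minimal sketch, using the standard relation FE_i = z_i n_i F / Q rather than any code from the paper; the product amounts are hypothetical:

```python
# Faradaic-efficiency bookkeeping from product analysis:
# FE_i = z_i * n_i * F / Q, where z_i is the number of electrons per molecule
# of product i, n_i the moles of product i (from GC/NMR), F the Faraday
# constant, and Q the total charge passed. Example amounts are hypothetical.

F = 96485.0  # C mol^-1

ELECTRONS_PER_MOLECULE = {
    "H2": 2, "CO": 2, "formate": 2, "CH4": 8,
    "ethylene": 12, "ethanol": 12,
}

def faradaic_efficiencies(moles: dict, charge_C: float) -> dict:
    return {p: 100.0 * ELECTRONS_PER_MOLECULE[p] * n * F / charge_C
            for p, n in moles.items()}

fe = faradaic_efficiencies(
    {"H2": 5.2e-5, "CO": 4.0e-5, "ethylene": 3.0e-5, "ethanol": 2.0e-5},
    charge_C=100.0)  # hypothetical moles for 100 C of charge passed
for product, value in fe.items():
    print(f"{product}: {value:.1f}%")
```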
Remarkably, the Faradaic efficiencies for C1 products and H2, obtained via the CO2RR and HER respectively, decreased after functionalization with triazole and thiadiazole, while the FE for C2+ products sharply increased (Fig. 2b). Ethylene and ethanol are the major C2+ products detected, together with trace amounts of acetate and n-propanol (Supplementary Figs. 14 and 15). The FEs for C2+ on the N2SN- and N3N-functionalized electrodes are estimated at 57.3% and 51.0% at −1.2 V versus the reversible hydrogen electrode (vs. RHE), compared with only 18% for the pristine catalyst, corresponding to enhancements of 3.1- and 2.8-fold, respectively (Fig. 2b). The selectivity towards the formation of C2+ products increases for both the triazole and thiadiazole functional groups (Supplementary Figs. 16a, b). This leads to a clear enhancement of the specific current density for C2+ products (jC2+), up to 5-fold at −1.2 V vs. RHE (Fig. 2c). Conversely, functionalization of the Ag-Cu electrodes with short alkyl or aminoalkyl chains neither suppresses the HER pathway nor improves the CO2RR activity (Fig. 2d). C2N- and C3-modified catalysts clearly display lower CO2RR activities, notably with minimal production of C2+ species and a relatively large FE for H2 evolution. Our results therefore highlight the importance of the nature of the functional groups for the CO2RR performance. To better evaluate the selectivity for C2+ products on the thiadiazole- and triazole-functionalized Ag-Cu electrodes, we calculated the ratio of the FEs for C2+ products and hydrogen (FEC2+/FEH2) (Fig. 2e). Compared with the pristine and alkyl-functionalized electrodes, both the N2SN and N3N functional groups present the largest FEC2+/FEH2 ratios, illustrating that functionalization with aromatic heterocycles efficiently directs the reaction pathway towards C2+ products while suppressing the HER. To obtain a more accurate estimate of the intrinsic CO2RR performance of the functionalized Ag-Cu electrodes, we determined the electrochemically active surface areas of Cu (Cu ECSA) and Ag (Ag ECSA) for the 15 at.% Ag-Cu and N2SN-15 at.% Ag-Cu catalysts using Pb underpotential deposition (Pb UPD) (Supplementary Figs. 17, 18 and Supplementary Table 2). The partial current densities for C2+ products measured in the H-cell were normalized by the Cu ECSA values. Remarkably, we found that the ECSA-normalized partial current density on N2SN-functionalized Ag-Cu is 5.3 mA cm−2, around five times larger than that of the pristine 15 at.% Ag-Cu catalyst (Supplementary Fig. 18). Electrochemical impedance spectroscopy (EIS) measurements were performed to explore the charge-transfer processes on the surfaces of the different electrodes during the electrolysis of CO2. The charge-transfer resistance of the N2SN- and N3N-functionalized electrodes is not substantially perturbed compared with that of the pristine bimetallic catalyst (Supplementary Fig. 19). In contrast, the resistance is significantly larger for the electrodes functionalized with 1-propanethiol and cysteamine, indicating strongly hindered charge transfer, likely due to the pronounced hydrophobicity of the alkyl-functionalized catalyst surface. To gauge the stability of the functionalization, we operated the electrodes at a potential of −1.2 V vs. RHE for more than 20 h in the H-cell reactor while recording the current density and continuously analyzing the reaction products (Supplementary Fig. 20).
The N2SN- and N3N-functionalized electrodes demonstrated stable performance, with current density retentions of 94% and 91%, respectively, sharply improved compared to 78% for pristine Ag-Cu. The FE for C2+ on the N2SN- and N3N-functionalized Ag-Cu electrodes remains as high as 54% and 46.5% after 20 h, which demonstrates that the selectivity of the reaction pathway on the surface of the electrode is not modified during electrolysis. To further confirm the apparent stability of the functionalized electrode, we performed XPS to evaluate the N:Cu ratio after 30 min, 1 h, 24 h, and 100 h. The ratio is found to be virtually constant, suggesting a robust grafting of the functional groups on the catalyst surface (Supplementary Figs. 21, 22 and Supplementary Table 3). (Figure 2 caption, continued: c, j-V plots of the partial current densities for the C2+ products (ethylene and ethanol); d, relationships between the FE for C2+ and the total current density for all the catalysts; e, selectivity for C2+ products over hydrogen based on the ratio of the FEs of C2+ and hydrogen. The error bars in b-e correspond to the standard deviation of three independent measurements.) Ex situ and in situ mechanistic investigations. Next, we sought to explain the fundamental mechanism responsible for the improved CO2RR properties using ex situ X-ray photoelectron spectroscopy (XPS) and operando XAS. XPS was first used to characterize the surface composition and determine the oxidation state of Cu. From the Cu2p region, no significant change of the oxidation state of Cu can be detected for the functionalized catalysts (Fig. 3a, left). For comparison, after exposure to H2O2, the electrodes are clearly oxidized, as confirmed by the appearance of a Cu2p3/2 signal at a binding energy of 934.6 eV and a satellite peak at 942.6 eV, which is attributed to the formation of Cu2+ [48]. Our XPS results confirm that functionalization does not lead to a dramatic modification of the oxidation state of the Cu surface, since there were no evident oxidation peaks in the Cu2p region. It is well known that the small difference in binding energy between Cu1+ and Cu0 makes the precise identification of Cu1+ impossible from the Cu2p region alone [22]. To overcome this limitation, we therefore used the Cu Auger L3M45M45 transition to qualitatively discuss the presence of Cu1+ in functionalized Ag-Cu, as this mode is known to be more sensitive to modifications of the electron density in the d-band of the metals [49,50]. The Cu Auger L3M45M45 transition arises from the L3 (2p3/2) core-hole decay during the Auger process, in which two M45 (3d) electrons are responsible for the formation of a final 3d8 configuration of Cu [51][52][53][54]. The right panel of Fig. 3a presents the two final-state terms, 1G and 3F, split by L-S coupling, whose peak energy positions provide information on the valence configuration of Cu [22,51]. According to previous investigations, the peak energy position of 1G for the different oxidation states of copper is detected at 917.1, 915.8, and 918.0 eV for CuO, Cu2O, and Cu, respectively [51][52][53]. Such differences are mainly due to the modification of the 3d and O2p electron configurations [54]. Compared with Cu0, the 1G peak in copper oxide is downshifted in energy and presents a broader shape, while the 3F peak is solely visible in the case of Cu0 [22,55].
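As a toy illustration of the qualitative argument, the snippet below assigns each measured 1G position to the closest reference value quoted above; the sample peak positions used here are those reported in the following paragraph. In practice the assignment also relies on the presence or absence of the 3F peak to detect Cu0.

```python
# Nearest-reference reading of the Auger 1G peak positions (eV).
REFS = {"Cu0 (metal)": 918.0, "Cu1+ (Cu2O)": 915.8, "Cu2+ (CuO)": 917.1}

def closest_reference(peak_eV):
    # Pick the reference species whose 1G position is nearest the measured one.
    return min(REFS, key=lambda name: abs(REFS[name] - peak_eV))

for sample, peak in {"pristine Ag-Cu": 918.3,
                     "N2SN-Ag-Cu": 915.8,
                     "N3N-Ag-Cu": 916.0}.items():
    print(sample, "->", closest_reference(peak))
# pristine -> Cu0; N2SN and N3N -> closest to Cu1+, consistent with Cu(delta+)
```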
For pristine and C3- and C2N-functionalized Ag-Cu, we observed that the energy positions of the 1G peak are located at 918.3 eV (pristine) and 915.9 eV (C3 and C2N), respectively, while the distinct 3F peak is detected at 918.2 eV for both C3- and C2N-Ag-Cu, in agreement with the existence of Cu0 (Supplementary Table 4). Conversely, in the case of the N2SN and N3N samples, the 1G peak is identified at 915.8 and 916.0 eV, respectively, which is lower than that for Cu0 and Cu2+ and close to that of Cu1+ (915.8 eV). We also note that the 3F peak is visible for both samples, pointing out the presence of Cu0. These results indicate that the valence state of the N2SN and N3N samples may be Cuδ+ with 0 < δ < 1. (Figure 3 caption: a, the different colored shading areas represent the peaks of Cu2p1/2 (blue), Cu2p3/2 (light purple), 1G (pink), and 3F (light green); b, ex situ and operando copper K-edge X-ray absorption near-edge structure (XANES) spectra of pristine and functionalized Ag-Cu electrodes, inset: average oxidation state of copper for the corresponding electrodes; c, operando Cu K-edge XANES spectra of the N2SN-functionalized Ag-Cu electrode during CO2RR, measured after holding the applied potential for 30 min; d, evolution of the Faradaic efficiency for C2+ and H2 measured at −1.2 V vs. RHE with the oxidation state of Cu; e, operando Raman spectra for pristine, C3-, C2N-, N3N-, and N2SN-functionalized Ag-Cu during CO2RR at a fixed potential of −1.2 V vs. RHE, with shading areas at ~280 cm−1 (light green), ~365 cm−1 (pink), and ~2000 cm−1 (blue); the spectra for all other potentials are presented in Supplementary Fig. 26; f, relationship between the FE for C2+ products and the Raman peak areas of the frustrated rotational mode of CO at 280 cm−1, the Cu-CO stretch at 365 cm−1, and the C≡O stretch at 1900-2120 cm−1; g, relationship between the FE for C2+ molecules and the ratio of CO atop to CO bridge on different Ag-Cu electrodes, obtained from the integrated areas of the deconvoluted Raman peaks (Supplementary Fig. 27). The error bars in b, d, f, and g correspond to the standard deviation of three independent measurements.) To precisely evaluate the electronic states of copper on the functionalized Ag-Cu electrodes and eliminate any effect of air exposure, we then performed in situ X-ray absorption near-edge spectroscopy (XANES). The absorption edges of the functionalized catalysts reside between those of copper metal (Cu0) and Cu2O (Cu1+) used as references (Fig. 3b). To better compare the influence of the different functional groups, we estimated the copper oxidation state as a function of the copper K-edge energy shift (Fig. 3b). The oxidation state of copper in the N2SN- and N3N-functionalized Ag-Cu was found to be +0.53 and +0.47, respectively, pointing out the electron-withdrawing properties of the selected heterocycles (Supplementary Table 5). Remarkably, the C3- and C2N-functionalized samples displayed a minimal shift compared with the pristine Ag-Cu electrode and the Cu reference, suggesting that the alkyl groups are not prone to modulate the oxidation state or the coordination environment of Cu. To explore the stability of the electron-withdrawing ability of the grafted heterocycles, we measured the oxidation state of Cu after CO2RR using in situ XANES. After 30 min of operation at −1.2 V vs. RHE in the testing cell, the oxidation state of copper was estimated to be +0.51 (inset of Figs. 3b, c).
This value is similar to that obtained from the freshly prepared samples (+0.53), which demonstrates the stability of the oxidation state of the functionalized Ag-Cu electrodes. Similarly, no obvious shift of the Cu K-edge was observed in the in situ XANES measurements at increasing applied potentials up to −1.2 V vs. RHE, and the spectra virtually overlap. This confirms the robustness of the oxidation state of the Cu thanks to the stable attachment of the functional groups (Fig. 3c and Supplementary Fig. 23). To better understand the role of Cuδ+ in the CO2RR properties, we investigated the influence of the copper oxidation state on the FE for C2+ and H2 (Fig. 3d). Remarkably, we identified a strong correlation between the oxidation state and the FE for C2+, which points out that a larger oxidation state of Cu benefits the CO2RR properties and the formation of C2+ products, in line with recent findings from the literature [51,56]. To finally exclude any hydrophobicity effect on the enhanced selectivity for the formation of C2+ products, we sought to prepare functionalized electrodes with water contact angles similar to those of the pristine Cu counterparts. We identified 1,3,4-thiadiazole-2,5-dithiol (N2SS), which shares the same thiadiazole structure but exhibits a water contact angle of 81°, compared to 83.9° for pristine, non-functionalized Cu. In the H-cell configuration, the Faradaic efficiency for the formation of C2+ molecules on N2SS-Ag-Cu reaches 43.7% at −1.2 V vs. RHE, compared to only 18.3% for Ag-Cu (Supplementary Fig. 24). To further demonstrate that the water contact angle has limited influence on the improved C2+ selectivity, we plotted the Faradaic efficiency as a function of the water contact angle. No clear relationship is observed, emphasizing that the origin of the improved selectivity for C2+ is not primarily due to the surface properties of the Cu electrodes but rather to the electron-withdrawing nature of the aromatic heterocycles, as evidenced by our operando X-ray absorption spectroscopy measurements (Supplementary Figs. 6, 24 and 25). It is well known that the formation of multicarbon products in the CO2RR proceeds via the formation of the *CO intermediate and its subsequent dimerization into *CO-CO or *CO-COH intermediates [57][58][59]. To gain insight into the C-C coupling mechanism on functionalized and pristine Ag-Cu during CO2RR, the surface of the catalysts was probed using operando Raman spectroscopy in order to elucidate the interactions between the catalyst surface and the adsorbed *CO intermediate (Fig. 3e, Supplementary Fig. 26, and Supplementary Table 7). The presence of surface-adsorbed *CO was identified from the vibration modes at ≈280 and ≈365 cm−1 that originate from the Cu-CO frustrated rotation and the Cu-CO stretch, respectively [60,61]. The broad band in the range of 1900-2120 cm−1 was assigned to the C≡O stretch. To confirm that the detected signals are solely due to the CO2RR, the Raman spectra were also recorded using Ar-saturated K2SO4 as a control experiment, and no peaks were detected at these frequencies (Supplementary Fig. 26f). The Raman vibration modes around 1900-2120 cm−1 have recently been the focus of several studies, and there is currently general agreement that the high-frequency region (>2000 cm−1) and the low-frequency region (1900-2000 cm−1) originate from atop-bound CO and bridge-bound CO, respectively.
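The CO(atop)/CO(bridge) ratio used below is obtained by deconvoluting this C≡O band into two components. The sketch that follows fits synthetic data with two Gaussians, one near 1950 cm−1 (bridge) and one near 2060 cm−1 (atop), and integrates their areas; the band positions and fitting windows are assumptions for illustration, and the authors' exact fitting constraints may differ.

```python
# Illustrative two-Gaussian deconvolution of the 1900-2120 cm^-1 C=O band.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return gauss(x, a1, mu1, s1) + gauss(x, a2, mu2, s2)

np.random.seed(0)
x = np.linspace(1900, 2120, 400)
# Synthetic spectrum: bridge-bound CO (~1955 cm^-1) + atop-bound CO (~2060 cm^-1)
y = two_gauss(x, 1.0, 1955, 25, 0.45, 2060, 20) + 0.01 * np.random.randn(x.size)

popt, _ = curve_fit(two_gauss, x, y, p0=[1, 1950, 20, 0.5, 2060, 20])
area_bridge = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)   # a * sigma * sqrt(2*pi)
area_atop = popt[3] * abs(popt[5]) * np.sqrt(2 * np.pi)
print("CO_atop / CO_bridge =", round(area_atop / area_bridge, 2))  # ~0.36
```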
Atop (COtop) and bridge (CObridge) configurations correspond to a CO bound on top of one Cu atom and between two Cu atoms, respectively [50,62,63]. Compared to pristine as well as 1-propanethiol- and cysteamine-functionalized electrodes, the N2SN- and N3N-functionalized Ag-Cu exhibit relatively intense signals at 365 and 1900-2000 cm−1. Our systematic investigations revealed that the intensities in both regions also increase with the overpotential [32] (Supplementary Figs. 26a, b). Importantly, we observed an obvious relationship between the peaks at 365 cm−1 and 1900-2100 cm−1 and the Faradaic efficiency towards the formation of C2+ products (Fig. 3f), with the peak areas fitted following literature procedures [32,50]. These results therefore point out the strong correlation between the density of adsorbed *CO on the catalyst surface and the formation of C-C bonds, in agreement with *CO being the key intermediate involved in the dimerization reaction and the formation of C2+ products. We note that the 1-propanethiol-functionalized Ag-Cu electrodes display the most intense peak at 280 cm−1, whereas no peak is detected at 1900-2120 cm−1. This indicates that the adsorbed *CO is present in neither the COatop nor the CObridge configuration. We speculate that the hydrophobic surface of the 1-propanethiol-functionalized Ag-Cu creates a high energy barrier for protons to reach the surface of the catalyst, which prevents the stabilization of *CO in these bound configurations, as previously proposed for other transition metals [50]. Interestingly, we observed a volcano-shaped relationship between the Faradaic efficiency for C2+ products and the ratio of atop-bound CO to bridge-bound CO on the surface of Ag-Cu (Fig. 3g and Supplementary Fig. 27). The Faradaic efficiency reaches a maximum for a COatop-to-CObridge ratio of 0.4-0.5, corresponding to the thiadiazole- and triazole-functionalized catalysts, while the ratio decreases for 1-propanethiol and increases for pristine and cysteamine, respectively. We hypothesize that the densities of COatop and CObridge on the surface of the catalysts are influenced by the electron-withdrawing ability of the heterocycles, as suggested by the volcano-shaped relationship between the oxidation state of Cu and the COatop-to-CObridge ratio (Supplementary Fig. 28). Overall, our ex situ and operando characterizations of the modified bimetallic catalyst establish an obvious correlation between the electron-withdrawing ability of the functional groups and the oxidation state of Cu, which translates into a larger concentration of adsorbed *CO on the electrode surface and ultimately a higher probability for *CO to dimerize. CO2RR using a membrane electrode-assembly (MEA) flow electrolyzer. To evaluate the potential of our approach for practical applications towards the electrosynthesis of C2+ products, we integrated the different functionalized bimetallic electrodes into 4 cm2 membrane electrode-assembly (MEA) flow electrolyzers (Supplementary Fig. 29). The liquid products synthesized at the cathode were collected using a cold trap connected to the cathode gas outlet. We also analyzed the liquid products in the anolyte to detect liquid products that may have crossed over the membrane electrolyte. We first scrutinized the activity of N2SN-functionalized Ag-Cu in a MEA electrolyzer by flowing Ar (used as a blank experiment) and CO2 in the cathode compartment (Supplementary Fig. 30) and found that the catalyst can convert CO2 when operating in a catholyte-free MEA system.
We then characterized the current-voltage response of all the functionalized catalysts between −2.8 and −4.8 V at a constant CO2 flow of 10 standard cubic centimeters per minute (sccm) (Fig. 4a). The total current for the different Ag-Cu electrodes increased from 4·10−2 A up to over 1.6 A. The N2SN-functionalized electrodes displayed the largest specific current density for C2+ at 261 mA cm−2, together with the maximum FE for C2+ products and the lowest FE for H2, at ~80% and 14%, respectively (Fig. 4b and Supplementary Figs. 31a, 32a). Remarkably, the selectivity for the C2+ products increases together with the electrolysis response when increasing the operating potential of the full cell. The catalytic activity towards the competitive HER concurrently decreases up to −4.55 V (Fig. 4b and Supplementary Fig. 32c). Compared to pristine Ag-Cu, the FE for C2+ products from the N2SN- and N3N-functionalized electrodes demonstrated average enhancements of 3.1- and 2.6-fold, respectively, over the extended range of full-cell potentials (Fig. 4c and Supplementary Fig. 33). To further assess the performance of the functionalized Ag-Cu electrodes in the MEA devices, we calculated the ratio of j(C2+) to j(C1) at the different potentials. We found that Ag-Cu functionalized with thiadiazole displays the largest values, and the ratio reaches a maximum value of ≈10 at a current density of 261.4 mA cm−2 (Supplementary Fig. 34). These results demonstrate the benefit of the controlled functionalization under MEA operating conditions (Supplementary Fig. 29). We also found that the total FE for gaseous products gradually decreased with increasing full-cell voltage, indicating a shift toward the formation of liquid products at high operating potential. The Faradaic efficiencies for ethanol and n-propanol reached 16.5% and 6.1% at a voltage of −4.4 V (Supplementary Fig. 31a). To better understand the influence of operating conditions on the CO2RR performance of the MEA device, we varied the CO2 flow rate from 3 to 100 sccm at a constant full-cell potential of −4.55 V. When using N2SN-functionalized Ag-Cu electrodes, the FE for ethylene peaked at 56% at ~10 sccm (Fig. 4d), together with a sharply reduced FE for H2 of only 15.2%. The selectivity for ethylene rapidly drops to only ~5% at a CO2 flow rate of 3 sccm, suggesting that the CO2 feed is not sufficient to produce enough *CO to dimerize on the surface of the catalyst. The relationships between CO2 flow rates, cell voltages, and Faradaic efficiencies for the main gas products (H2, CO, and C2H4) were explored on N2SN-functionalized Ag-Cu electrodes, and we found that FE(C2H4) decreases with increasing CO2 flow rate; the optimal flow rate is 10 sccm, even when operating at high voltage and high current density (Supplementary Fig. 35). Conversely, the Faradaic efficiency for H2 increases with increasing CO2 flow rate, which further demonstrates that the decrease in the C2+ performance is not caused by an insufficient feed of CO2. We also estimated the full-cell energy efficiency (EEfull-cell) for N2SN-functionalized Ag-Cu at the different operating potentials. Both the FE and EEfull-cell values for C2+ products increased with increasing current density, achieving a maximum FE(C2+) of ≈80 ± 1% and an EEfull-cell of 20.3% at a specific current density larger than 260 mA cm−2 for the production of C2+ (Fig. 4e).
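As a back-of-the-envelope consistency check on the quoted full-cell energy efficiency, the sketch below approximates the C2+ pool as ethylene with a thermodynamic potential of about 0.08 V vs. RHE; that potential is an assumption here, not a value stated in the text.

```python
# Rough check of the quoted EE_full-cell for C2+ production.
E_OX = 1.23     # V, oxygen evolution potential at the anode
E_I = 0.08      # V, assumed CO2 -> C2H4 equilibrium potential (not from the text)
FE_C2 = 0.80    # FE for C2+ at high current density
E_CELL = 4.55   # V, magnitude of the applied full-cell voltage

ee = (E_OX - E_I) * FE_C2 / E_CELL
print(f"EE_full-cell ~ {ee:.1%}")   # ~20.2%, consistent with the quoted 20.3%
```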
By comparing the performance metrics of N2SN-functionalized Ag-Cu with previous literature benchmarks based on MEA devices, we observed that thiadiazole-functionalized Ag-Cu achieves outstanding performance, notably thanks to a high CO2-to-C2+ conversion rate of 785 µmol h−1 cm−2 (Fig. 4f). We finally examined the stability of the N2SN-functionalized Ag-Cu electrodes in a full-cell MEA electrolyzer under continuous operation at a CO2 flow rate of 10 sccm and a cell voltage of −4.55 V. The performance of the cell was found to be stable over 100 h, with an average FE of 51% for ethylene and an average current of around 1.6 A (Fig. 4g). After 100 h, the FE for ethylene and the current were estimated to be 48% and 1.58 A, corresponding to retentions of 94% and 99%, respectively. The stability of the CO2RR properties is further accompanied by a high stability of the catalyst morphology and microstructure (Supplementary Fig. 36). Discussion Our study describes an original and robust molecular engineering strategy to tune the oxidation state of Cu electrodes via functionalization. We identified that strong electron-withdrawing groups based on aromatic heterocycles can effectively orient the pathway of the CO2RR towards the synthesis of C2+ molecules. Functionalization of the surface of a bimetallic Ag-Cu catalyst with thiadiazole and triazole derivatives led to an enhancement of the FE(C2+) up to ≈80 ± 1%, corresponding to FE(C2+)/FE(C1) and FE(C2+)/FE(H2) ratios of 10 and 5.3, respectively. By combining Auger and XANES spectroscopy, we identified that the superior performance towards the CO2-to-C2+ conversion originates from the controlled p-doping of the Cu and the presence of Cuδ+ with 0 < δ < 1. The functionalized Ag-Cu electrodes were found to be stable, which translates into a prolonged production of C2+ products for >100 h. Electrode preparation. Before depositing the catalysts, the gas diffusion electrode (GDE) was treated with sulfuric acid by sonication for 20 min. After the acid treatment, the remaining acid was rinsed off with deionized water for 5 min three times, and the gas diffusion layer was dried at room temperature. To obtain the working electrodes, 15 at.% Ag-Cu catalysts were prepared through a pulse electrodeposition approach under CO2 bubbling conditions. First, electrochemical deposition of the Ag catalyst was performed using a potentiostat (VSP potentiostat from Bio-Logic Science Instruments). Physical characterizations. A field emission scanning electron microscope (TESCAN Mira3) was employed to observe the morphology of the samples. Aberration-corrected high-resolution (scanning) TEM imaging (HR-(S)TEM), energy-dispersive X-ray spectroscopy (EDS), and spatially resolved electron energy-loss spectroscopy (SR-EELS) were performed using a FEI Titan Cubed Themis microscope operated at 80 kV. The Themis is equipped with a double Cs aberration corrector, a monochromator, an X-FEG gun, a super EDS detector, and an Ultra High Resolution Energy Filter (Gatan Quantum ERS), which allows working in Dual-EELS mode. HR-STEM imaging was performed using high-angle annular dark-field (HAADF) and annular dark-field (ADF) detectors. SR-EELS spectra were acquired with the monochromator excited, allowing an energy resolution of 1.1 eV with an energy dispersion of 0.4 eV pixel−1.
X-ray photoelectron spectroscopy (XPS) measurements were carried out on a Thermo Electron ESCALAB 250 system using Al Kα X-ray radiation (1486.6 eV) for excitation. Raman measurements were conducted using a Renishaw inVia Raman microscope and a ×50 objective (Leica) equipped with a 633 nm laser. Operando Raman measurements were carried out using a modified liquid-electrolyte flow cell. The spectra were recorded and processed using the Renishaw WiRE software (version 4.4). An Ag/AgCl electrode and a Pt plate were used as the reference and counter electrodes, respectively. Operando X-ray absorption spectroscopy (XAS). Ex situ X-ray absorption spectra at the copper K-edge and operando XAS measurements at the copper K-edge were collected at the Beijing Synchrotron Radiation Facility (BSRF) on beamline 1W1B and at the SOLEIL synchrotron SAMBA beamline, respectively. Operando XAS measurements of the functionalized Ag-Cu electrodes were obtained using a Si(111) monochromator at the Cu K-edge for energy selection, with a beam size of 1 × 0.5 mm. A 13-channel Ge detector was used to collect the signals in fluorescence mode. An ionization chamber (I0) filled with a mixture of N2/He was used to measure the intensity of the incident radiation, while the measurements in transmission mode used a second ionization chamber (I1) filled with a mixture of N2 and Ar. A modified electrochemical cell was used for the operando XAS measurements. The applied potential was controlled by a VSP potentiostat (Bio-Logic Science Instruments). A platinum wire and an Ag/AgCl electrode (3 M KCl) were used as counter and reference electrodes, respectively. For the XAS studies, 15 at.% Ag-Cu was first electrodeposited on a gas diffusion layer (GDL, Sigracet 22 BB, Fuel Cell Store) used as the gas diffusion electrode (GDE), and the functional solutions were then drop-coated on the catalyst side, while the other side of the GDL was covered with polyamide tape. The GDL was then taped onto a graphite foil, and the electrode was subsequently mounted in an operando cell with the graphite foil acting as the working electrode and window. A 0.5 M solution of KHCO3 was used as the electrolyte for the CO2RR, and the cell was continuously purged with CO2 during the measurements. All measurements were performed at constant potentials of −1.2 V, −1.1 V, −1.0 V, and −0.9 V vs. RHE. Time-resolved spectra were recorded every 30 min until no further changes were observed under CO2RR conditions. Data alignment and normalization of the X-ray absorption near-edge structure (XANES) spectra were conducted using the Athena software.
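The average oxidation states quoted in the Results are typically obtained from a linear calibration of the K-edge energy against reference compounds. The sketch below shows that interpolation step; the reference edge energies used here are placeholders, since the authors' calibration values are not given in the text.

```python
# Sketch of a linear edge-energy calibration for an average Cu oxidation state.
import numpy as np

states = np.array([0.0, 1.0, 2.0])            # Cu foil, Cu2O, CuO references
edges = np.array([8979.0, 8980.6, 8983.2])    # placeholder K-edge energies, eV

def oxidation_state(edge_eV):
    # Linear interpolation of oxidation state vs. edge energy.
    return float(np.interp(edge_eV, edges, states))

# An edge shifted ~0.85 eV above the Cu foil reference maps to ~+0.53
# (illustrative numbers only).
print(round(oxidation_state(8979.85), 2))
```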
To fit the Cu K-edge extended X-ray absorption fine structure (EXAFS) spectra χ(k)k2, an R-space range from Rmin = 1.0 Å to Rmax = 2.1 Å was used for the freshly prepared catalysts, while Rmin = 1.0 Å to Rmax = 3.0 Å was used for the reduced catalysts. A k-range from 3.0 Å−1 to 10.0 Å−1 with k-weightings of 1, 2, and 3 was applied in the Fourier transforms. All fitting parameters, including the coordination numbers N, the interatomic distances R, the disorder factors σ2 for the Cu-O and Cu-Cu paths, as well as the corrections to the photoelectron reference energies ΔE0, were obtained. The S02 factors were set to 0.831. Computational details. All density functional theory (DFT) calculations were carried out with the Vienna Ab-initio Simulation Package (VASP) code using the projector augmented-wave (PAW) method. The exchange-correlation energy was treated using the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional. A plane-wave basis with a kinetic energy cutoff of 500 eV was chosen to expand the electronic wave functions. To investigate the possible binding modes between the functional molecules and the catalyst, a five-layer Cu(111) slab (7.7386 Å × 7.7386 Å), in which the two bottom layers were kept fixed during relaxation, was built with a vacuum space of about 20 Å. For the geometry optimizations, all atoms were fully relaxed to the ground state with the energy and force convergence criteria set to 1.0 × 10−5 eV and 0.01 eV Å−1, and a 3 × 3 × 1 Γ-centered Monkhorst-Pack k-mesh was used to sample the first Brillouin zone. To compare the bond strength between each functional molecule and Cu(111), the adsorption energy (Eads) is calculated using the following formula: Eads = ECu/FM − ECu − EFM, where ECu/FM, ECu, and EFM denote the total electronic energies of the adsorbed system, a clean Cu(111) surface, and the free functional molecule, respectively. Electrochemical measurements in H-cell and MEA configurations. All electrochemical measurements were carried out at ambient temperature and pressure using a VSP electrochemical station from Bio-Logic Science Instruments equipped with a 5 A booster and an FRA32 module. The cell voltages reported in all figures were recorded without iR correction. All the potentials in the H-cell were converted to values with reference to the RHE using: ERHE = EAg/AgCl + E0Ag/AgCl (3 M KCl) + 0.0591 V × pH. In the H-cell configuration, an Ag/AgCl reference electrode (3 M KCl) and a Pt plate were used as the reference and counter electrodes, respectively. The electrolyte consisted of a 0.5 M KHCO3 solution (99.9%, Sigma Aldrich), which was saturated with either CO2 (≥99.998%, Linde) or Ar (5.0, Linde). Prior to any experiment, the electrolyte solutions were saturated by bubbling CO2 or Ar for at least 20 min. The electrochemically active surface area (ECSA) of the different catalysts was determined using Pb underpotential deposition in the H-cell. An Ar-saturated solution of 100 mM HClO4 + 1 mM Pb(ClO4)2 was used as the electrolyte. The working electrode was held at −0.7 V vs. Ag/AgCl for 10 min, and cyclic voltammetry was then recorded between −0.7 and 0.7 V vs. Ag/AgCl at 10 mV s−1. A Pt foil was used as the counter electrode, while Ar (Linde, 99.998%) was continuously supplied to the electrolyte. The ECSA values for Cu and Ag were calculated assuming the deposition of a monolayer of Pb atoms over the Cu and Ag surfaces, with conversion factors of 310 and 260 µC cm−2, respectively [64].
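The ECSA estimate from Pb UPD reduces to dividing the integrated stripping charge by the monolayer charge density given above; a minimal sketch (the stripping charge below is an illustrative input):

```python
# Pb-UPD ECSA estimate: stripping charge / monolayer charge density.
Q_STRIP_UC = 1240.0   # integrated Pb stripping charge on Cu, in microcoulombs
Q_ML_CU = 310.0       # monolayer charge density for Cu, microcoulombs per cm^2

ecsa_cu_cm2 = Q_STRIP_UC / Q_ML_CU
print(f"Cu ECSA = {ecsa_cu_cm2:.1f} cm^2")   # -> 4.0 cm^2
```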
The MEA electrolyzer (Dioxide Materials) comprised the Ag-Cu cathode, a Ti-IrOx mesh anode, and an anion exchange membrane (AEM, Fumasep FAA-3-50, Fuel Cell Store). The anode and cathode flow fields are made of titanium and stainless steel, respectively, with geometric active areas of 4 cm2. The anode was prepared following previous work by depositing IrOx on a titanium support (0.002″ thickness, Fuel Cell Store) with a loading of 2 mg cm−2, using a dip-coating method followed by thermal annealing [65]. The MEA was prepared by hot-pressing the anion exchange membrane (AEM, Fumapem FAA-3-50, Dioxide Materials) between the Ag-Cu cathode and the Ti-IrOx anode. The cell was assembled with flow fields for the anode and the cathode, which were separated by the AEM. Anode and cathode gaskets were used to ensure good sealing of the electrolyzer (Supplementary Fig. 29). A 0.1 M KHCO3 anolyte and humidified CO2 gas were fed to the anode and cathode at constant flow rates of 30 mL min−1 and 10 standard cubic centimeters per minute (sccm), respectively. The voltage of the electrolyzer was progressively increased from −2.8 V in increments of 50 or 100 mV. After 15-20 min of stable operation at constant full-cell potential, the products were collected and analyzed. Quantification of the CO2RR products. The electrochemical data were recorded while simultaneously collecting the CO2RR gas products using an automatic sampler connected to the cathode outlet. A cold trap was used to collect the liquid products upstream of the sampler. For each applied potential, the gas products were collected at least three times at appropriate time intervals. The gas aliquots were then injected continuously into an online gas chromatograph (Agilent Micro GC-490) equipped with a TCD detector and a Molsieve 5A column. Hydrogen and argon (99.9999%) were used as the carrier gases. Liquid products were quantified by 1H NMR spectroscopy (600 MHz Bruker Avance III with a Prodigy TCI cryoprobe) using deionized water with 0.1% (w/w) DSS (sodium trimethylsilylpropanesulfonate) as the internal standard for the quantification of ethanol and formate. A 1D water-suppression sequence with excitation sculpting with gradients (zgesgp) was used for the acquisition (number of scans = 32, delay D1 = 30 s). Owing to liquid product crossover, the FE values of the liquid products were calculated based on the total amount of products collected on the anode and cathode sides during the same period. Stability measurements in the MEA configuration. For the stability test, the MEA electrolyzer was operated at a constant voltage of −4.55 V with a continuous feed of CO2. The gas products were collected at frequent time intervals. The FE values were calculated as the average of three successive injections. As for the liquid products, the total liquid products were collected at the end of the experiments. Faradaic efficiency and energy efficiency calculations.
The Faradaic efficiency (FE) of each gas product was calculated as follows: FEgas,i = [zi × F × gi × v × P0 / (R × T)] / Itotal × 100%. The Faradaic efficiency (FE) of each liquid product was calculated as follows: FEliquid,i = zi × F × li / Qtotal × 100%. The formation rate (Ri) for each species i was calculated as follows: Ri = Qtotal × FEi / (zi × F × t × S). The full-cell energy efficiency (EE) was calculated as follows: EEfull-cell = Σi (1.23 − Ei) × FEi / |Ecell|, where gi represents the volume fraction of gas product i; v represents the gas flow rate at the outlet in sccm; zi represents the number of electrons required to produce one molecule of product i; Itotal represents the total current; li represents the number of moles of liquid product i; and Qtotal represents the charge passed while the liquid products are being collected. P0 = 1.01 × 105 Pa, T = 273.15 K, F = 96,485 C mol−1, and R = 8.314 J mol−1 K−1; t represents the electrolysis time (h); S represents the geometric area of the electrode (cm2); Ei represents the thermodynamic potential (versus RHE) for the CO2RR to species i; and Ecell represents the cell voltage in the two-electrode setup.
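The four quantities above transcribe directly into code. The sketch below follows the symbol definitions given in the text (v in sccm, I in A, Q in C, t in h, S in cm2); the single-species form of the energy efficiency and the example potential for ethylene are assumptions for illustration.

```python
# Direct transcription of the FE, formation-rate, and EE definitions above.
F, RG, P0, T = 96485.0, 8.314, 1.01e5, 273.15

def fe_gas(g_i, v_sccm, z_i, I_total):
    mol_per_s = g_i * (v_sccm * 1e-6 / 60.0) * P0 / (RG * T)
    return z_i * F * mol_per_s / I_total

def fe_liquid(l_i_mol, z_i, Q_total):
    return z_i * F * l_i_mol / Q_total

def formation_rate(FE_i, z_i, Q_total, t_h, S_cm2):
    # mol of species i per hour per cm^2 of geometric electrode area
    return Q_total * FE_i / (z_i * F * t_h * S_cm2)

def ee_full_cell(E_i, FE_i, E_cell):
    # Single-species contribution; sum over products for the total EE.
    return (1.23 - E_i) * FE_i / abs(E_cell)

# Example: assumed C2H4 equilibrium potential of 0.08 V vs. RHE
print(round(ee_full_cell(0.08, 0.80, -4.55), 3))   # -> 0.202
```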
2021-07-26T00:05:43.577Z
2021-06-12T00:00:00.000
{ "year": 2021, "sha1": "5d2a312331835a9d577cb5b53208a41c6d34a835", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-27456-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1008c9f825d7687b3bacb9d6fdb8e9bfab65cbb4", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
139462821
pes2o/s2orc
v3-fos-license
Flexural tests of masonry beam with and without reinforced bar The behaviour of reinforced masonry has been studied experimentally to determine its strength potential. An increase in either the compressive or the tensile strength of masonry is possible due to the presence of rebar or wire mesh. This research was carried out to determine the effect of steel rebar on the flexural strength of reinforced masonry beams made of local brick. Square hollow masonry beams of 330x330 mm, with and without reinforcing bar, were tested in the laboratory to determine the load-deflection curves and bending strength. The rebar was located at the centre of the beam's cross section and left unbonded. The mechanical properties of the masonry's constituents were also determined. It was found that the flexural strength of beams with rebar of 22 mm diameter was 11 times greater than that of the beam without rebar. However, the increase was only 1.6 times when the end connections of the beam to the rebar were weak. The flexural strength of the reinforced masonry beam with 22 mm rebar was 2.7 times greater than that of the beam with a rebar diameter of 16 mm. Introduction Masonry is an element of a building structure consisting of clay bricks or concrete blocks and mortar arranged in a specific pattern. The behaviour of masonry is similar to that of concrete, in that its compressive strength is far greater than its tensile strength. Masonry can similarly be reinforced with reinforcing steel to withstand or reduce tensile loads. Masonry with rebar is known as reinforced masonry, while masonry with prestressed high-strength bars is known as pre-tensioned or post-tensioned masonry. Both reinforced and pre-tensioned masonry are widely seen in building construction as well as in arch bridges in the European region. The use of reinforcing steel in a masonry wall enables it to withstand the tensile forces that occur in the wall and increases the shear capacity of the wall [1]. The addition of reinforcing steel using steel wire or wire mesh can increase the compressive strength and flexural strength. Test results by Pascanawaty et al. [2] on brick walls with wire and wire mesh showed increases in compressive strength of 95% and 65%, respectively. Increased stiffness (EI) was also observed due to the addition of wire and wire mesh, of 4 and 4.7 times, respectively, for masonry tested perpendicular to the bed joint, and 6.5 and 9.2 times for masonry tested parallel to the bed joint. The observed failure pattern appeared to be caused by shear failure of the brick wall. The material properties of masonry are affected by its constituent materials, namely the brick or block unit and the mortar. Research by Budiwati [3] and Rahayu [4] has shown that the compressive strength of a masonry prism is quite high once better-quality brick units and mortar are used. The use of prestressed steel in columns made of brick walls can increase the natural frequency of the columns. A 40 mm diameter prestressed steel bar was placed in the centre of a hollow square column and post-tensioned up to a maximum of 300 kN. Test results by Budiwati [5] on a short column of 1.325 m height showed an increase in the natural frequency from 55 rad/s to 75 rad/s (35%). The rise in frequency is due to the presence of the prestressed steel, as can be seen from the dominance of the resulting steel mode shape. From the above description, it is generally seen that the addition of reinforcing steel or prestressing steel can improve the stiffness of structural elements.
However, the resulting failure pattern of the masonry is governed by its compressive strength. In order to study the effect of rebar on the flexural strength of masonry, experimental research was conducted. Laboratory tests were carried out on clay brick masonry beams with the rebar placed in the centre of the hollow beam's cross section and left unbonded. The results of the beam bending tests are presented as load-deflection curves and crack patterns of the beams. From this research, information related to the quality of clay brick walls and their components, the bending strength of reinforced masonry beams, and the crack patterns of brick wall beams has been obtained. Experimental works The research consisted of experimental four-point bending tests of four clay brick beams: three beams with rebar and one beam without rebar. Along with the beam testing, the components of the masonry were also tested, namely compressive tests of the brick, mortar, concrete, and masonry wallettes. Testing of masonry constituents The testing of clay brick was conducted on ten brick units. Each brick was measured for length, width, and thickness in accordance with SNI 15-0686-1989 [6]. The compressive test was conducted on those ten full-size bricks and also on 70 mm cube bricks. The water absorption of the brick was also determined. The bricks were immersed in water until saturated and then dried in an oven for one day. The brick masses were weighed and used for the calculation of water absorption. Clay brick masonry wallettes were made to test the compressive and flexural strength of the masonry. The compressive strength of the masonry wallettes was tested in accordance with SNI 03-4164-1996 [7], while the bending strength testing refers to SNI 03-4165-1996 [8]. The test specimens were prepared under laboratory conditions and tested at the age of 28 days. Testing of masonry beams The dimensions and cross section of the masonry beams are shown in Fig. 1. The total length of the beams was 1360 mm, with the two ends connected to a concrete block 50 mm thick. The beam cross section was 330x330 mm, and the dimension of the hollow was 130x130 mm (Fig. 2b). The rebar was applied at the centre of the cross section (Fig. 2a). Four beams (Fig. 2c) were constructed, namely a beam with no rebar (BR0), a beam with a rebar diameter of 22 mm (BR22), a beam with a rebar diameter of 16 mm (BR16), and a beam with a rebar diameter of 22 mm with a slack connection (BR22S). Two point loads, 600 mm apart, were applied at the middle of the beam's length. The beams were loaded until failure, and deflections at the centre of the span were measured using dial gauges. Characteristics of the masonry's components The red bricks tested were from Darmasaba Village, Badung Regency. The average dimensions of the bricks were 218 mm length, 102 mm width, and 63 mm thickness. The average compressive strength of the bricks was 17.23 N/mm², and the water absorption was 24%. The standard deviation and coefficient of variation were 1.57 and 9%. According to SNI 15-0686-1989 [6], the brick is characterized as class 50 (with 22% as the maximum standard water absorption requirement for bricks). The unit weight of the brick was 2,001 kg/m³. The compressive strength of the brick determined using 70 mm cubes was 8.71 MPa, lower than that of the brick tested in full size.
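The water absorption procedure described above reduces to a simple mass comparison; a minimal sketch (the masses below are illustrative, not measured values from the paper):

```python
# Water absorption from saturated and oven-dry masses.
def water_absorption(m_saturated_g, m_dry_g):
    return (m_saturated_g - m_dry_g) / m_dry_g * 100.0

# Illustrative masses chosen to reproduce the reported 24% absorption.
print(f"{water_absorption(1860.0, 1500.0):.0f}%")   # -> 24%
```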
However, referring to SNI 15-0686-1989 [6], the value is above the lowest required average compressive strength of 5 N/mm²; according to Eurocode 6 [9], it meets the minimum average compressive strength of 2.5 N/mm² for standard brick used in structural walls; and it satisfies the minimum brick compressive strength of 3 MPa recommended by the Indonesia Earthquake Study [10]. The average compressive strength of the whole red brick (219x102x63 mm) was 17.2 N/mm², twice the compressive strength of the 70 mm cube brick. The strength of the cube brick was tested in reference to the ASTM standard [11], while the whole brick was tested following the SNI standard, which is almost the same as the BS standard [12]. Based on the two test results, the compressive strength of the brick is taken as 8.71 N/mm², and the brick is classified as Class K50. The cement-to-sand ratio of the mortar used was 1:4. The average compressive strength of 10 mortar specimens of 40x40x40 mm was 22.30 N/mm². Regarding its compressive strength, the mortar is classified as class (i) according to BS 5628-1:1992 and as type M mortar according to ASTM C 270 [11]. The concrete cap used in the tested beams had an average compressive strength of 21.8 N/mm², tested using ten cylinders of 150 mm diameter and 300 mm height. Compressive and flexural strength of masonry The compressive test of masonry wallettes conducted in this study refers to the standard specified in SNI 03-4164-1996 [7]. The average maximum load applied was 92.5 kN, so the average compressive strength of the masonry was 1.36 N/mm². The modulus of elasticity of the masonry was 327.3 N/mm², calculated as the secant modulus. The compressive strength obtained was very small compared to the minimum strength of 3 N/mm² in BS 5628-1:1992 [13], determined from the brick and mortar characteristics. The flexural test was conducted on wallettes of a size similar to those used in the compressive test. Calculation of the results refers to BS EN [14]. The average flexural strengths of the masonry wallettes loaded parallel and perpendicular to the bed joint were 0.07 N/mm² and 0.62 N/mm², respectively. The strength of the masonry tested perpendicular to the bed joint was higher than that tested parallel; the flexural strength of the masonry tested parallel was only 12% of that tested perpendicular. The cracks occurred in the area between the brick and the mortar. Flexural tests The testing setup of the beams is shown in Fig. 4. Table 1 shows the loads and the corresponding flexural strengths of the four masonry beams tested. The ratio of the flexural strength of each beam to that of BR0 is also given. The flexural strength obtained for the masonry beam with no bar (BR0) was 0.29 N/mm². The presence of a 22 mm diameter rebar at the cross section of the beam (BR22) resulted in a significant increase in the flexural strength of the beam, to 3.41 N/mm² (11.6 times). This significant increase is due to the rebar being cast into the concrete at both ends of the beam. From the table, it can be seen that applying a rebar diameter of 16 mm at the centre of the cross section (BR16) increased the flexural strength to 1.27 N/mm². A flexural strength of 0.47 N/mm² was found for BR22S, the beam with the slack rebar connection. The rebar was slotted into the hole at the centre of the beam's cross section before testing and was tightened with a washer and nut. This value is only 0.14 times that of the beam with the same rebar diameter cast into the concrete.
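For readers who want the underlying mechanics, the sketch below computes the extreme-fibre bending stress for the hollow square section under four-point bending. The support span is not stated in the paper, so it is an assumed input here, and self-weight is neglected; this is a sketch of the method, not a re-derivation of Table 1.

```python
# Four-point bending stress for the 330x330 hollow square section
# (130x130 hollow), loads 600 mm apart as stated in the paper.
B, b = 330.0, 130.0                  # mm, outer and hollow widths
I = (B**4 - b**4) / 12.0             # mm^4, second moment of area
c = B / 2.0                          # mm, distance to the extreme fibre

def flexural_stress(P_total_N, support_span_mm, load_span_mm=600.0):
    a = (support_span_mm - load_span_mm) / 2.0   # shear-span length, mm
    M = (P_total_N / 2.0) * a                    # constant moment between loads
    return M * c / I                             # N/mm^2

# Illustrative: BR0-like total load with an assumed 1260 mm support span.
# The result depends strongly on the assumed span.
print(round(flexural_stress(5500.0, 1260.0), 2))
```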
However, the addition of the rebar increased the beam's bending strength by 1.6 times compared to the beam without rebar. Load-deflection curves for all beams are shown in Fig. 4. Beam BR22 was the strongest of the four beams, while BR0 was the weakest. The deflection occurring at the bottom side of the beam was recorded for every 1 kN increment of the applied load. The load-deflection curve produced by the BR0 test specimen was the smallest of the four tested beams. The deflection occurring at the midspan of the BR0 beam was 0.7 mm at the maximum applied load of 5.5 kN. For BR16, the maximum deflection was 10 mm, and the corresponding load was 23.75 kN. There is an increase in the load capacity that can be withstood by BR16 compared with the beam without a reinforcing bar (BR0). From Fig. 4, the deflection corresponding to the maximum load of the BR22 beam was 2 mm, while for the BR22S beam the maximum load was 8.75 kN with a corresponding deflection of 4.0 mm. It can be concluded that the effect of an additional rebar in a masonry beam is to increase the stiffness of the beam; the bigger the rebar diameter, the higher the strength of the beam. The bending failure pattern of the masonry beams during loading is shown in Fig. 5. It can be seen that the cracks in the masonry beams were located in the pure bending area. The typical failure pattern was a vertical crack parallel to the applied loading direction, together with a diagonal crack in the brick near the loading area. Because the bed joints of the masonry beam were parallel to the applied load, the cracks naturally occurred in the area between the brick and the mortar, as indicated by the separation between the brick and the mortar. No bricks were broken; only hairline cracks appeared in some areas. The crack patterns that occurred in the beams did not change with the diameter of the reinforcing bar used, including for the beam with no rebar. The different bar diameters affected the maximum loads that caused the beams to fail. Conclusions Based on the results of the experimental study, it can be concluded that the flexural strength of masonry beams is increased by the presence of rebar. The bending strengths are 0.29 MPa, 1.27 MPa, and 3.41 MPa for the beam with no rebar (BR0), the beam with 16 mm diameter rebar (BR16), and the beam with 22 mm diameter rebar (BR22), respectively. The addition of a reinforcing bar to the beam resulted in a flexural strength increase of up to 11 times. However, if the reinforcing bar is not installed well (BR22S), the increase is only 1.6 times compared to BR0. An increase in the diameter of the reinforcing bar from 16 mm to 22 mm increases the flexural strength by 2.7 times. The test results obtained for the masonry beam materials are a compressive strength of 8.71 N/mm² for the brick and 22.30 N/mm² for the mortar. Using these materials results in a masonry compressive strength of 1.36 N/mm². The flexural strengths obtained are 0.07 N/mm² and 0.62 N/mm² for masonry prisms loaded parallel and perpendicular to the bed joint, respectively. Support from the Magister Program of Civil Engineering, Udayana University, is gratefully acknowledged.
2019-04-30T13:09:29.129Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "ee8f7bd95ebdb82657a8126865ae7450e487cfaa", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/25/matecconf_icancee2019_01018.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b23ed17371fdedc7628967e5d7261f0eb3fd022b", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
214113573
pes2o/s2orc
v3-fos-license
Developing a Taxonomy for Success in Commercial Pilot Behaviors Human error has been well studied in aviation. However, less is known about the ways in which human performance maintains and contributes to aviation safety. The lack of data on positive human performance prevents consideration of the full range of human behaviors when making safety and risk management decisions. The concept of resilient performance provides a framework to understand and classify positive human behaviors. Through interviews with commercial airline pilots, this study examined routine airline operations to evaluate the concept of resilient performance and to develop a taxonomy for success. The four enablers of resilient performance (anticipation, learning, responding, and monitoring) were found to be exhaustive but not mutually exclusive. The tenets of resilience theory apply to airline pilot behavior, but operationalizing a taxonomy will require more work. Human error is thought to account for 80% of aviation mishaps (Shappell & Wiegmann, 2001). As a result, human error has been well studied in aviation (Helmreich, 1997; Kontogiannis & Malakis, 2009; Wiegmann & Shappell, 1999). Researchers and practitioners are able to speak a common language due to the development of a well-accepted taxonomy, the Human Factors Analysis and Classification System (HFACS) (Wiegmann & Shappell, 2017). The widespread acceptance of models such as Threat and Error Management (TEM) has helped valuable concepts of human error move into operational settings as diverse as aviation, medicine, and nuclear power (Boy & Schmitt, 2013; Helmreich & Musson, 2000). Most data sources in aviation, such as the Aviation Safety Reporting System (ASRS), the Aviation Safety Action Program (ASAP), and Line Operations Safety Assessments (LOSA), are event- or error-driven, which enables and reinforces the study of error. However, much less is known about how human performance actively builds and enables system safety and efficiency (Holbrook et al., 2019). In complex, high-reliability systems such as aviation, resilience has emerged as a key factor in safe and efficient operations. Resilient performance occurs when a system can "adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions" (Hollnagel, 2011, p. xxxvi). Studying how human performance contributes to system resilience can offer a new perspective on how to improve system performance and safety, and will offer a more complete picture of the role humans currently play in complex systems. As Holbrook et al. (2019) point out, understanding the full range of human contributions to system performance is critical at a time when the role of the human in the aviation system is changing. Safety I and Safety II In its earliest days, aviation safety was reactive, with most safety improvements driven by mishap investigations. With the introduction of incident reporting systems and the development of hazard identification and risk mitigation strategies, aviation safety entered a period of proactive safety management, in which mishap precursors could be identified and mitigated before mishaps occurred. This approach, known as Safety I, concentrates on identifying, trapping, and mitigating error in order to reduce the number of negative outcomes to as low as reasonably practicable (Hollnagel, 2018; ICAO, 2009). A number of challenges emerge with the Safety I approach as systems become safer.
First, data is systematically collected only on operations with errors or negative outcomes, so as operations become safer, a smaller and smaller proportion of actual operations is analyzed (Holbrook et al., 2019). Therefore, opportunities to learn and improve become increasingly limited. Second, the focus on prediction and prevention of negative outcomes does not accommodate unknown or unknowable threats. Third, if safety is measured by the absence of events that are extremely rare, it becomes increasingly difficult to assess the impact of system changes (Holbrook et al., 2019). Finally, it is intuitively obvious that studying failure when you are trying to ensure success tells only part of the story. In a comment first attributed to Marit de Vos of Leiden University, it is as if we are trying to learn about marriage by studying divorce (de Vos, 2018). Data Sources A key tenet of Safety Management Systems is the collection and analysis of safety data, so that the impact of changes to the system can be measured and monitored (ICAO, 2009). Aviation has a rich variety of data sources that drive safety decision making. The Aviation Safety Reporting System allows anonymous reporting of incidents from private and commercial pilots, air traffic controllers, mechanics, dispatchers, and cabin crew (NASA, 2019). The Aviation Safety Action Program fulfills a similar function among air carriers and repair stations (FAA, 2002). The Flight Operational Quality Assurance (FOQA) program collects vast amounts of data on routine flights, which is analyzed by individual operators for exceedances and trends. The Line Operations Safety Assessment program uses expert observation of routine flights to identify threats and errors, based on the Threat and Error Management model (FAA, 2014). Finally, the National Transportation Safety Board (NTSB) and the Federal Aviation Administration (FAA) collect data on aircraft mishaps and publish detailed accident reports and analyses. These data sources primarily focus on errors or incidents, and therefore do not represent the population of routine and ordinary flights. Robust data sources on successful flights are lacking. Therefore, most safety recommendations derive from a non-representative data set, and the routine and successful operations that make up the vast majority of commercial aviation operations are not documented. Human Error The abundance and quality of data concerning human error has made it possible to create a widely accepted taxonomy of human error. The Human Factors Analysis and Classification System was derived from extensive analysis of aviation mishaps and incidents, and has been applied to diverse industries outside aviation (Wiegmann & Shappell, 2017). HFACS has enabled a common language to be used in government, academia, and industry, among researchers as well as practitioners. The ubiquity of HFACS has made it a powerful tool for identifying and addressing problems in human performance. Similarly, ASRS categorizes incidents using a taxonomy that focuses on outcomes and failures in human performance, which has resulted in a body of rich and consistent information about adverse events and errors (NASA, 2019). However, no such common vocabulary exists for discussing successful behaviors that actively contribute to system safety. The lack of a taxonomy to categorize and classify positive behaviors adds to the difficulty of studying successful performance.
Resilience Theory Safety II depends not only upon reliable data sources and a common vocabulary, but also on a theoretical underpinning. Just as models of human error are anchored by theories of human information processing and cognition, models of successful behaviors must be anchored by theories of human and system performance. Accident causation models have typically been linear, leading to the approach that preventing bad outcomes involves preventing or mitigating precursors. However, some mishaps cannot be explained by linear models. Rather, they are the result of a complex interplay of events that affect each other (Woods, 2017). In this model of accident causation, safety results from the ability of a system to accommodate these events. Resilience theory concentrates not on the response to a specific disturbance, but on the system capabilities that allow it to accommodate the disturbance. Resilient performance is thought to be enabled by four key system attributes, specifically, the ability to:
• Anticipate future events or situations
• Monitor both its own performance and environmental factors
• Respond to expected and unexpected events
• Learn from experience
These four abilities form a model of resilient performance based on the underlying theory of resilience (Hollnagel, 2011). Problem The study of error has been instrumental in achieving the safety improvements that commercial aviation has enjoyed. However, the reduction in negative outcomes creates problems for using the Safety I approach to further improve safety. As negative outcomes decline, the proportion of flights studied becomes smaller and less representative. Further, the impact of safety interventions becomes very difficult to assess. Also, a system that concentrates on identifying and mitigating threats may become vulnerable to threats that cannot be predicted. Instead, a Safety II approach is needed that supplements Safety I by examining the qualities that allow a system to respond flexibly to threats and disturbances, both anticipated and unexpected. The Safety I approach has been successful in improving commercial aviation safety, but further safety advances cannot depend only on reducing the occurrence of negative outcomes. Safety in complex systems depends on the ability of the system to accommodate disturbances. System resilience depends upon behaviors that reflect the key attributes of anticipation, monitoring, responding, and learning. However, the error- and event-based reporting approach common in commercial aviation does not fully capture the range of pilot behaviors corresponding to the key attributes of system resilience. As a result, much of the pilot's contribution to system resilience is not measured, and therefore not studied systematically. In order to understand how system resilience is built and maintained, data must be collected on routine successful operations. Currently, aviation has rich data sources and robust taxonomies to study error, but insufficient ways to identify and categorize success. Purpose The purpose of this study is to identify behaviors that increase system resilience in routine commercial airline operations, and to begin to articulate a taxonomy for behaviors that contribute to system resilience. Data on successful routine performance in commercial aviation is not systematically collected or analyzed in a way that allows exploration of the qualities and attributes that enable system resilience. LOSA assesses routine flights, but focuses on error management.
FOQA records information on routine flights, but the data analysis focuses on exceedances rather than on the data that might correspond to corrections that prevent exceedances. These data sources are tremendously valuable, but incomplete for the effort to study the contribution of routine pilot performance to system resilience. This study aims to fill a small part of that gap by studying flights that involved unexpected or unplanned events and exploring the pilot behaviors that contributed to the successful conclusion of those flights. Significance of the Study The Safety I approach of reducing negative outcomes has natural limits as systems become safer. Further improvements in safety and efficiency must come from expanding data sets to include analysis of successful outcomes, in order to understand the antecedents of successful performance as well as the antecedents of unsuccessful performance. Safety II is still in its early stages of acceptance. This study adds to the growing body of literature that uses a Safety II approach to understand the full range of human performance contributions to system resilience. Learning more about successful human behaviors that contribute to system resilience can help training organizations cultivate and enhance resilient performance. Further, with the increase in interest in autonomous systems in aviation, it is vital to understand the human contribution to system performance so that this ability can be accounted for in any new system design. Research Questions Can commercial airline pilot behaviors be classified according to the four key attributes of resilient performance? Can a taxonomy of resilient performance be articulated from investigating airline pilot behaviors in routine operations? Methodology This project used a qualitative, case study approach based on incident debrief interviews with commercial airline pilots. The study was designed to utilize purposeful sampling of the participants' viewpoints and expert opinions regarding their decision-making processes in aviation. Qualitative research is the traditional method for discovering a deeper understanding of a subject in a way that quantitative-only data cannot provide. The interviews were based on the critical incident approach, in which a participant is asked to recall a particular type of event (Hobbs, Cardoza, & Null, 2016). The interview protocol contained a greeting, a description of the purpose of the research, an event prompt, follow-up questions, and space for reflective notes. Using research questions developed by NASA, the researchers developed an open-ended question with follow-up questions to probe for deeper meaning (Holbrook et al., 2019). The interview questions are presented in Appendix A. Institutional Review Board permission was obtained from the sponsoring university prior to any participant recruiting or data collection. To maintain the confidentiality of the participants, all identifying information was redacted from the transcripts. A case study methodology was employed to examine the various aspects of the pilots' thought processes within the theory of resilient performance. This case study was designed to bring the researchers to a deeper understanding of this issue, adding depth to what is already known about this phenomenon. As a result, 16 unique perspectives were obtained, analyzed, and organized into specific themes for the purpose of addressing the research questions. Participants Sixteen pilots from U.S. airlines were recruited for participation in this study.
Fourteen pilots were actively flying for a major airline and two were actively flying for a regional carrier (one captain and one first officer); overall, the sample comprised eight captains and eight first officers. Saturation of the data was met through this sample by ensuring that adequate, quality data were collected to support the study; no new information was expected to emerge that would enhance or change the findings of this study. The 16 purposely selected participants were pilots from different airlines, which allowed for different perspectives from a cross section of cultures, experiences, and situations. In the data collection and analysis process, each participant read and signed a confidentiality consent form and was assigned a code to ensure that confidentiality and privacy were maintained. A high degree of validity was designed into the research process. The first step to ensure validity consisted of inter-rater reliability (IRR) training. Interviewers discussed potential biases and then met to create mock interviews, thereby ensuring consistency of questions and follow-up techniques. Next, the researchers ensured that an appropriate sample was selected by interviewing both captains and first officers from different airlines. Finally, triangulation was also used to ensure validity. Interviews were conducted by three IRR-trained researchers in different locations. Once the interviews were complete, the researchers individually analyzed the data before meeting to compare and integrate their individual results.

Procedures

As an initial study, these data are intended to support a foundational understanding of pilots' thought processes and behaviors within resilient performance. As in any research, ancillary findings (which are not the primary target of the planned procedures) can greatly contribute to the results of this study. Further, understanding the thought processes in real-world situations was envisioned as a secondary function of this research. The researchers voice-recorded each participant's discussion throughout the interview. A written transcript was developed for each participant after de-identifying each participant's information. Each of the participants' responses offered insight into their perceptions, opinions, and personal recommendations regarding airline operations. The MAXQDA qualitative analysis software was used to organize and analyze the data. The participants were assigned a sequential identification number (e.g., Participant 1 [P1]). Using an inductive approach to data analysis, the researchers extracted key statements and phrases, organized them into broad patterns that corresponded with the research questions, and summarized what was communicated within each statement; from this extraction, the researchers identified primary themes. While the researchers had specific interview questions that were asked during each of the semi-structured interview sessions, the interviewers also permitted the free flow of dialogue. This approach provided a broader set of information, yielding richer overall data than is presented in this discussion. Through the data collection process, the researchers were able to freely engage with the participants, which yielded additional unexpected findings. While not initially planned, the additional dialogue provided a wealth of interpretive data to support the findings from the original structured research questions.
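The coding workflow described above (extract statements, group them into patterns, summarize into themes) was performed in MAXQDA, but the underlying bookkeeping is easy to illustrate in a few lines of code. The sketch below is our own illustration, not the authors' analysis; the excerpt strings and code labels are hypothetical stand-ins.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant, code, excerpt)
coded_segments = [
    ("P9", "Anticipate", "started talking to dispatch to coordinate an alternate"),
    ("P8", "Anticipate", "added extra fuel for maximum holding time"),
    ("P4", "Monitor", "update and monitor all my weather information at cruise"),
    ("P8", "Respond", "gathered the information, made a decision we were all comfortable with"),
    ("P2", "Learn", "power in learning from de-identified ASAP reports"),
]

# Group excerpts by code so emerging themes can be reviewed side by side
themes = defaultdict(list)
for participant, code, excerpt in coded_segments:
    themes[code].append((participant, excerpt))

for code, excerpts in themes.items():
    print(f"{code} ({len(excerpts)} segments)")
    for participant, excerpt in excerpts:
        print(f"  [{participant}] {excerpt}")
```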
The data reduction process was helpful in further identifying patterns and alignment to the research questions. In the review of themes, connections were drawn based on similar participant responses and the interpretation of these data. It is important to be mindful that qualitative data analysis is ongoing, fluid, and sheds light on the broader study questions.

Limitations and Delimitations

This study included only pilots employed full-time with airlines based in the United States. Additionally, participants were limited to those who were available and willing to be interviewed. Purposive sampling allowed for the representation of a variety of airlines, but may have introduced other biases.

Results and Discussion

As an initial study, this work is intended to investigate the practical application of resilience theory in a real-world setting. Holbrook et al. (2019) categorized behaviors in terms of strategies for resilient behavior, such as "Anticipate resource gaps" and "Anticipate procedure limits". The authors focused on observable behaviors rather than underlying strategies, since the research objective was to develop a taxonomy of behaviors that an expert observer could use in a safety audit setting. However, the theoretical framework and major categories were the same. Our intent was to begin a discussion among researchers and practitioners, rather than to prescribe an exact taxonomy. The model for the taxonomy is shown in Figure 1. The four categories of Anticipate, Monitor, Respond, and Learn are discussed below.

Anticipate

When asked the first research question, Were there things you were aware of at the start of your flight that you thought increased the likelihood that this event might occur during that flight?, two themes were identified in the data: Considering and Preparing, and Taking Action in Anticipation.

Considering and Preparing. These behaviors consisted of gathering information, discussing what to do, and deciding on action. For example, in response to noticing that an aircraft ahead got diverted, P9 stated, "Then we started talking to dispatch, and started trying to coordinate to go somewhere else in case we needed to do that." As P3 stated, "Once we got up with the Washington Center frequency that was starting to do the traffic delays we had a plan in place so we knew once we got into holding we'd already calculated that we could hold for about 20-25 minutes before we'd have to go to our alternate."

Taking Action in Anticipation. In some cases, pilots became aware of potential disruptions and took action in anticipation of them. As P8 stated, "We knew thunderstorms were forecast, so we added extra fuel to give us maximum holding time." During an uncertain maintenance delay, P13 explained, "It's just really important to manage your sleep. And so I slept as much as I could during that day, not knowing when we were leaving."

Monitor

When asked the second research question, Were there things that you experienced during that flight that you thought increased the likelihood that this event might occur?, two themes were identified in the data: Routine Monitoring and Increased Surveillance.

Routine Monitoring. Responses from pilots indicated there are known factors they routinely monitor on every flight, for example weather, crew rest, the aircraft interphone system, or traffic in the area. P4 stated, "Just for myself, usually anytime I'm on a more than like an hour long flight pretty much as soon as I get up to cruise I update and monitor all my weather information. Just to have like the earliest heads up if something is starting to change.
And that's when we first got up to cruise, I got an updated ATIS for Baltimore and it was already showing thunderstorms at the field." As another example, P9 stated, "I was just flying from Charlotte to San Francisco, and I'd say probably almost, a little over halfway into the flight, I actually monitor the flight attendant conversations over the interphones. I'm sure like you guys have, we have a way to monitor their intercom conversation. So, I keep that available. I started noticing they were calling about a passenger that was having some kind of medical distress. So I became aware of something that could potentially be developing with a medical issue. And I typically just wait and let it play out and then eventually they're going to contact me and let me know. But at least now I have an idea that at least something's transpiring or beginning to transpire so I can start to... It lessens the startle effect later."

Increased Surveillance. In addition, there were factors that pilots paid more attention to due to certain circumstances, for example holding and diversion of aircraft ahead on the flight path in areas of bad weather, fuel state when other aircraft were diverting for weather, and traffic in the area when conditions made VFR traffic likely. P6 stated, "You know it was August in Miami. So, you always have to be aware of the potential for the airfield getting soft in the thunderstorms. Typically, in Florida they move through fairly quickly and we do have holding fuel for that contingency. And then sometimes there's a little extra. So, we look at the fuel more carefully based on experience with the weather and actual weather." P2 explained, "You know that busy airport on a VFR day you're going to have VFR traffic in addition to traffic that are filed with the FAA or you know… you've got guys that are not filing. So just more of an awareness that this trip I needed to be on my A game, you kind of keep an eye out for this basically."

Respond

When asked the third question, How did you respond to this event?, two themes were identified in the data: Discussing and Deciding, and Taking Action in Response.

Discussing and Deciding. This included gathering information, discussing alternatives, and deciding on action to take. For example, P8 stated, "Between the two or three of us with dispatch, we continually monitored the weather and tried to make the best possible decision. Like I said, I was more for going to Dulles, which was open at the time, and they were both like, 'Yeah, but Dulles, they've got that thunderstorm there, close by, and they're predicting it's going to move in. I think BWI is clear and a million, and we're going to be pretty safe going in there'. . . It was actually a little bit closer than Dulles, although either one of them were super close. Between the three of us, we gathered the information, made a decision we were all comfortable with. I was comfortable going with Baltimore."

Taking Action in Response. This category included all actions taken in response to unexpected events or situations, for example following a checklist, initiating a divert, or complying with a collision avoidance procedure. Pilot explanations of this were typically simple and direct. P4 stated, "And then you break out the checklist. And do the normal things declare an emergency break out the checklist. Go ahead start running the checklists."

Learn

When asked the fourth question, What did you learn from this event?, every participant stated that learning from previous experience increases the safe operation of a flight.
Every pilot specifically mentioned learning from previous events. This was the most discussed aspect of resilient performance for these pilots. Pilots stated that both formal and informal learning guided their actions.

Formal Learning. P16 stated, "Training. . . So, any pilot that experiences any nonnormal situation relies on their training methodology to solve the problem, resolve it." In addition, P15 included, "Every year we train in the simulator for all kinds of different problems". Moreover, P1 included, "I think there is pattern matching that goes on. I think I find in other emergency situations I have handled in my career there's pattern matching. It seems to me that, have I seen this kind of scenario before and it goes all the way back to my primary training we did simulated engine failures unexpectedly. So pattern matching to me can be helpful. Pattern matching can also retrieve some skills, some primal skills, positive primal skills that might help you deal with it."

Informal Learning. As P10 stated, "They may be able to trigger in their mind oh you know I talked to somebody about this once and I think that's really a huge hugely important thing in aviation is that is those little things that you have in your mind of past experiences and past stories that you've heard so that when symptoms of a problem do present themselves to you, you can kind of reach back to those tidbits of information and maybe use that to analyze and figure out what's going on in your situation." Moreover, airlines have robust safety feedback programs. As P2 stated, "Well, we have a very robust ASAP, where we have access to a lot of information de-identified of incidents and events that occur. I think with a strong safety management system, through our FOQA program, our ASAP system and LOSA. I think there is power in learning and when you read these things you can be very arrogant and say well that would never happen to me. I look at it and say I could see that happening to me and... So I think learning about lessons learned from other people are very powerful." In addition, P13 added, "There's our debriefing afterwards. We talked about what had happened. Like I told my first officer, I said he did a fantastic job at coordinating with the flight attendants. Especially at the end when I was doing a lot of the flying, doing the diverting and talking to ATC, he did a lot of the behind-the-scenes stuff, which really helped. We debriefed what things could have gone better, what went well, and then how would we do things differently."

Ancillary Findings

As in any research, the data collected often yield information, ideas, or additional themes that were not anticipated. When this occurs, a rich and detailed set of findings can address the gap in the literature and further support the research questions. In this case, there were two major unintended findings that were common throughout the interviews. While the latter theme may not be directly aligned to the original research questions, it is related to the perceptions and opinions of the participants.

Enablers of Resilient Performance

Training. Training was a topic that was discussed in 12 of the 16 interview sessions. Each of these participants complimented the quality of training received from their respective airline. As P5 stated, "Yeah I lost an engine on takeoff a couple of years ago. And it was just sort of fall back on your training. . . Because you're trained for it all the time you know to lose an engine on takeoff."
P12 stated, "You see you start falling back on what you've been taught to do." As P7 added, "yeah, simulator training. We had seen it before in the simulator. . . I followed the emergency procedures we were trained to do."

Experience. In addition to training, experience was mentioned by 14 of the 16 participants as a huge factor in how they responded to an event. As P1 stated, "On my last trip to Dulles, I mentioned it to the co-pilot what had happened, and if we were offered that again on an afternoon flight, that we would probably end up doing the same thing, or at least considering it." P11 included, "Experience because many airports that have construction anywhere near the end of the runway, have frequently had their instrument or glide slope and localizer antennas interfered with construction or vehicles driving right in front of them… Personal experience, since I was a private pilot, you just land the airplane." P4 handled an emergency by recognizing that something just wasn't right, stating, "I guess I mean just from flying this aircraft for the past several years knowing what the speed schedule would be upon reaching the thrust reduction altitude it'll start commanding a nose down pitch attitude and the speed bug would switch up to two hundred and fifty. And just witnessing that not happening is something very different occurring. That was just outside of the normal pattern that you're accustomed to seeing." As P9 stated, "As experience dictates, you try and avoid surprises, anything with startle effects, so I always anticipate or try to become cognizant of any potential threats to a flight. And like I said, the longer flights I'm aware that options could be limited to divert. A lot of international experience like yourself. So you realize there are areas where you have really limited options. You try and think ahead, "what would I do in this case?" Because you don't want to be caught behind the power curve and have a surprise and have to play catch up."

Despite the importance of informal learning, pilots did not generally share these lessons through any established process. Rather, pilots reported sharing their lessons with others in one-on-one conversations, but generally they regarded what they learned as not significant enough to share through the more formal mechanisms available at their airlines. As Holbrook et al. (2019) point out, "no methods exist to systematically report or capture this information. This is a missed opportunity for developing training, data systems, and procedures whereby operators could systematically benefit from others' lived experiences, not just their own" (p. 17). Further, several pilots noted that their airline experience differed from their previous military experience in this regard, with more opportunities to share informal lessons in their military flying background.

Crew Climate

Most participants discussed crew climate and crew coordination as major factors in their decision making and, hence, in their resilience. For example, as P14 stated, "But then you could almost call it if anything like a sort of like team building type thing. Because at that point we had kind of like faced, nothing major, but we faced an abnormal situation and worked through the issue and come to a conclusion there." As P7 stated, "I learned that the people that I worked with during the emergency were awesome. The controllers were very helpful in getting us back around.
Everything went very, very easily just because of the training, and the working together from the airline perspective." P8 added, "The most important thing is having a crew that can work together. That can say, Hey, we're gonna check all the other stuff when we come to work, and just work together the best we can to handle any kind of a situation." The contribution of the crew concept to resilience is an especially important topic to explore in future research, as the idea of single-pilot operations gains more traction. Any changes to accommodate single-pilot operations must also be able to incorporate the resilience that is an emergent property of team performance in the cockpit.

Categories Are Not Mutually Exclusive

It became apparent early in the interviews that a response could often be placed in more than one category. For example, P9 stated, "As experience dictates, you try and avoid surprises, anything with startle effects, so I always anticipate or try to become cognizant of any potential threats to a flight. And like I said, the longer flights I'm aware that options could be limited to divert. A lot of international experience like yourself. So, you realize there are areas where you have really limited options. You try and think ahead, "what would I do in this case?" Because you don't want to be caught behind the power curve and have a surprise and have to play catch up." This example could easily fit into the categories of Anticipate, Monitor, or Learn.

Conclusion

Resilient performance, as a theory, appears to have practical application in aviation. Purposeful sampling of 16 airline pilots shows that resilient performance does occur on flights. The categories of Anticipate, Monitor, Respond, and Learn were exhaustive, but not mutually exclusive. Thus, the tenets of resilience theory are initially validated, but operationalizing a taxonomy will require more work.

Recommendations for the Instructional Environment

As noted previously, the highest response was in the category of Learning. Although each category is important in the decision-making process, opportunities to create better learning environments will continue to enhance safety. This presents great opportunities to enhance student learning through the incorporation of resilience theory. As both formal and informal learning were highlighted by the participants, three areas stand out for creating better learning experiences for students:

Flight Line Operations: as part of the brief/debrief time, instructors should build in scenarios where students need to think through a situation. Situations could include abnormal engine indications, unexpected weather, equipment malfunction, etc. This gives the student the opportunity to chair fly (practice on the ground) the thought process and resources available.

Curriculum Developers: a similar process can be used in any classroom setting (air traffic control, maintenance, UAV operations, etc.). Curriculum developers/instructors can build "what would you do if" scenarios into lectures. This helps reinforce the law of primacy (learn it correctly the first time) for situations that may be encountered later in more stressful environments.

Capturing Positive Performance/Resilience: this gives opportunities to reinforce correct thought processes. Oftentimes, people critique negative/incorrect application, yet fail to reinforce the overwhelming part of the process that was done correctly. This is a great opportunity to correct faulty thoughts, but also to praise and reinforce correct thought processes.
Future Research

Future research is suggested with a larger sample size, across numerous airlines, worldwide. Also, future research should include less-experienced pilots, to see if the theory holds at different levels of experience. Further, research should include different operational domains, such as flight instruction. Holbrook et al. (2019) discussed the need to be able to correlate safety data, such as FOQA data, with crew behaviors. Future research should attempt to connect disparate data sources to develop a more robust and complete picture of resilient behaviors. Finally, carefully scripted follow-up questions should be introduced to explore crew dynamics within resilient performance.
Using Evidence in Practice

Using Assessment Tools to Develop a Workshop for Library Staff: Establishing a Culture of Assessment

Problem

The University of Illinois at Chicago Library is committed to enhancing its outcome-oriented initiatives. To that end, the University Library's Steering Committee developed a strategic plan using a logic model, which is a visualization of a program or project showing the relationships between investments, activities, and intended results (W.K. Kellogg Foundation, 2006). The Assessment Coordinator, who is a member of the Steering Committee, has the primary responsibilities of planning and implementing library assessment initiatives, leading the Assessment Coordinator Advisory Committee, conducting surveys, and consulting with library units to determine their assessment needs. Another responsibility of the Assessment Coordinator is creating professional development opportunities for library staff related to assessment. While the Assessment Coordinator plays a role in providing resources and expertise in assessment, it is not practical for one person to look into every project. At the University Library, some librarians who possess strong assessment skills are involved in committees and working groups related to assessment in the library. Assessment responsibilities must be shared by team members across departments rather than assigned to one specific group. Therefore, this paper aims to demonstrate how assessment tools were used to create professional development opportunities that contribute to an assessment culture.

Evidence

A fundamental task of librarians with assessment responsibilities is to "help librarians demonstrate their library's value to the institution" (ALA, 2017). Another key responsibility is to provide mentoring, training, and coaching in order to "build a culture of assessment and organizational capacity for assessment" (ALA, 2017).

To foster an assessment culture, the Assessment Coordinator at the University Library decided to develop and offer a workshop designed for library staff to illustrate the use of a logic model, followed by individual consultations. The logic model was chosen as the workshop topic because logic models have been used as a framework in grant proposals (e.g., IMLS), strategic planning (Dubicki, 2011), library assessment (Stoddart & Lajoie, 2014), program evaluation (Markless & Streatfield, 2017), and program intervention (Kletter, Melendez-Torres, Lilford & Taylor, 2018). Additionally, logic models are an important training topic (e.g., CARLI Counts, 2019).

During September–October 2018, the Assessment Coordinator had several informal interviews with department heads and senior staff to determine the best way to develop the logic model workshop. Based on the feedback from the department heads, the Assessment Coordinator created two pilot workshops during regularly scheduled monthly department meetings prior to launching the logic model workshop to all staff. Two department heads at the University Library agreed to offer 30 minutes of their meeting time for the pilot workshop.
Three days before the pilot workshop, the Assessment Coordinator distributed to participants a pre-workshop survey and pre-test asking them about their previous experience and knowledge of logic models. According to the pre-workshop survey results, more than 65% of participants were not aware of logic models, and more than 80% of participants had no experience using a logic model. Therefore, the workshop focused on the basic concepts of the logic model, as well as emphasizing the benefits of adapting logic models to participants' projects. To develop the workshop content, the Assessment Coordinator reviewed logic model training guides developed by Abdi and Mensah (2016), Taylor-Powell and Henert (2008), Community Tool Box (n.d.), and the W.K. Kellogg Foundation (2006).

During the workshop, the Assessment Coordinator shared the results from the pre-survey, played a video from the dean about the importance of learning the logic model and her expectations, and led activities to enhance the participants' engagement. After the pilot workshop, library staff and faculty received a post-workshop survey and post-test. The pre- and post-assessment tools aimed to answer five major questions:

1. What were the participants' previous knowledge and experience in using logic models?
2. What is the overall evaluation of the pilot logic model workshop?
3. What were the most interesting aspects of the workshops?
4. Did the participants' knowledge and awareness of logic models increase after the workshops?
5. Which areas of the workshops could be improved?

A total of 26 library staff and faculty attended two pilot workshops (a 65% attendance rate). The first pilot workshop was conducted with the Resource Acquisition and Management Department in November 2018 (n = 16), and the second pilot workshop was conducted with the Research Services and Resources Department in December 2018 (n = 10). One half of the participants completed the workshop evaluation survey. The key findings from the workshop evaluation survey (Figure 1) demonstrate that participants generally found the workshop to be helpful. All of the participants (n = 13) replied "strongly agree" or "agree" with respect to the content's usefulness, the workshop's level and format, the understanding of the material, and their willingness to recommend the workshop to others. Only the workshop's length was an issue. More than 15% of the respondents (n = 2) "disagreed" that the length of the workshop was appropriate, and 20% of the participants (n = 3) provided further information in the open-ended questionnaire saying that the workshop was too short.

Figure 1. The Evaluation Results of the Pilot Workshops (N = 13)

Participants were asked to name the most interesting aspect of the workshop. Participants indicated that the most interesting aspect was the game activity in which groups worked together to identify the basic concepts of the logic model and put them in the correct order (n = 5). One respondent commented that "the video from the dean was great to show she supports these efforts." The results from the tests indicate that the mean score of participants' knowledge of logic models in the post-test (M = 6.14) was higher than in the pre-test (M = 4.25), supporting the effectiveness of the workshop.
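The pre-/post-test comparison above reports only the two means (M = 4.25 vs. M = 6.14). For readers who want to reproduce this kind of analysis on their own workshop data, here is a minimal sketch in Python; the score arrays are hypothetical stand-ins, since the raw test scores are not given in the article, and a paired t-test is a reasonable choice only when the same respondents complete both tests.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-/post-test scores for the same participants
# (stand-ins; the article reports only the means M_pre = 4.25, M_post = 6.14).
pre = np.array([4, 5, 3, 4, 6, 4, 5, 3])
post = np.array([6, 7, 5, 6, 7, 6, 6, 6])

print(f"pre-test mean:  {pre.mean():.2f}")
print(f"post-test mean: {post.mean():.2f}")

# Paired t-test: appropriate when each participant took both tests.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```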
Implementation

Based on the findings from the two pilot workshops, the Assessment Coordinator has been providing the logic model workshop on an ongoing basis to groups or individuals involved in a project that requires measuring outcomes. To date, the workshop has been conducted with three departments and one committee. Because participants reported that the length of the pilot workshop was not sufficient, the workshop was expanded from 30 to 60 minutes. Additionally, depending on the group's previous experience and knowledge as assessed through the survey and pre-test, the Assessment Coordinator tailored the workshop activities and the level of the content.

During the logic model workshops, many participants understood the concepts; however, they found it difficult to apply the logic model to their own work or projects without support. As a result, follow-up support was offered through one-on-one consultations. After presenting the logic model workshops, several participants requested assistance with either developing logic models for their programs and projects or reviewing their previous logic models to check whether the measurable outcomes were appropriate and to verify how the intended outcomes are met.

Outcome

Within half a year, this initiative had positive results. Immediately after the logic model workshop for the library's human resources department in February 2019, the Director of Human Resources requested a follow-up consultation and expressed their desire to develop an onboarding program using a logic model. Since then, the Assessment Coordinator has been collaborating with the Human Resources department as they develop an onboarding program for new employees. As a result of the Assessment Coordinator's six-month consultation with this department, the onboarding program using the logic model was completed and presented to the library steering committee.

Another example is the Undergraduate Engagement Program of the Research Services and Resources department, which was developed by the outreach engagement faculty using the logic model prior to the logic model workshop. During the consultations, the Assessment Coordinator and the outreach engagement faculty revised the outcome statements and discussed possible metrics that would enable faculty to measure the desired outcomes. The updated logic model was also shared with the library steering committee and in a library faculty meeting. Some faculty who attended the presentations commented to the Assessment Coordinator that it was helpful for them to better understand the program and the program goals, and they wanted to develop their projects using logic models.

The last example is a faculty member who participated in the logic model workshop and wanted to write a grant proposal using the logic model. After the workshop, the Assessment Coordinator and the faculty member reviewed the draft grant proposal and focused on how to articulate measurable outcomes using the logic model. Afterwards, the faculty member submitted her grant proposal and received a $20,000 grant.
Reflection

Developing valuable library staff training requires innovative strategies and a significant investment in time and resources. Offering logic model workshops to the University of Illinois at Chicago Library staff was one example of a successful training effort to build a culture of assessment. It was a useful place to start because it shows participants the value of considering assessment from the very beginning of designing a project or initiative. Additional workshops are needed to provide librarians with skills for actually measuring whether the outcomes set in the logic model are being achieved. In the end, each member of the library staff who participates in the workshop will have the ability to assess and evaluate their own projects and programs which, in turn, establishes and reinforces the culture of assessment.
Facile Affinity Maturation of Antibody Variable Domains Using Natural Diversity Mutagenesis

The identification of mutations that enhance antibody affinity while maintaining high antibody specificity and stability is a time-consuming and laborious process. Here, we report an efficient methodology for systematically and rapidly enhancing the affinity of antibody variable domains while maximizing specificity and stability using novel synthetic antibody libraries. Our approach first uses computational and experimental alanine scanning mutagenesis to identify sites in the complementarity-determining regions (CDRs) that are permissive to mutagenesis while maintaining antigen binding. Next, we mutagenize the most permissive CDR positions using degenerate codons to encode wild-type residues and a small number of the most frequently occurring residues at each CDR position based on natural antibody diversity. This mutagenesis approach results in antibody libraries with variants that have a wide range of numbers of CDR mutations, including antibody domains with single mutations and others with tens of mutations. Finally, we sort the modest size libraries (~10 million variants) displayed on the surface of yeast to identify CDR mutations with the greatest increases in affinity. Importantly, we find that single-domain (VHH) antibodies specific for the α-synuclein protein (whose aggregation is associated with Parkinson's disease) with the greatest gains in affinity (>5-fold) have several (four to six) CDR mutations. This finding highlights the importance of sampling combinations of CDR mutations during the first step of affinity maturation to maximize the efficiency of the process. Interestingly, we find that some natural diversity mutations simultaneously enhance all three key antibody properties (affinity, specificity, and stability) while other mutations enhance some of these properties (e.g., increased specificity) and display trade-offs in others (e.g., reduced affinity and/or stability). Computational modeling reveals that improvements in affinity are generally not due to direct interactions involving CDR mutations but rather due to indirect effects that enhance existing interactions and/or promote new interactions between the antigen and wild-type CDR residues. We expect that natural diversity mutagenesis will be useful for efficient affinity maturation of a wide range of antibody fragments and full-length antibodies.
The widespread interest in using antibodies in diagnostic and therapeutic applications has led to considerable efforts in developing methods for optimizing their properties (1–6). Methods for improving antibody affinity are particularly important because lead antibodies identified using in vivo (immunization) and in vitro (e.g., phage display) methods typically do not have high enough affinity for therapeutic applications. Moreover, improvements in antibody affinity are generally expected to enhance the performance of diagnostic antibodies due to improved specificity at reduced antibody concentrations. Methods such as phage, yeast surface, and ribosome display are commonly used for in vitro affinity maturation because of their many attractive properties (7–13). These properties include the ability to precisely control antigen presentation, conformation, and concentration as well as the ability to perform negative selections against various types of non-antigens to eliminate non-specific variants (14–17). These display methods have been used to achieve large enhancements in affinity for a wide variety of antibody fragments and full-length antibodies (9, 18–23). Nevertheless, there are several outstanding challenges related to in vitro affinity maturation that need to be addressed. First, while it is possible to use saturation mutagenesis to evaluate every possible single mutation in antibody complementarity-determining regions (CDRs), single mutations typically do not result in large gains in affinity (1, 3, 24). Therefore, it is often necessary to generate sub-libraries to identify combinations of single mutations that result in large increases in affinity, which is a slow and laborious process. Second, it is not possible to test all combinations of single and multiple mutations in the CDRs of antibodies in a single library due to intractably large library sizes. For example, a library size of >10^39 would be required to sample all possible combinations of single and multiple mutations at ~30 residues in the CDRs of typical variable domains.
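The >10^39 figure follows directly from the combinatorics: allowing all 20 amino acids at each of ~30 CDR positions gives 20^30 ≈ 1.1 × 10^39 sequences. A minimal sketch of this calculation (our own illustration, not code from the paper; the display capacities are the rough figures cited below):

```python
POSITIONS = 30        # approximate number of CDR residues considered
AA_CHOICES = 20       # all natural amino acids allowed at each position

full_library = AA_CHOICES ** POSITIONS
print(f"Exhaustive CDR library: {full_library:.2e} variants")   # ~1.07e+39

# Compare with what display methods can actually sample:
phage_capacity = 1e10   # upper end for phage display transformations
yeast_capacity = 1e8    # upper end for conventional yeast transformations
print(f"Fraction samplable by phage: {phage_capacity / full_library:.1e}")
print(f"Fraction samplable by yeast: {yeast_capacity / full_library:.1e}")
```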
This means that only an extremely small subset of the possible single and multiple mutations can be tested using display methods, which is largely dictated by transformation efficiencies [~10^9–10^10 for phage (25, 26) and ~10^7–10^8 for yeast (9, 27) using conventional transformation methods]. Therefore, it is important to develop smart library design methods that sample a relatively small number of residues at each CDR position that are most likely to generate antibodies with significant gains in affinity (28–41). A third common challenge related to antibody affinity maturation is the identification of affinity-enhancing mutations that lead to reductions in antibody specificity (42–44). Highly interactive residues, such as arginine and aromatic residues, can be readily enriched in the CDRs during affinity maturation, which is concerning because they have increased risk for promoting non-specific interactions (43–47). While negative selections are useful for removing some non-specific variants, it is critical to use libraries with the highest possible fraction of specific variants to maximize the likelihood of isolating antibodies with not only increased affinity but also high specificity. A related problem is that affinity-enhancing CDR mutations can lead to reductions in stability (48–51). Antibody affinity/stability trade-offs appear to be due to structural changes in the CDRs and frameworks that are necessary to increase affinity, and additional compensatory mutations are needed in some cases to maintain thermodynamic stability (48, 49, 51). Therefore, it is important to generate antibody libraries with the highest possible fraction of stable antibodies to minimize the frequency of isolating destabilized antibodies that require additional mutagenesis to restore stability.

To evaluate potential solutions to these challenges, we have sought to identify mutations that increase the affinity of a camelid single-domain antibody specific for the C-terminus of α-synuclein (52) (Figure 1). This variable (VHH) domain, originally referred to as NbSyn2 and herein referred to as N2, was previously isolated from an immune library. We selected this antibody domain for further optimization because its crystal structure is available in complex with antigen at high resolution (Figure 1), it is relatively simple to display on the surface of yeast for in vitro selections relative to more complex multidomain (scFv) and/or multichain (Fab or IgG) antibodies, it has intermediate affinity (KD of 58 ± 9 nM) that can be further increased, and it has relatively high stability (apparent melting temperature of 68 ± 0.3°C). We posit that efficient affinity maturation of antibody variable domains such as N2 can be accomplished in three steps: (i) identification of the most permissive sites in the CDRs that can be mutated without large (negative) impacts on affinity using alanine scanning mutagenesis; (ii) mutagenesis of the permissive sites using degenerate codons that encode the wild-type residue and the most common natural diversity mutations at each position; and (iii) selection of the resulting libraries for variants with the greatest gains in affinity.

Figure 2 | Identification of VHH complementarity-determining region (CDR) residues involved in antigen binding via alanine scanning mutagenesis. The relative antigen binding of the VHH variants (400 nM) with single alanine substitution mutations in (A) CDR2 and (B) CDR3 was evaluated using fluorescence polarization (2 nM TAMRA-labeled α-synuclein peptide).
Raw polarization signals were background subtracted (background signals were obtained using samples with only TAMRA-labeled α-synuclein peptide), and normalized signals are reported (signal for mutant divided by that for wild type). Error bars represent the SD for three independent experiments. The VHH sequence is defined using Kabat numbering. Alanine mutants that have modest impacts on antigen binding (mutant binding is at least 50% of wild-type binding) are highlighted in gray fill, while those mutants with larger negative impacts on antigen binding are indicated in white fill.

Toward our goal of developing systematic and robust affinity maturation methods, we first sought to identify permissive sites in the CDRs of N2, namely sites whose mutation only weakly impacts antibody affinity, using both computational and experimental methods. Two of the CDRs (CDR2 and CDR3) are involved in mediating antigen binding (Figure 1). Our computational alanine scanning analysis of these CDRs identified two residues in CDR2 (N52 and K56) and two residues in CDR3 (Y100 and W100e) that are sensitive to mutation (Table S1 in Supplementary Material). We tested these observations using experimental alanine scanning mutagenesis at 18 sites in CDR2 and CDR3. Three sites in these CDRs (R50, P98, and C100a) were excluded from this analysis because they were either shown previously to be involved in mediating antigen binding (52) or suspected to be important for antibody structure and stability. The alanine mutants were expressed in bacteria and purified using metal-affinity chromatography (purification yields of 0.7–2.6 mg/L). SDS-PAGE analysis revealed high purities (Figure S1 in Supplementary Material). The relative binding of each mutant was evaluated using fluorescence polarization at three VHH concentrations (44, 133, and 400 nM; Figure 2; Figure S2 in Supplementary Material). Consistent trends were observed at each VHH concentration. Eleven of the 18 mutants retained >50% of the wild-type binding activity, including three in CDR2 (L52b, G53, and V55) and eight in CDR3 (F96, S97, G99, G100b, G100c, S100d, S100f, and N100g). The other seven mutants that displayed greater reductions in binding included five CDR2 mutants (I51, N52, G52a, G54, and K56) and two CDR3 mutants (Y100 and W100e), which were not subjected to further mutagenesis. Four of the disruptive mutations (N52 and K56 in CDR2 and Y100 and W100e in CDR3) were identified in our computational alanine scanning mutagenesis (Table S1 in Supplementary Material). These and other previous results (39, 53, 54) highlight the value of alanine scanning mutagenesis for identifying permissive CDR sites that can be mutated during antibody affinity maturation.

Design of Antibody Libraries Using Natural Diversity Mutagenesis

We next sought to design a single antibody library with mutations in N2 at permissive sites in CDR2 and CDR3. We aimed to accomplish multiple objectives in our library design. First, we limited the library size to ~10^7 variants to enable 10-fold oversampling of the library using yeast surface display, given that our typical yeast transformation efficiencies are ~10^8 transformants. Second, we aimed to generate a single library with all possible combinations of wild-type residues as well as single and multiple mutations at the 11 permissive sites in CDR2 and CDR3 as well as at three additional sites not tested during alanine mutagenesis (A49, A94, and K95).
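The 10-fold oversampling target can be made concrete with a standard Poisson sampling argument (our own back-of-the-envelope illustration, not a calculation reported in the paper): if a library of ~9.4 × 10^6 variants is sampled by ~10^8 independent transformants, each variant is drawn ~10 times on average, and the chance that any given variant is missed entirely is roughly e^(−10) ≈ 5 × 10^−5.

```python
import math

library_size = 9.4e6      # designed unique variants (from the paper)
transformants = 1e8       # typical yeast transformation efficiency (from the paper)

coverage = transformants / library_size            # mean copies per variant (~10.6x)
p_missed = math.exp(-coverage)                     # Poisson probability a variant is absent
expected_missed = library_size * p_missed          # expected number of unseen variants

print(f"Mean coverage:     {coverage:.1f}x")
print(f"P(variant missed): {p_missed:.2e}")
print(f"Expected missed:   {expected_missed:.0f} of {library_size:.2e} variants")
```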
Generating a single library with all of these combinations within ~10^7 variants limits the number of possible mutations at each CDR site to typically one to two mutations in addition to the wild-type residue. Third, we sought to sample mutations that most closely correspond to those observed in the CDRs of natural antibodies in a site-specific manner. To accomplish this, we used the abYsis database to identify the most common amino acids in camelid VHH and human VH domains at each site in CDR2 and CDR3 (55). We used an average site-specific amino acid frequency for camelid and human domains at each CDR site given that there are many more sequences for human domains than for camelid domains. Fourth, we aimed to use inexpensive primer synthesis methods to generate the libraries encoded by standard degenerate codons. Therefore, we sought to identify degenerate codons at each CDR site that encoded the wild-type residue and ~1–5 additional residues that maximize the coverage (sum of individual site-specific amino acid frequencies) of the combined camelid and human natural diversity at each site (Figure 3); a small code sketch of this coverage calculation appears below.

Figure 3 | VHH library design for N2 affinity maturation using natural diversity mutagenesis. A single VHH library was designed that involved mutating four sites in CDR2 (top) and 10 sites in CDR3 (bottom). The CDR sites selected for mutagenesis were identified primarily using alanine scanning mutagenesis (11 CDR sites). Each mutated CDR site involved sampling the wild-type residue and one to five of the most common natural diversity mutations. Degenerate codons were selected at each CDR site that maximized the natural diversity coverage and minimized the total number of mutations. It was not possible to sample the wild-type residue and the most common natural diversity mutations at every CDR site due to the limitations of degenerate codons. The resulting library (9.4 × 10^6 variants) theoretically encodes all possible combinations of single and multiple CDR mutations (up to 14 mutations per VHH). The reported CDR site-specific natural diversity statistics are averaged values for human (VH) and camelid (VHH) variable domains, as reported in the abYsis database (55). Boxed amino acids correspond to the selected natural diversity mutations.

Based on these four key objectives, we designed the library shown in Figure 3 and generated it using the process outlined in Figure S3 in Supplementary Material. The library contains 9.4 × 10^6 unique variants and includes wild-type residues at each position as well as all possible combinations of single and multiple mutations at 14 sites in CDR2 and CDR3. We sequenced 22 members of the initial library, and the results are summarized in Figure 4 and Figure S4 in Supplementary Material. All variants were found to be unique and contained mutations according to the proposed library design.

Sequence Analysis of VHH Libraries after Sorting for Enhanced Antigen Binding

The library of antibody variable domains was displayed on the surface of S. cerevisiae and screened for variants with increased affinity for the α-synuclein peptide. The sorting process involved five rounds of selection via magnetic-activated cell sorting (MACS) with progressively reduced concentrations of α-synuclein peptide (starting at 50 nM peptide and ending at 5 nM) and one additional round of selection via fluorescence-activated cell sorting (FACS) (20 nM peptide). The sorting process was continued until the antigen binding of the library was increased at least fivefold relative to wild type, as judged by flow cytometry.
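As promised above, the degenerate codon selection step can be expressed compactly in code. The sketch below is our own illustration (the residue frequencies shown are hypothetical placeholders, not values from the abYsis database): it expands an IUPAC degenerate codon into its encoded amino acids and scores its natural diversity coverage as the sum of site-specific amino acid frequencies, the criterion described in the library design.

```python
from itertools import product

# IUPAC degenerate nucleotide codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

# Standard genetic code, codons enumerated in TCAG order
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

def encoded_amino_acids(degenerate_codon):
    """Expand a degenerate codon (e.g., 'RGC') into the set of encoded residues."""
    return {CODON_TABLE["".join(c)]
            for c in product(*(IUPAC[b] for b in degenerate_codon))}

def coverage(degenerate_codon, site_freqs):
    """Sum of site-specific frequencies of the residues the codon encodes."""
    return sum(site_freqs.get(aa, 0.0) for aa in encoded_amino_acids(degenerate_codon))

# Hypothetical site frequencies (placeholders, not abYsis values):
site_freqs = {"S": 0.30, "G": 0.20, "N": 0.15, "R": 0.10, "T": 0.05}
for codon in ("AGC", "RGC", "ASC"):
    aas = encoded_amino_acids(codon)
    print(codon, sorted(aas), f"coverage = {coverage(codon, site_freqs):.2f}")
```

In practice one would enumerate candidate degenerate codons that include the wild-type residue and keep the one with the highest coverage and the fewest extra residues, mirroring the trade-off noted in the Figure 3 caption.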
Selections were performed in a buffer (PBS) that contained both BSA (1 mg/mL) and milk (1% w/v). We have found previously that antibody selections in complex environments (e.g., buffers supplemented with milk) lead to the identification of antibodies with improved specificity (56). The enriched VHH library was sequenced after sorts 5 and 6, and 17 unique variants were identified and further analyzed (based on sequencing 23 clones) with 1–6 mutations in CDR2 and CDR3. Sequence logos in Figure 4 summarize the general enrichment of amino acids in the CDRs, while the amino acid enrichment ratios are given in Figure S5 in Supplementary Material (a code sketch of this calculation appears below) and the CDR sequences are given in Figure S6 in Supplementary Material. Most of the sites in CDR2 and CDR3 (11 out of 14) displayed either intermediate or strong preference for the wild-type residue (Figure 4). However, three sites (53 in CDR2, 96 and 100d in CDR3) either displayed similar preference for mutations as for the wild-type residue (Arg, Gly, Ser, and Asn at position 53) or strong preference for a specific mutated residue (Ser at position 96 and Thr at position 100d). It is also notable that the four positions that were varied in CDR2 did not display strong preference for any single amino acid, while almost every residue in CDR3 (9 out of 10) displayed strong preference for a single residue. This result is unexpected based on alanine scanning mutagenesis, as the identified sites in CDR3 appeared to be as permissive (or even more permissive) to mutagenesis than those identified in CDR2.

Identification of Affinity-Matured Variable Domains with High Stability and Specificity

To evaluate the effectiveness of the affinity maturation process, we next expressed and purified the unique VHH variants that were identified in the enriched library. The variable domains expressed at levels (purification yields of 0.1–2.0 mg/L) that were generally similar to wild type (1.0 mg/L), and also displayed purities similar to wild type (Figure S7 in Supplementary Material). We first used fluorescence polarization to evaluate the affinities of the variable domains for the α-synuclein peptide (Figure 5A). The equilibrium dissociation constant for the wild-type N2 variable domain (KD of 57.6 ± 9.0 nM) was approximately threefold lower than the previously reported value (KD of 190 ± 30 nM) that was measured by isothermal titration calorimetry (52). We chose to characterize two VHH domains in more detail (N2.12 and N2.17). Both variable domains displayed improved affinity (KD of 7.6 ± 0.4 nM for N2.12 and 13.2 ± 4.8 nM for N2.17 relative to 57.6 ± 9.0 nM for wild type; Figure 5A). Interestingly, the improved affinity of the N2.12 variant came at the cost of reduced stability (apparent Tm of 59.7 ± 0.3°C relative to 67.8 ± 0.3°C for wild type; Figure 5B). By contrast, the N2.17 variant displayed similar stability as wild type (66.9 ± 0.1°C for N2.17 relative to 67.8 ± 0.3°C for wild type; Figure 5B). This finding demonstrates that our affinity maturation method can be used to identify antibody variable domains such as N2.17 with increased affinity without significant reduction in stability despite the common observation of affinity/stability trade-offs (such as those observed for N2.12) during affinity maturation (51, 58). We also evaluated the specificity of the N2.12 and N2.17 VHH domains to evaluate whether gains in affinity were offset by reductions in specificity (Figure 6; Figure S8 in Supplementary Material).
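The amino acid enrichment ratios referenced above (Figure S5) are straightforward to compute from sequencing data. A minimal sketch follows (our own illustration with made-up toy sequences, not the Figure S6 data): for each CDR position, the frequency of each residue after sorting is divided by its frequency in the naive library, with a pseudo-frequency to avoid division by zero.

```python
from collections import Counter

def position_frequencies(seqs):
    """Per-position amino acid frequencies for aligned CDR sequences."""
    n = len(seqs)
    return [{aa: count / n for aa, count in Counter(col).items()}
            for col in zip(*seqs)]

def enrichment_ratios(naive_seqs, sorted_seqs, pseudo=1e-3):
    """Sorted/naive frequency ratio per position."""
    naive = position_frequencies(naive_seqs)
    enriched = position_frequencies(sorted_seqs)
    ratios = []
    for pos, freqs in enumerate(enriched):
        ratios.append({aa: f / naive[pos].get(aa, pseudo) for aa, f in freqs.items()})
    return ratios

# Toy aligned CDR stretches (hypothetical, not the actual library sequences)
naive = ["FSGY", "SSGY", "FRGY", "FNGY"]
after = ["SSGY", "SRGY", "SSGY", "SNGY"]
for pos, r in enumerate(enrichment_ratios(naive, after)):
    print(f"position {pos}: " + ", ".join(f"{aa}={v:.1f}" for aa, v in sorted(r.items())))
```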
Figure 6 | Analysis of non-specific binding for wild-type and affinity-matured VHH domains. Non-specific binding of VHH variants was evaluated using well plates coated with milk proteins (left) and a panel of six non-antigen proteins (right). The non-specific binding analysis was performed at an antibody concentration of 1,000 nM. The reported non-specific binding values are the signals for antibody binding to well plates coated with milk proteins or other non-antigen proteins divided by the background signal without primary antibody (VHH). The reported binding values (right) are the averages for six non-antigen proteins (ovalbumin, BSA, KLH, ribonuclease A, avidin, and lysozyme). The values are averages of three independent experiments, and the error bars are SD. A two-tailed Student's t-test was used to determine statistical significance [p-values < 0.01 (**)].

Figure 5 | Evaluation of the affinity and stability of select VHH mutants that were enriched after library sorting for improved antigen binding. (A) Fluorescence polarization analysis of VHH binding to labeled antigen (2 nM TAMRA-labeled α-synuclein peptide). The analysis was performed in a PBS buffer supplemented with BSA (0.001%) and Tween 20 (0.001%). Three independent experiments were performed, and representative binding curves are shown for wild type (black), N2.12 (red), and N2.17 (green). Each point shown is the average of two repeats and the error bars are SD. Data were fit with a binding model that accounted for the fact that the VHH antibodies were not in excess of the antigen at some of the VHH concentrations (57). (B) Extrinsic fluorescence analysis of apparent VHH unfolding as a function of temperature. The fluorescence data were obtained using an extrinsic dye (Protein Thermal Shift dye, Life Technologies). Three independent experiments were performed, and representative melting curves are shown for wild type (black), N2.12 (red), and N2.17 (green). The data were background subtracted using background signals obtained without antibody. Next, the fluorescence data were subtracted by the relatively low signal at 50°C, and divided by the maximum fluorescence signal (after the maximum signal was subtracted by the signal at 50°C). Finally, the pre- and post-transition regions of the normalized fluorescence data were flattened using linear fits.

A simple test of non-specific interactions is to evaluate the propensities of antibodies to interact with well plates coated with different types of non-antigen proteins (milk proteins and a panel of six non-antigen proteins in Figure 6 and Figure S8 in Supplementary Material) at relatively high antibody concentrations (~1 μM). Interestingly, the N2.17 variant displays significantly lower non-specific interactions than wild type (p-values of 0.003 for milk proteins and 0.009 for six non-antigen proteins), while the N2.12 variant displays similar non-specific binding as wild type (p-values of 0.129 for milk proteins and 0.342 for non-antigen proteins). These results demonstrate that the affinity-matured VHH domains display similar or improved specificity relative to wild type. We next analyzed the affinity and stability of the other 15 unique VHH variants that were isolated during the sorting process (Figure 7; Figures S9 and S10 in Supplementary Material). All but one of the variable domains (N2.5) displayed a statistically significant increase in affinity relative to wild type (p-values <0.01; Figure 7A).
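The Figure 5 caption notes that binding curves were fit with a model accounting for antigen depletion (the VHH was not always in excess over the 2 nM labeled peptide). A standard way to do this is the quadratic, ligand-depletion binding isotherm; a minimal sketch is below (our own illustration with simulated data, not the authors' fitting code; the anisotropy endpoints and noise level are assumed values).

```python
import numpy as np
from scipy.optimize import curve_fit

L_TOT = 2.0  # nM, TAMRA-labeled alpha-synuclein peptide (from the text)

def depletion_isotherm(p_tot, kd, f_free, f_bound):
    """Polarization vs. total VHH, without assuming VHH is in excess.
    [PL] is the root of [PL]^2 - (P + L + Kd)[PL] + P*L = 0."""
    term = p_tot + L_TOT + kd
    complex_ = (term - np.sqrt(term**2 - 4.0 * p_tot * L_TOT)) / 2.0
    return f_free + (f_bound - f_free) * complex_ / L_TOT

# Simulated titration (true Kd = 57.6 nM; endpoints are assumptions)
vhh = np.logspace(-1, 3, 15)                       # nM
rng = np.random.default_rng(0)
signal = depletion_isotherm(vhh, 57.6, 40.0, 180.0) + rng.normal(0, 2, vhh.size)

popt, _ = curve_fit(depletion_isotherm, vhh, signal, p0=[30.0, 50.0, 150.0])
print(f"fitted Kd = {popt[0]:.1f} nM")
```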
We next analyzed the affinity and stability of the other 15 unique VHH variants that were isolated during the sorting process (Figure 7; Figures S9 and S10 in Supplementary Material). All but one of the variable domains (N2.5) displayed a statistically significant increase in affinity relative to wild type (p-values <0.01; Figure 7A). This suggests that our library design and selection strategies enable robust identification of variable domains with improved affinity. Interestingly, variants with the greatest improvements in affinity (at least threefold) contained at least three mutations and up to six mutations. This highlights the inherent limitations of attempting to identify variable domains with large increases in affinity using single mutations. The stability analysis of these variable domains also revealed interesting behaviors (Figure 7B). Most notably, the apparent stability of the VHH domains is much more variable than the affinity measurements. About one-third of variable domains (6 of 17) display similar stabilities as wild type (apparent melting temperature within 1°C of wild type). The variable domains with the largest reductions in apparent melting temperature (>7°C; N2.4, N2.11, N2.12, N2.13, and N2.16) had the highest number of mutations (5-6 mutations). A direct comparison of affinity versus stability for the VHH domains reveals a wide range of affinity/stability trade-offs (Figure 8).

Origins of Affinity/Stability and Affinity/Specificity Trade-offs for Affinity-Matured VHH Domains

To better understand the origins of the strong and weak trade-offs between affinity and both stability and specificity for the selected VHH domains, we performed reversion mutational analysis for two of the variable domains (N2.12 and N2.17) to evaluate the impact of the acquired mutations on affinity, stability, and specificity. Six single reversion mutants were created for N2.12, while four single reversion mutants were created for N2.17. The purities of the reversion mutants were similar to wild type (Figure S11 in Supplementary Material). The affinity and stability measurements are summarized in Figure 9 and Figures S12 and S13 in Supplementary Material. In Figure 9, the affinity is reported as the equilibrium association constant (KA). Reversion mutations that reduced affinity and/or stability (signifying that the original mutations increased affinity and/or stability) correspond to reduced KA or apparent melting temperature (Tm*) values.

For the highest affinity variant identified in our studies (N2.12), one mutation (G49) is highly destabilizing, and reversion to the wild-type residue (A49) results in a large increase in stability (Tm* increases by 7.4°C; p-value of 2 × 10⁻⁵; Figure 9A) without a significant change in affinity (p-value of 0.67; Figure 9A). Surprisingly, this reversion mutant is the most desirable affinity-matured VHH domain that we obtained, as the large affinity enhancement (>7-fold) is achieved without compromising stability (p-value of 0.099 for comparison to wild type). This reversion mutational analysis also reveals that the affinity enhancement of N2.12 is largely due to four mutations (W52b, R53, S96, and T100d). The S96 mutation is particularly interesting because it contributes positively both to affinity and stability, as judged by the fact that the reversion mutation (F96) reduces both properties (p-values <0.03). By contrast, the R53 mutation increases affinity (p-value of 0.004) at the cost of stability (p-value of 0.001), and the W52b and T100d mutations increase affinity (p-values <0.005) without significantly impacting stability (p-values >0.1).
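Because Figure 9 reports affinity as the equilibrium association constant, the fitted KD values convert as KA = 1/KD. A short sketch of this conversion, with first-order propagation of the fit uncertainty (an assumption about how the error bars are handled), using the KD values quoted earlier:

```python
def ka_from_kd(kd_nM, kd_err_nM):
    """Convert KD (nM) to KA (M^-1) with first-order error propagation:
    KA = 1/KD, sigma_KA = sigma_KD / KD^2."""
    kd = kd_nM * 1e-9
    err = kd_err_nM * 1e-9
    return 1.0 / kd, err / kd**2

for name, kd, err in [("wild type", 57.6, 9.0),
                      ("N2.12", 7.6, 0.4),
                      ("N2.17", 13.2, 4.8)]:
    ka, ka_err = ka_from_kd(kd, err)
    print(f"{name}: KA = {ka:.2e} +/- {ka_err:.2e} M^-1")
```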
Reversion mutational analysis of the more stable VHH domain (N2.17) revealed key differences relative to the less stable N2.12 variant (Figure 9B). None of the four reversion mutations in N2.17 resulted in changes in apparent melting temperature >2°C. The most destabilizing N2.17 mutation was F100f, and the reversion mutation S100f increased stability to levels modestly higher than the wild-type N2 domain without a significant change in affinity relative to N2.17 (p-value of 0.74). The three key affinity mutations (W52b, S96, and T100d), which were also observed in the less stable N2.12 domain, had little impact on stability (<1°C). These findings highlight that the affinity/stability trade-offs observed in our enriched library can be addressed either by screening a sufficient number of VHH variants or by performing reversion mutational analysis to identify destabilizing mutations that are not required for affinity.

The specificity of the reversion mutants was analyzed by evaluating their relative propensity to interact with milk proteins (Figure 10). A decrease in the normalized specificity of a reversion mutant indicates that the original mutation has a positive impact on antibody specificity. The N2.12 variable domain, which possesses similar specificity as the wild-type N2 domain, acquired five mutations that decreased specificity (p-values <0.01; Figure 10A). However, N2.12 also acquired a single mutation (W52b) that increased specificity (p-value of 9.4 × 10⁻⁶; Figure 10A) and which appears to offset the negative effects of the other five mutations. Interestingly, the improved specificity of N2.17 relative to wild type appears to be due to three mutations that enhance specificity (W52b, S96, and F100f; p-values <1.1 × 10⁻⁵; Figure 10B). This analysis highlights that affinity-enhancing mutations can contribute both positively and negatively to antibody specificity, and that significant improvements in specificity can be due to the cumulative effects of multiple mutations.

Computational Analysis of Natural Diversity Mutations That Enhance Affinity

To gain further understanding about how the selected mutations increased VHH affinity, we performed computational modeling of two of the mutant variable domains (N2.12 and N2.17). This was accomplished by introducing the corresponding mutations into the crystal structure of the wild-type N2 domain in complex with the α-synuclein peptide (PDB: 2X6M) and relaxing the structures via CHARMM force field energy minimization (59). The highest affinity domain we identified after library sorting (N2.12) contains six mutations that are located near but generally not in direct contact with the antigen (Figure 11A). The one exception is I55 in CDR2 (V55 in wild type), which forms a direct contact with E137 in the α-synuclein peptide via an interaction between the backbone amide in the antibody (I55) and carboxylate oxygen in the antigen (E137). However, this does not appear to explain the increased affinity of N2.12 because the mutation increases the interaction distance (2.6 Å) relative to wild type (1.7 Å). Instead, the increase in affinity for N2.12 appears to be due to indirect effects that involve enhancement of existing interactions as well as introduction of new interactions that involve wild-type CDR residues (Figure 11B). This includes an enhanced salt bridge between K56 (side chain nitrogen) in CDR2 and E139 (carboxylate oxygen) in the antigen.
Moreover, a new electrostatic interaction is introduced between T57 (backbone carbonyl oxygen) in CDR2 and A140 (backbone amide nitrogen) in the antigen. The latter interaction appears to be mediated by a water bridge in both the crystal structure and energy-minimized (relaxed) structure of the wild-type antibody-antigen complex (data not shown).

Figure 11 | (A) Modeled structure of the N2.12 VHH in complex with the α-synuclein peptide. The six acquired CDR mutations are highlighted in black text, the wild-type residues are shown in gray, the nitrogen atoms are shown in blue, and the oxygen atoms are shown in red. Only one of the CDR mutations (Ile55) makes direct contact with the antigen, and the distance of this interaction is increased relative to wild type. (B) New or enhanced interactions between the N2.12 VHH and the α-synuclein peptide. Direct electrostatic interactions are shown with black dotted lines, and the distances are indicated in black for N2.12 relative to the original distances for wild type in blue (if there was a wild-type interaction). VHH residues are numbered according to Kabat.

Similar findings were obtained by examining the modeled structure of the more stable N2.17 variant in complex with the α-synuclein peptide (Figure 12A). None of the four mutations make direct contact with the antigen. Instead, the gains in VHH affinity appear to be due to indirect effects involving wild-type CDR residues (Figure 12B), as observed for N2.12 (Figure 11B). We observe enhanced hydrophobic packing between G100b, G100c, and T100d in CDR3 with A140 in the antigen (Figure 12B). In addition, there are new direct electrostatic interactions between T57 (CDR2) and A140 (antigen) as well as G100b (CDR3) and A140 (antigen). Finally, two electrostatic interactions are enhanced, namely R50 (CDR2) with A140 (antigen) and G100c (CDR3) with A140 (antigen). This enhancement is due to A140 in the α-synuclein peptide moving deeper into the binding pocket of the VHH domain, which is mediated by structural rearrangement of the CDRs. These results are consistent with the general understanding that affinity maturation of antibodies involves subtle changes to the antigen-binding site and that beneficial mutations often mediate their effects indirectly via structural changes that optimize interactions involving wild-type CDR residues (60-63).
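The polar-contact analysis behind these structural comparisons (described under Experimental Methods as shell scripts enumerating contacts within 5 Å) can also be sketched in Python with Biopython. This is a re-implementation under stated assumptions, not the authors' scripts, and the chain identifiers are guesses that must be checked against the actual coordinate file.

```python
from Bio.PDB import PDBParser, NeighborSearch

# Enumerate candidate polar contacts (N/O atom pairs within 5 A) between the
# antigen peptide and the VHH. Chain IDs below are assumptions.
structure = PDBParser(QUIET=True).get_structure("complex", "2X6M.pdb")
model = structure[0]
vhh_chain, peptide_chain = model["A"], model["B"]  # assumed chain assignment

polar = lambda atom: atom.element in ("N", "O")
vhh_atoms = [a for a in vhh_chain.get_atoms() if polar(a)]
search = NeighborSearch(vhh_atoms)

for atom in peptide_chain.get_atoms():
    if not polar(atom):
        continue
    for partner in search.search(atom.coord, 5.0):  # 5 A cutoff
        res_a, res_v = atom.get_parent(), partner.get_parent()
        dist = atom - partner  # Biopython overloads '-' as interatomic distance
        print(f"{res_a.get_resname()}{res_a.id[1]} {atom.get_name()} -- "
              f"{res_v.get_resname()}{res_v.id[1]} {partner.get_name()}: "
              f"{dist:.2f} A")
```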
Discussion

This work identifies several key factors that impact the efficiency and robustness of antibody affinity maturation. First, we find that multiple mutations (>4) are necessary to achieve large (>5-fold) gains in affinity for the N2 VHH antibody. While there are obvious exceptions to our findings (1, 3, 5), they are generally consistent with previous findings that many single affinity-enhancing mutations cause relatively modest increases in affinity (24, 64-66). It is possible to identify and combine several single mutations that enhance affinity, but the collective effects of multiple mutations on antibody affinity are complex and often not additive (58, 62, 67, 68). Moreover, generating all possible combinations of single antibody mutations is a time-consuming process that involves multiple rounds of expression and affinity evaluation. It is also notable that the need for several mutations to achieve large increases in antibody affinity is likely at least part of the reason that it is particularly challenging to use computational methods for antibody affinity maturation (3, 24, 58, 67, 69). Accurate prediction of subtle structural changes caused by combinations of CDR mutations is notoriously difficult. Our natural diversity mutagenesis approach is attractive because it enables sampling of all possible combinations of single and multiple CDR mutations (~1-5 mutations per CDR site across 14 sites in this work) for rapid identification of antibody variants with large increases in affinity using a single antibody library.

Several aspects of our natural diversity mutagenesis approach deserve further consideration. First, a primary problem during affinity maturation is acquiring mutations that increase affinity but reduce specificity. Our use of natural antibody diversity to guide library design, which has been reported previously in related ways by others (28-41, 70), avoids overrepresentation of highly interactive residues that are likely to promote non-specific interactions. Many previous studies (including those from our own lab) have used NNN or NNK degenerate codons in antibody CDRs to identify affinity-enhancing mutations (58, 71-73). One of the limitations of this approach is that the frequency of sampling each amino acid is based on its corresponding codon frequency. In our experience, this is especially problematic for highly interactive residues such as arginine that have a large number of codons (up to six depending on the specific degenerate codon). By contrast, our library design infrequently sampled highly interactive residues, such as arginine (2 out of 14 CDR sites), tryptophan (1 out of 14 CDR sites), and phenylalanine (2 out of 14 CDR sites). In fact, one of the key affinity mutations in both N2.12 and N2.17 was F96S, which removed an aromatic residue and increased the hydrophilicity of CDR3.

It is also notable that our mutational approach was useful for identifying beneficial mutations in the highly variable CDR3 in addition to the less variable CDR2. Two of the key affinity mutations in both N2.12 and N2.17 (F96S and S100dT) were in CDR3. The most common residues at many sites in CDR3 occur at relatively low frequency (13-21% for positions 95-100g). Therefore, it was not obvious that sampling such a small number of natural diversity mutations (1-3 mutations per site for nine sites in CDR3) in such a highly diverse CDR would be sufficient to identify affinity-enhancing mutations. For example, the natural occurrence of the wild-type residue (Phe) at position 96 in CDR3 is 4% (combined human and camelid diversity), and we sampled only one mutation (Ser) at this site that is also relatively uncommon (9%) despite being more common than most other residues at this CDR3 site. Likewise, we sampled three mutations at position 100d in CDR3 (Gly, Ala, and Thr) that were all relatively uncommon (5-11%). Nevertheless, we identified a beneficial mutation (Thr) that occurs relatively infrequently (5%) at this site in CDR3. These results suggest that natural diversity mutations in CDR3 may be particularly useful for affinity maturation libraries aimed at isolating combinations of mutations that result in large increases in affinity without over-enrichment in highly interactive residues that are likely to also mediate non-specific interactions.

Despite the strengths of our natural diversity mutagenesis approach, one obvious weakness is related to the use of inexpensive primer synthesis methods that rely on standard degenerate codons to generate libraries.
This results in the limitation that some combinations of wild-type CDR residues and the most common natural diversity mutations are (i) not possible, (ii) achievable only by including too many additional mutations to justify them, and/or (iii) achievable only by including undesirable codons (e.g., those encoding cysteine or stop codons). While we allowed a cysteine mutation at one position (100c) to maximize natural diversity coverage, it is undesirable to include too many cysteine mutations due to complications associated with unpaired cysteines. An example of the limitations of using degenerate codons to generate antibody libraries is related to position 52b in CDR2. The wild-type residue at position 52b is Leu, and the two most common residues at this position are Lys (29% based on combined camelid and human natural diversity) and Arg (22%). However, sampling these residues requires a minimum of six codons, which corresponds to a minimum of five residues and overrepresentation of arginine (two codons), to achieve natural diversity coverage of 56% (an average of ~9% per codon). Therefore, we sampled Gly, Val, and Trp in addition to the wild-type residue (Leu) at position 52b using four codons to achieve natural diversity coverage of 36% and similar average diversity per codon (9%). Likewise, the wild-type residue at position 96 in CDR3 is Phe. Sampling Phe and the most common residue (Gly) requires a minimum of four codons that include Val (5%) and Cys (1.5%). Sampling these four residues would result in natural diversity coverage of 23% (an average of ~6% per codon). Instead, we sampled Ser in addition to Phe using two codons to achieve natural diversity coverage of 13% (an average of ~6% per codon). This approach allowed us to sample a similar amount of natural diversity per codon and eliminated the use of an undesirable codon (Cys). These examples highlight the limitations of using standard degenerate codons to achieve the highest possible coverage of natural diversity mutations. This limitation could be readily solved using more expensive trinucleotide synthesis methods.
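The coverage arithmetic above can be checked mechanically by expanding a degenerate codon and summing the natural frequencies of the residues it encodes. In the sketch below, the degenerate codon KKG is an assumption chosen because it encodes exactly the four residues sampled at position 52b (Leu, Gly, Val, and Trp); the Leu/Gly/Val/Trp frequencies are likewise placeholders chosen to reproduce the 36% coverage quoted above, while the Lys (29%) and Arg (22%) values come from the text.

```python
from itertools import product
from Bio.Data import CodonTable

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}
FORWARD = CodonTable.unambiguous_dna_by_id[1].forward_table  # stops absent

def coverage(degenerate_codon, natural_freq):
    """Residues encoded by a degenerate codon, their summed natural
    frequency, and the per-codon average of that coverage."""
    codons = ["".join(c) for c in product(*(IUPAC[b] for b in degenerate_codon))]
    residues = {FORWARD.get(c, "*") for c in codons}  # '*' marks stop codons
    total = sum(natural_freq.get(aa, 0.0) for aa in residues)
    return residues, total, total / len(codons)

# Natural frequencies at position 52b (Lys/Arg from the text; rest illustrative)
freq_52b = {"L": 0.07, "K": 0.29, "R": 0.22, "G": 0.10, "V": 0.09, "W": 0.10}
residues, total, per_codon = coverage("KKG", freq_52b)
print(residues, f"coverage={total:.0%}", f"per codon={per_codon:.0%}")
```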
Our results also demonstrate that affinity/stability trade-offs are common during antibody affinity maturation. We and others have previously found that CDR mutations that increase antibody affinity can be destabilizing (48, 49, 51). Indeed, several examples of natural antibodies have been reported that demonstrate how affinity-enhancing mutations can be destabilizing (48, 49). This destabilization is likely due to strain on the antibody framework that results from modifying the structure and chemistry of the antigen-binding site for increased affinity. Encouragingly, about one-third of our affinity-matured antibodies displayed little reduction in stability (<1°C), and we identified one of the highest affinity variants with similar stability as wild type after additional mutational analysis (N2.12 with A49; Tm* of 67.1 ± 0.3°C relative to 67.8 ± 0.3°C for wild type). Nevertheless, the fact that the highest affinity variants identified after library sorting were some of the most destabilized ones (e.g., N2.12 and N2.16) highlights the challenge of affinity/stability trade-offs during affinity maturation. One promising approach is to combine natural diversity mutations in the CDRs with those that naturally occur in the frameworks (74) to co-select for both affinity and stability mutations. We are currently in the process of evaluating this strategy to further improve the affinity maturation process for a wide range of single- and multidomain antibodies to isolate variants that possess high stability in addition to high affinity.

Another notable aspect of our findings relates to the impact of affinity-enhancing mutations on antibody specificity. Specificity is arguably the most difficult antibody property to maintain or enhance during affinity maturation (42-44). This is likely due to the natural tendency to accumulate highly interactive (solvent-exposed) amino acids in antibody CDRs during affinity maturation that improve antigen binding but also promote non-specific interactions and reduced specificity. Indeed, we observed trade-offs between affinity and specificity for the N2.12 variant, as three of the four key affinity-enhancing mutations (R53, S96, and T100d) reduced specificity (Figure 10A). Interestingly, the N2.17 variant displayed reduced affinity/specificity trade-offs, as two (W52b and S96) of the three affinity-enhancing mutations also increased specificity (Figure 10B). The latter results are particularly notable because these same mutations (W52b and S96) also increased the stability of N2.17. It is also notable that the impacts of mutations on affinity and specificity were context dependent, as some mutations (e.g., S96) that increased affinity displayed opposite impacts on specificity (reduced specificity for N2.12 and increased specificity for N2.17). Despite these complexities, it will be important in the future to better define how CDR sequence and structure impact antibody specificity because antibody specificity appears to be a key factor in differentiating approved antibody therapeutics from those in clinical trials (75).

Conclusion

Our systematic approach for using natural antibody diversity to design libraries with combinations of single and multiple mutations with limited diversity at each CDR site is effective for increasing the affinity of a camelid VHH domain while maintaining or enhancing stability and specificity. These encouraging results will need to be evaluated for other types of single- and multidomain antibodies to assess their generality. It will also be important to develop computational methods to improve library design by optimizing natural diversity coverage while minimizing the number of mutations. This is relatively straightforward to perform at any given CDR site, but it is more challenging to globally optimize with increasing numbers of CDR sites. Nevertheless, efforts in optimizing antibody library design are key to avoid oversampling abnormal CDR sequences that are unlikely to lead to high antibody stability and specificity in addition to high affinity. We expect that methods such as the ones we have demonstrated in this work will be useful for rapidly and systematically optimizing antibodies for a wide range of diagnostic and therapeutic applications.

Experimental Methods

Cloning and Library Construction

The wild-type N2 gene was created using PCR-based gene synthesis (76). The amino acid sequence of the N2 VHH domain (Figure 1) was obtained from the PDB (2X6M). A hexahistidine tag was added to the C-terminus of the VHH domain for purification. The gene was flanked with N-terminal HindIII and C-terminal XhoI restriction sites. The digested PCR product was then ligated into a bacterial expression vector (pET-17b, Novagen) that contained an N-terminal pelB sequence for periplasmic secretion.
Single point mutations of N2 were generated via site-directed mutagenesis using PfuUltra II (600850, Agilent Technologies). The N2 natural diversity library was created using overlap extension PCR to introduce mutations in portions of CDR2 and CDR3 (Figure S3 in Supplementary Material). Mutagenesis was performed using degenerate codons at 14 sites in CDR2 and CDR3 (Figure 3). The first step in library generation was to perform three PCRs. These included amplification of DNA fragments encoding the N-terminus of the VHH domain to framework 2, CDR2 to framework 3, and CDR3 to the C-terminus of the VHH domain. The DNA fragments overlapped each other by ~20 bases, which enabled the three DNA fragments to be combined in a final amplification step using terminal primers. The terminal primers contained flanking NheI and SalI restriction sites as well as 45 bases of homology on each end with the yeast display plasmid (pCTCON2). The N2 natural diversity library genes were ligated into the yeast display plasmid and transformed into S. cerevisiae (EBY100) via homologous recombination. This process was performed as described previously (9) with minor modifications to increase transformation efficiency. These modifications include using more yeast cells (500 mL of EBY100 was grown to OD600 of 1.2) for a single library transformation, more DNA (nine preparations of 4 µg PCR product and 1 µg digested vector), and electroporation at higher voltage (2,500 V). After the yeast cells were allowed to recover, the yeast library was grown in SDCAA (500 mL of 20 g/L dextrose, 6.7 g/L yeast nitrogen base, 5 g/L casamino acids, 14.7 g/L sodium citrate, and 4.3 g/L citric acid) for 48 h and aliquoted for storage at −80°C. The library transformation resulted in 2 × 10⁸ transformants. To assess the quality of the library, a small amount of the yeast library culture (1 mL) was miniprepped (Zymoprep II yeast miniprep kit, Zymo Research) and transformed into electroporation-competent bacterial cells (XL1-Blue, 200228, Agilent Technologies). Several (22) plasmids from the initial library were isolated and sequenced, and all were found to be unique.

The natural diversity library was sorted via five rounds of MACS and one round of FACS. For each sort, yeast were washed twice with PBS containing BSA (1 mg/mL; PBS-B) and resuspended in a solution containing the biotinylated α-synuclein peptide (biotin-GYQDYEPEA) and PBS-B supplemented with 1% milk (non-fat dry milk; PBS-BM). For the FACS sort, 1,000× diluted anti-c-myc chicken IgY antibody (A-21281, Life Technologies) was added to this mixture to detect VHH display. The yeast and α-synuclein peptide solution was mixed end-over-end at room temperature for 2-3 h. Next, the cells were washed once with PBS-B and sorted for antigen binding.
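A useful sanity check on the library construction is whether the 2 × 10⁸ transformants can plausibly oversample the designed diversity, which follows from multiplying the residue options across the 14 mutated CDR sites. The per-site counts below are hypothetical placeholders consistent with the described design (wild type plus ~1-5 mutations per site), not the actual library composition.

```python
import math

# Residue options per mutated CDR site (wild type + sampled mutations);
# hypothetical counts in the 2-6 range described for this library.
options_per_site = [4, 4, 2, 4, 3,                 # CDR2 sites (illustrative)
                    2, 3, 4, 2, 3, 4, 2, 4, 3]     # CDR3 sites (illustrative)

protein_diversity = math.prod(options_per_site)
transformants = 2e8
print(f"theoretical protein-level diversity: {protein_diversity:.2e}")
print(f"oversampling factor: {transformants / protein_diversity:.1f}x")
```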
Antibody Affinity Analysis

The affinities of the N2 VHH and variants thereof were measured using fluorescence polarization. The VHH domains were prepared at a range of concentrations (0.8 nM-1.6 µM) and mixed (75 µL) with the α-synuclein peptide labeled with a tetramethylrhodamine (TAMRA) fluorophore (4 nM, 75 µL; Genemed Synthesis Inc.). The antibody-antigen mixtures were prepared in 96-well flat-bottom black polystyrene plates (7605, ThermoFisher Scientific). The binding buffer was PBS supplemented with BSA [0.001% (w/v)] and Tween 20 [0.001% (v/v)]. Background wells were prepared that contained the same concentration of TAMRA-labeled α-synuclein peptide without antibody. The antibody-antigen mixtures were allowed to equilibrate at room temperature for 3 h. Fluorescence polarization was then measured (Infinite M1000 PRO, Tecan) at an excitation wavelength of 530 nm (5 nm bandwidth) and an emission wavelength of 582 nm (10 nm bandwidth). The fluorescence polarization raw signals were background subtracted, and two replicates were averaged for each antibody concentration. The average data were then fit to determine the KD value using a four-parameter model that accounts for the fact that the antibody is not in excess of antigen at some of the evaluated antibody concentrations; a standard model of this (ligand-depletion) form is

$$\mathrm{FP}([A]) = \mathrm{FP}_{\min} + \left(\mathrm{FP}_{\max}-\mathrm{FP}_{\min}\right)\frac{\left([A]+L_{T}+K_{D}\right)-\sqrt{\left([A]+L_{T}+K_{D}\right)^{2}-4[A]L_{T}}}{2L_{T}}$$

where [A] is the total antibody concentration, and FPmin, FPmax, KD, and the total antigen concentration LT are the fitted parameters.

Antibody Stability Analysis

The fluorescence of an extrinsic dye (Protein Thermal Shift dye, Life Technologies) was measured as the plate was heated from 37 to 95°C. Many (>60) acquisitions were collected per 1°C, and the heating rate was ~0.6°C/min. The apparent melting temperatures of the VHH domains were determined by analyzing the first derivative of the fluorescence with respect to temperature. This involved fitting a second-order polynomial to the major peak and solving for the temperature at which the maximum occurred (or the minimum if the negative derivative is used). The reported melt curves were background subtracted using background signals obtained without antibody. Next, the fluorescence data were subtracted by the relatively low signal at 50°C and divided by the maximum fluorescence signal (after the maximum signal was subtracted by the signal at 50°C). Finally, the pre- and post-transition regions of the normalized fluorescence data were flattened using linear fits (58).
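The melting-temperature extraction described above (take the first derivative of the fluorescence, fit a second-order polynomial around the major peak, and solve for its maximum) is straightforward to reproduce. Here is a minimal numpy sketch on synthetic data standing in for a real melt curve:

```python
import numpy as np

# Synthetic melt curve standing in for the background-subtracted dye fluorescence
temps = np.arange(37.0, 95.0, 0.1)
tm_true = 67.8
signal = 1.0 / (1.0 + np.exp(-(temps - tm_true) / 1.5))  # sigmoid unfolding curve

# First derivative of fluorescence with respect to temperature
dfdt = np.gradient(signal, temps)

# Fit a second-order polynomial to the points around the major derivative peak
peak = np.argmax(dfdt)
window = slice(max(peak - 10, 0), peak + 11)
a, b, c = np.polyfit(temps[window], dfdt[window], 2)

# The vertex of the parabola gives the apparent melting temperature
tm_apparent = -b / (2.0 * a)
print(f"apparent Tm = {tm_apparent:.1f} C")
```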
Antibody Specificity Analysis

The specificities of the VHH domains were evaluated using two methods. The first method evaluated the propensity of the purified antibodies to bind to well plates coated with milk proteins. Transparent 384-well plates (MaxiSorp, 464718, ThermoFisher Scientific) were coated with milk [100 µL of 10% (w/v) milk in PBS with 0.1% (v/v) Tween 20; PBST] for 8 h and then washed with PBS. The VHH domains were diluted to 1,000 nM in PBST, added to the well plates, and allowed to incubate overnight at room temperature. The well plates were then washed with PBS, and secondary reagents were added to detect bound antibodies. The second method evaluated the propensity of the purified antibodies to bind to six immobilized non-antigens [ovalbumin (A5503, Sigma), BSA (BP9706, Fisher Bioreagents), KLH (H8283, Sigma), ribonuclease A (R6513, Sigma), avidin (A9275, Sigma), and lysozyme (L6876, Sigma)]. Non-antigen proteins were diluted in PBS (75 µL, 0.2 mg/mL) and immobilized in separate wells at 37°C for 1 h in 384-well plates. The wells were subsequently washed with PBST. Variable domains (1,000 nM, 25 µL) in PBS with 1 g/L BSA and 0.1% (v/v) Tween 20 were added to the well plates and allowed to incubate at room temperature for 2 h. Detection of bound VHH was performed similarly for both specificity tests. Secondary antibody (25 µL of 1,000× diluted anti-6X His tag antibody; ab18184, Abcam) in PBST was added, allowed to incubate for 1 h, and then washed with PBS. Next, the well plates were incubated with diluted horseradish peroxidase-conjugated goat anti-mouse IgG (25 µL of 1,000× dilution; 32430, Thermo Fisher Scientific) in PBST for 1 h and then were washed with PBS. The bound antibody was detected by adding substrate (25 µL of 1-Step Ultra TMB-ELISA, 34028, Thermo Fisher Scientific), quenching after 20-40 min (25 µL of 2 M H2SO4), and measuring the absorbance values at 450 nm (Tecan Safire 2 plate reader). Normalized binding signals were calculated as signal divided by background, and the background values were absorbance measurements without primary (VHH) antibody.

Computational Modeling

The VHH-antigen crystal structure (PDB: 2X6M) was energy minimized using the CHARMM force field and the adopted basis Newton-Raphson routine (78). We applied the Newton-Raphson algorithm to a subspace of the coordinate vectors that were sampled by the displacement coordinates (during each iteration) with the objective of minimizing the energy of the complex. This enabled the rate of change of the gradient vectors to be computed and coupled with a subsequent eigenvector analysis to avoid saddle points (metastable energy states). At every Newton-Raphson iteration, the residual gradient vector was calculated and a steepest descent step was added to the Newton-Raphson step. This was done to incorporate a new direction into the basis set to avoid metastable states and find the shortest trajectory toward the atomic coordinates corresponding to the minimum potential energy of the complex. Computational alanine scanning mutagenesis was performed in a similar manner as described previously (79). Python scripts were written for a new OptMAVEn module to compute the difference between binding energies of the N2 single alanine mutants (which were energy minimized) and wild-type N2 (2X6M). Binding energy calculations were performed using the conformation-dependent binding energy function as used in the Robetta full-chain protein structure prediction server (80, 81). Structural models of two affinity-matured VHH variants (N2.12 and N2.17) in complex with antigen were also generated. These structures were simulated alongside the energy-minimized wild-type complex. We created the N2.12 and N2.17 variants using the Mutator program of the IPRO suite of programs (82). This approach uses the residue positions and mutations as input, and it performs backbone perturbation, rotamer repacking, and energy minimization. A mixed-integer linear programming optimization step was performed to systematically identify the optimal rotamer combination of the new residues at the mutation sites and residues within 4.5 Å (83). This was done to prevent energetically unfavorable steric clashes upon mutation. We performed ensemble structure refinements to establish favorable Lennard-Jones interactions and to eliminate severe steric repulsions. The N2.12 and N2.17 variants were visualized in complex with antigen using PyMOL (version 1.8, Schrödinger). Shell scripts were written to identify direct and indirect polar contacts between the antigen (α-synuclein residues DYEPEA) and VHH variants. Only contacts within 5 Å were analyzed.

Author Contributions

KT, PT, RC, TL, and CM designed the research; KT and SS performed experiments; RC and TL performed computational analysis; SL performed bioinformatics analysis; and KT, RC, CM, and PT wrote the paper.

Acknowledgments

We thank Dane Wittrup for providing the pCTCON2 yeast display vector and EBY100 yeast strain and for helpful discussions; and Eric Shusta, David Colby, Jennifer Cochran, Eric Boder, and Ben Hackel for their helpful advice in performing yeast surface display. We thank Catherine Royer for the use of the Infinite M1000 PRO plate reader. We also thank members of the Tessier lab for their helpful suggestions.
An improved mitochondrial reference genome for Arabidopsis thaliana Col-0

Arabidopsis thaliana remains the foremost model system for plant genetics and genomics, and researchers rely on the accuracy of its genomic resources. The first completely sequenced angiosperm mitochondrial genome was obtained from A. thaliana C24 (Unseld et al., 1997), and more recent efforts have produced additional A. thaliana reference genomes, including one for Col-0, the most widely used ecotype (Davila et al., 2011). These studies were based on older DNA sequencing methods, making them subject to errors associated with lower levels of sequencing coverage or the extremely short read lengths produced by early-generation Illumina technologies. Indeed, although the more recently published A. thaliana mitochondrial reference genome sequences made substantial progress in improving upon earlier versions, they still have high error rates. By comparing publicly available Illumina sequence data to the A. thaliana Col-0 reference genome, we found that it contains a sequence error every 2.4 kb on average, including 57 SNPs, 96 indels (up to 901 bp in size), and a large repeat-mediated rearrangement. Most of these errors appear to have been carried over from the original A. thaliana mitochondrial genome sequence by reference-based assembly approaches, which has misled subsequent studies of plant mitochondrial mutation and molecular evolution by giving the false impression that the errors are naturally occurring variants present in multiple ecotypes. Building on the progress made by previous researchers, we provide a corrected reference sequence that we hope will serve as a useful community resource for future investigations in the field of plant mitochondrial genetics.

To determine whether these variants represented sequencing artefacts or actual biological differences between the two Col-0 samples, we extracted diagnostic k-mers from the raw reads used in our analysis and those from the original A. thaliana Col-0 sequencing effort (SRA SRR307226). We confirmed that all the variants identified in our assembly were strongly supported in both sets of sequencing reads (Table 1), suggesting that the differences represent assembly errors in the published Col-0 reference sequence rather than real polymorphisms. We further validated these variant calls using the double-stranded consensus sequence from a dataset (SRA SRR6420475) that was generated with a highly accurate technique known as duplex sequencing (Schmitt et al., 2012). By comparing the same set of 57 SNPs and 96 indels to the raw reads in the resequenced C24 dataset (SRA SRR307231), we identified 28 variants for which the original reference allele was supported in C24 (Table 1). These cases, therefore, represent true polymorphisms that distinguish the C24 and Col-0 ecotypes but were not detected in the original reference-based assembly of the Col-0 mitochondrial genome, such that the published Col-0 sequence improperly retains the C24 allele. In contrast, we found that the raw C24 sequence reads did not support the original reference allele in the remaining 125 variants (82%) (Table 1). These cases appear to result from errors in the original C24 genome sequence (Unseld et al., 1997) that were not detected in either the resequencing of C24 or the reference-based assembly of Col-0 and, thus, have been propagated across reported genome sequences from multiple ecotypes (Davila et al., 2011).
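The diagnostic k-mer check is simple to reproduce: for each variant, build one k-mer spanning the site for the reference allele and one for the alternate allele, then count exact matches (on both strands) in the raw reads. The sketch below is illustrative; the file name and the example 25-mers are placeholders, not actual Col-0 sequences.

```python
import gzip

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def count_kmer(fastq_gz, kmer):
    """Exact occurrences of a diagnostic k-mer (both strands) in raw reads."""
    rc, n = revcomp(kmer), 0
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:  # sequence lines of each FASTQ record
                n += line.count(kmer) + line.count(rc)
    return n

# Hypothetical SNP: reference vs. alternate 25-mers centered on the variant site
ref_kmer = "ACGTACGTACGTAACGTACGTACGT"
alt_kmer = "ACGTACGTACGTGACGTACGTACGT"
for allele, kmer in [("reference", ref_kmer), ("alternate", alt_kmer)]:
    print(allele, count_kmer("SRR307226.fastq.gz", kmer))
```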
Many of these errors are found in regions differing by multiple SNPs or by multi-nucleotide indels, so it is not surprising that they were difficult to detect with short-read sequencing data. However, there are also many individual SNPs and 1-bp indels in this set (Table 1), so the source of the assembly artefacts is unclear in some cases.

Our newly assembled A. thaliana Col-0 reference sequence also differs from the published Col-0 sequence in two major structural variants. First, it includes a 901-bp sequence that is absent from the published Col-0 genome. The full length of this sequence is clearly detectable in the raw reads of the original Col-0 study (SRA SRR307226). It would be inserted after position 48,895 in the published Col-0 genome (JF729201) and would correspond precisely to the last 901 bp of the C24 reference genome. The fact that this deletion occurs exactly at the point where the circular reference genome map had been arbitrarily "cut" for reporting as a linearized sequence suggests that it might have resulted from an inadvertent byproduct of sequence handling and reorientation. Second, our newly assembled A. thaliana Col-0 reference sequence differs in a large rearrangement, apparently resulting from recombination between a pair of identical 453-bp inverted repeats at positions 36,362-36,818 and 143,953-144,409. The clear majority (30 of 33; 91%) of read pairs spanning these repeats support our reported conformation. We are not able to test for similar support in the raw Col-0 reads from Davila et al. (2011) because their insert sizes are too short to span the repeat copies, but we did verify that our reported configuration predominates in Illumina paired-end and PacBio sequencing reads from four other Col-0 datasets (NCBI SRA SRR1581142, SRR5012968, SRR5882797, and SRP073602). Therefore, this configuration is likely the most common among different Col-0 seed stocks.
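Tallies like the 30-of-33 read-pair count above can be sketched with pysam, given reads aligned to the assembly: count pairs with one mate in the flank upstream of a repeat copy and its mate mapped beyond the far end. The coordinates, contig name, and file name below are placeholders, and the real analysis must also resolve pair orientation across both repeat copies.

```python
import pysam

# Hypothetical 0-based coordinates flanking one copy of the 453-bp inverted repeat
REPEAT_START, REPEAT_END, FLANK = 36361, 36818, 500

def spanning_pairs(bam_path, contig):
    """Count read pairs with one mate in the left flank of the repeat copy and
    its mate beyond the far end, i.e., pairs that bridge the junction."""
    count = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig, REPEAT_START - FLANK, REPEAT_START):
            if not read.is_paired or read.mate_is_unmapped:
                continue
            # Mate must land on the same contig, past the far end of the repeat
            if read.next_reference_id == read.reference_id and \
               read.next_reference_start >= REPEAT_END:
                count += 1
    return count

print(spanning_pairs("col0_reads.sorted.bam", "mitochondria"))
```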
Subsequent research in Arabidopsis mitochondrial genetics

For good reason, A. thaliana is the "go-to" model for studies of plant mitochondrial genome function, stability, mutation, and molecular evolution (Davila et al., 2011; Christensen, 2013; Cupp and Nielsen, 2014; Zampini et al., 2015; Gualberto and Newton, 2017). As such, there is great incentive to make the Arabidopsis reference mitochondrial genomes the gold standard in the field. Indeed, the extensive characterization of structural variation in these genomes has gone a long way to accomplish this goal (Arrieta-Montiel et al., 2009). However, sequence errors still exist in the reported reference genomes, with potentially detrimental and far-reaching effects on related research efforts. This is especially true because the actual rate of sequence evolution in plant mtDNA is usually very low (Wolfe et al., 1987), so even a modest amount of sequencing error can result in a problematic signal-to-noise ratio. For example, a recent study was performed to infer the distribution and spectrum of mutations across the Arabidopsis mitochondrial genome and used the sequence variants that distinguish published C24 and Col-0 mtDNA sequences (Christensen, 2013). Such comparative analyses of published genomic data are commonplace and can make substantial contributions to the field, but it is now clear based on our reexamination of the Col-0 sequence that approximately 40% of the analyzed variants in that study were artefacts (Table 2).

Another recent investigation was conducted to detect de novo mutations in A. thaliana organelle genomes using deep sequencing (Zampini et al., 2015). The authors applied a natural and seemingly conservative approach by rejecting any identified mitochondrial variant that did not differ from "both" published Col-0 mitochondrial genomes, but this choice highlights two pressing concerns. First, it illustrates the continued confusion in the field about the fact that the original A. thaliana reference mitochondrial genome is derived from C24 and not Col-0. Second, it reflects a misunderstanding about the extent to which the multiple available reference genomes constitute independent data points. The reference-guided approach used to assemble mtDNA sequences from C24, Col-0, and Ler (Davila et al., 2011) appears to have incorporated many errors and allelic variants from the reference genome into the new assemblies. Nevertheless, those new assemblies are still reported as separate accessions on GenBank rather than as a set of variant calls, so there is a risk that the many errors shared between them will be falsely perceived as having been independently validated in two or more sequencing datasets. This concern is particularly relevant for the Ler sequence available on GenBank because it was generated with the same short 35-bp reads but a much lower level of sequence coverage (only 19× compared to 230× and 371× for Col-0 and C24, respectively; Davila et al., 2011). For these reasons, it is important that researchers in the field of plant mitochondrial genetics be more broadly aware of the history and methodologies that produced the currently available reference mitochondrial genome sequence for A. thaliana.
Sub-8 nm networked cage nanofilm with tunable nanofluidic channels for adaptive sieving

Biological cell membranes, featuring smart mass-transport channels and sub-10 nm thickness, are viewed as the benchmark inspiring the design of separation membranes; however, constructing highly connective and adaptive pore channels over large-area membranes less than 10 nm in thickness is still a huge challenge. Here, we report the design and fabrication of sub-8 nm networked cage nanofilms that comprise tunable, responsive organic cage-based water channels via a free-interface-confined self-assembly and crosslinking strategy. These cage-bearing composite membranes display outstanding water permeability at the 10⁻⁵ cm² s⁻¹ scale, which is 1-2 orders of magnitude higher than that of traditional polymeric membranes. Furthermore, the channel microenvironments, including hydrophilicity and steric hindrance, can be manipulated by a simple anion exchange strategy. In particular, through ionically associating light-responsive anions to cage windows, such a "smart" membrane can even perform graded molecular sieving. The emergence of these networked cage nanofilms provides an avenue for developing bio-inspired ultrathin membranes toward smart separation.

In nature, cell membranes are selectively permeable and can adaptively transport water and other nutrients to maintain cell viability, owing to their remarkable architecture with smart and highly selective protein channels (e.g., aquaporin 1) aligned in the ultrathin lipid bilayer (less than 10 nm in thickness)1-3. Crafting artificial membranes with similar adaptive channels and functions has profound implications for practical applications4. Emerging artificial channel materials, such as 1D carbon nanotubes5, 2D nanosheets (e.g., graphene and conjugated-polymer frameworks)6-8, and 3D crystalline frameworks (e.g., metal-organic frameworks and covalent organic frameworks)9,10, were reported to possess well-defined nanofluidic channels with mass-transport behaviors comparable to their biological counterparts. The major downside of these channel materials is their poor processability caused by their inherent insolubility; i.e., the high surface energy of the nanodispersions makes them prone to aggregate in solution, thus leading to nonselective interface defects, especially in large-scale membrane synthesis. In addition, it remains technically challenging to control the channel length and align the channels parallel to the membrane's cross-section to reach a minimum mass-transfer path4,11,12.
Porous organic cages (POCs), a type of intriguing 0D nanoporous molecule, possess the salient features of easy-to-modify skeletons and good solubility, which offer options for physicochemical structure regulation and scalable solution processing13-21. More impressively, most POCs with symmetric pore openings can pack into interconnected channel networks for guest transport independent of the channels' orientation22. Recent theoretical calculations have demonstrated that POCs possess water-channel behavior and can provide ultrafast guest permeance and selective separation23-25. A promising example was presented by embedding prototypical imine POCs into liposomes, which showed aquaporin-like water permeability22. However, cell-inspired large-area ultrathin membranes have not been experimentally obtained from these materials. Difficulties may arise from the limited control of the van der Waals packing between cage molecules when phasing out from the dope solution during the membrane formation process13; in addition, most reported POC membranes are based on noncovalent interactions, which unavoidably face the issues of frangibility, defects, and instability, especially in the case of large-scale synthesis26.

Here, we report the successful synthesis of a series of ultrathin networked cage nanofilms with a thickness less than 8 nm using a universal strategy of free-interface-confined self-assembly and crosslinking (FISC) (Fig. 1a). A sharp oil/water (O/W) free interface was introduced to suppress the intrinsic van der Waals packing and direct the 2D self-assembly of the cages; the preorganized cage layers were then crosslinked in situ into continuous ultrathin networked cage nanofilms within the confined 2D space. The nanofilms were able to inherit the nanofluidic channels from the POCs, resulting in exceptional water permeability at a scale of 10⁻⁵ cm² s⁻¹ that surpasses conventional polymeric membranes by 1-2 orders of magnitude. Transferring these networked cage nanofilms onto porous supports affords composite membranes with ultrahigh water permeance up to 360 L m⁻² h⁻¹ bar⁻¹, excellent molecular sieving performance that well surpasses that of most current membranes, and long-term operational stability. A networked cage composite membrane with adaptive permeance can be achieved by microenvironment modulation (e.g., hydrophilicity and steric hindrance) around the pore windows. Impressively, a light-controlled graded molecular sieving system was established by ionically associating light-responsive anions to the POC membrane, in which the continuous separation of three organic dyes in a single-stage, single-membrane process was exemplified (Supplementary Fig. 1).

Preparation and characterization of the ultrathin networked cage nanofilms

A series of mature cycloamine organic cages were synthesized according to the literature. Their chemical structures were confirmed by ¹H nuclear magnetic resonance (¹H NMR) spectra (Supplementary Figs. 2-5). To realize the FISC process, the first key step is to dissolve the insoluble cycloamine organic cages in water; herein, partial quaternization of the amines to enhance solubility was achieved by adjusting the solution pH with hydrochloric acid (HCl, 0.1 M). As confirmed by electrospray ionization-mass spectrometry (ESI-MS), on average ~4 of 12 amine groups were protonated in Cage 1 (Supplementary Fig. 6), while most of the amine groups were maintained.
Moreover, the partially protonated cage molecules, featuring amphiphilicity, favor self-assembly at the sharp O/W interface27,28 (Supplementary Fig. 7 and Supplementary Movie 1).

As shown in Fig. 1a, the POC nanofilms were synthesized at a free O/W interface with organic cages dissolved in water and trimesoyl chloride (TMC) as the crosslinker dissolved in n-hexane. Free-standing nanofilms formed at the aqueous/organic interface, which qualitatively verified the Schotten-Baumann reaction between the cycloamine cage and TMC (Fig. 1b). A variety of cage molecules (Cages 1-4) can be processed into ultrathin nanofilms through this method (Supplementary Figs. 8-10), showing the universality of the FISC strategy. Hereafter, the interfacially assembled and crosslinked cage nanofilm prepared with the prototypical Cage 1 (Fig. 1a) was selected as the representative material (denoted as the iac-cage nanofilm) for detailed investigation. A characteristic band of the amide bond at around 1750 cm⁻¹ was clearly observed in the Fourier transform infrared (FTIR) spectrum of the iac-cage nanofilm, indicating the formation of amide linkages (Supplementary Fig. 11). High-resolution C1s and N1s X-ray photoelectron spectroscopy (XPS) further validated the Schotten-Baumann reaction and revealed that approximately 70% of the -NH- groups were crosslinked; that is, approximately 8 -NH- groups per cage reacted with acyl chlorides, and the remaining 4 -NH- groups were protonated (Supplementary Figs. 12 and 13). Coupled with the peak ratio of N-C=O to -NH- and the O/N ratio, we calculated that each Cage 1 molecule in the iac-cage nanofilm binds to four other cages through acyl chloride bridges (Supplementary equations 6 and 7).

The nanofilms were sufficiently flexible to be transferred onto various substrates, including nonporous silicon wafers and highly porous supports (e.g., PE, polypropylene, polysulfone, and cellulose membranes; Supplementary Figs. 14-16). As illustrated in Fig. 1c, a rectangular iac-cage nanofilm with an effective area of ~24.0 cm² was loaded on a porous substrate without any identifiable cracks. Top-surface SEM images of the composite membranes demonstrated that all the pores of the PE supports were obstructed by the smooth nanofilms, while the profiles of the substrates were still visible, preliminarily indicating the dense, ultrathin, and flexible features (Fig. 1d, e). Atomic force microscopy (AFM) was then employed to precisely quantify the thickness of the iac-cage nanofilms. The height profile reveals that the thickness of the nanofilm is approximately 7.0 nm (Fig. 1f, g). This is at least 10 times thinner than most state-of-the-art cage films prepared via spin coating or interfacial polymerization26,29,30. Considering that a single Cage 1 molecule features a height of ~2 nm, as revealed by molecular simulation (Supplementary Fig. 17), the iac-cage nanofilm corresponds to a stack of 3-4 POC molecules. Furthermore, the thickness of the iac-cage nanofilm is tunable by the preparation conditions. By increasing the reaction time from 2 to 10 min, the thickness of the nanofilm slowly increased from 6.5 to 8.2 nm (Fig. 1h, Supplementary Fig. 18). Roughness analysis revealed that the nanofilm is super flat, with an average roughness below 1.0 nm (Supplementary Table 1). In addition, the nanofilms prepared from other cages showed similar surface roughness and ultrathin features (Supplementary Figs. 19-21).
Positron annihilation lifetime spectroscopy (PALS) was employed to probe the cavities of the networked cages. Although PALS is a powerful method for analyzing microporous structures in situ31-33, it is challenging to directly characterize nanofilms with thicknesses less than 10 nm because the signal is too weak to probe. To overcome this limitation, we conducted PALS analysis on bulk-phase crosslinked cages with similar monomer ratios. The results revealed the presence of micropores with a size around 5.0 Å, suggesting that the intrinsic pore of Cage 1 was preserved after chemical crosslinking (Supplementary Fig. 22 and Table 2). Molecular simulations were performed to further understand the microstructures of the iac-cage nanofilm. As shown in Fig. 1i, the cages are crosslinked into interconnected 3D channel networks that allow water molecules to be transported. The simulated pore size distribution in Fig. 1j, centered at ~5.0 Å, was consistent with the PALS result, further confirming that the networked cage nanofilms inherited the micropores from the parent cages through chemical crosslinking.

Formation mechanism of the ultrathin networked cage nanofilm

The ultrathin and super smooth features of the cage nanofilms triggered our interest in understanding the underlying membrane formation mechanism. In traditional interfacial polymerization involving a small-molecule organic amine and an acyl chloride, nucleation and aggregation of polyamide clusters of different molecular weights occur in the organic phase near the oil/water interface, followed by linking into a continuous film via the monomers34,35. The thickness and surface roughness steadily increase under prolonged reaction time, leading to a relatively thick and rough polyamide nanofilm35,36. Here, 1,2-diaminocyclohexane, a fragment of Cage 1, was used as a monomer to interfacially polymerize with TMC. Under similar synthesis conditions, the thickness of the obtained nanofilms increased from ~17 to ~200 nm as the reaction time increased from 2 to 10 min (Supplementary Fig. 23). The same trend was observed for the surface roughness (Supplementary Table 3). This relationship between structure and reaction time aligns with that of conventional polyamide nanofilms (e.g., m-phenylenediamine- and piperazine-based polyamides). However, it sharply contrasts with the behavior of the present networked cage nanofilms, which can be attributed to the distinctive chemical structure of the organic cages. Compared with the quaternized nitrogen coupled with a hydrophilic counterion (Cl⁻), the secondary amine groups on the cage molecule have a higher affinity for the oil phase. This observation aligns with the traditional understanding of interfacial polymerization, where amine groups tend to exhibit some solubility in the oil phase, leading most amine groups to preferentially insert themselves into the oil phase37,38.
Similar to surfactants, the amphiphilic cage molecules tend to spontaneously assemble at the oil/water interface (with most amine groups inserted in the oil phase; Supplementary Fig. 7c) due to interfacial energy considerations. Consequently, the uniformly packed organic cage layer is crosslinked into continuous ultrathin films. Furthermore, the geometric size of the organic cages is 3 times larger than that of 1,2-diaminocyclohexane, leading to a much slower diffusion rate from the aqueous phase to the oil phase. We monitored the real-time diffusion kinetics of the organic cage and 1,2-diaminocyclohexane across the O/W interface using UV spectroscopy. Evidently, the organic cage diffused more slowly than 1,2-diaminocyclohexane (Fig. 1k). That is, the confinement of the cage molecules and their assembly in the 2D space of the sharp O/W interface facilitate their subsequent crosslinking into a continuous, ultrathin, and smooth nanofilm.

Separation performance of the networked cage nanofilm composite membranes

The iac-cage nanofilm was loaded onto a porous PE substrate to prepare a composite membrane (hereafter denoted as the iac-cage membrane), which was then subjected to a comprehensive investigation of its molecular nanofiltration performance using a range of aqueous dye solutions. To confirm that dye adsorption did not affect the membrane selectivity, dye adsorption tests using a Valia-Chien diffusion cell were performed. No noticeable variation in UV-vis absorption was observed in the feed chamber, and the permeate chamber remained colorless even after 72 h, indicating that the membrane did not adsorb the dyes (Supplementary Fig. 24). All dye separation experiments were performed using a high-concentration feed of 100 mg mL⁻¹, and all the results were recorded after at least 30 min of steady filtration. For the iac-cage membranes prepared with cage concentrations of 1.0, 4.0, and 8.0 mM, the water permeance increased with decreasing reaction time (Fig. 2a), which is consistent with the observed variation in nanofilm thickness, as demonstrated by AFM analysis. Significantly, the iac-cage membrane prepared with a cage concentration of 1.0 mM and a reaction time of 2 min exhibits an exceptionally high water permeance of 360 L m⁻² h⁻¹ bar⁻¹ (the corresponding rejection of Congo red was 99.0%), which is 1-2 orders of magnitude higher than that of commercial nanofiltration membranes. Figure 2b shows the rejection behavior of the membrane towards a range of dye molecules with different geometric sizes (Supplementary Fig. 25). Interestingly, the iac-cage membranes displayed strict size-dependent selectivity. Almost all the dye molecules with dimensions larger than 5.2 Å were rejected by the membrane, while the rejection of molecules with dimensions below 4.8 Å sharply decreased to as low as 10.0% (for the 1 mM-2 min membrane; Fig. 2b). The sharp size-sieving behavior should be attributed to the sieving effect of the cages. In addition, electrostatic interactions slightly influence the rejection behavior. For dyes with similar sizes but opposite charges, the membrane displayed a rejection order of Congo red (CR) > Alcian blue (AB) and methyl orange (MO) > methylene blue (MB); i.e., the rejection of a negatively charged dye was relatively higher than that of a positively charged one (Supplementary Fig. 26). This phenomenon is consistent with the Donnan effect39,40, since the membrane top surface is negatively charged (−3.0 mV) at neutral pH (Supplementary Fig. 27).
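Observed dye rejection in such tests reduces to comparing permeate and feed concentrations, which in the linear Beer-Lambert regime can be taken directly from UV-vis absorbance at each dye's peak wavelength. A minimal sketch with placeholder absorbance readings:

```python
def rejection(abs_feed, abs_permeate):
    """Observed rejection R = (1 - Cp/Cf) * 100, with concentrations taken as
    proportional to absorbance at the dye's peak wavelength (linear regime)."""
    return (1.0 - abs_permeate / abs_feed) * 100.0

# Placeholder absorbance readings at each dye's absorption maximum
print(f"Congo red: {rejection(1.25, 0.0125):.1f}% rejection")    # ~99%
print(f"4-nitrophenol: {rejection(0.80, 0.72):.1f}% rejection")  # small dye passes
```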
As shown in Fig. 2c, the nanofiltration performance of the iac-cage membranes in the aqueous environment surpasses that of most commercial and reported membranes to date. Moreover, this outstanding separation performance was largely maintained with increasing operation pressure up to 5 bar and over 120 h of cross-flow filtration testing, as well as upon immersion in strongly acidic aqueous solutions and polar/nonpolar organic solvents (Supplementary Figs. 28, 29), highlighting the high structural stability of these membranes.

To emphasize the unique role of the cage cavity in water transport, a control membrane without intrinsic pores was prepared using a fragment of Cage 1 as a monomer under similar conditions. The control membrane displayed a water permeance of 5.5 L m⁻² h⁻¹ bar⁻¹, which is 65-fold lower than that of the iac-cage membrane, although it should be noted that it was around 10 times thicker. Furthermore, we calculated the intrinsic water transport characteristic, i.e., the water permeability Pw, of the iac-cage membrane and compared it with those of the control membrane and state-of-the-art membrane materials41. As shown in Fig. 2f, the Pw value of the iac-cage membrane is up to 10⁻⁵ cm² s⁻¹, which is one to two orders of magnitude higher than that of the control membrane and traditional polymeric membranes, and is comparable to that of nanofluidic membranes (e.g., MOF, COF, and GO membranes)9. Very recently, He et al. reported a crystalline imine cage membrane with a minimal thickness of ~80 nm and a moderate water permeance of 43.0 L m⁻² h⁻¹ bar⁻¹ 42; the calculated Pw value of this membrane is of a similar magnitude to that of our iac-cage membrane. These results indicate the preservation of the nanofluidic water channels of the cages within the iac-cage membrane. This also highlights a significant advantage of the FISC method, through which the amine cages can be interfacially crosslinked into sub-8 nm cage nanofilms while largely preserving the nanofluidic water channels, thus leading to outstanding molecular separation performance (Fig. 2c). It is also worth mentioning that scale-up preparation of the iac-cage membrane might be achieved by engineering the FISC method into a roll-to-roll technique43,44.
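The quoted permeability can be reproduced from the measured permeance and film thickness if the pressure-driven permeance A (L m⁻² h⁻¹ bar⁻¹) is converted to a diffusive permeability Pw = A·l·RT/Vw, where the molar volume of water Vw recasts the applied pressure as an equivalent concentration difference. This conversion is an assumption about the definition used here, but it reproduces the experimentally derived value of ~1.1 × 10⁻⁵ cm² s⁻¹ for 360 L m⁻² h⁻¹ bar⁻¹ and an ~8 nm film:

```python
R, T = 8.314, 298.0   # gas constant (J mol^-1 K^-1), temperature (K)
VW = 18.0e-6          # molar volume of water, m^3 mol^-1
BAR = 1.0e5           # Pa per bar

def pw_cm2_per_s(permeance_LMH_bar, thickness_m):
    """Diffusive water permeability Pw = A * l * RT / Vw in cm^2/s.
    The pressure difference is recast as an equivalent concentration
    difference via dc = dP/(RT), and the volumetric flux is converted
    to a molar flux via the molar volume of water."""
    a_m_per_s_pa = permeance_LMH_bar * 1e-3 / 3600.0 / BAR  # m s^-1 Pa^-1
    pw_m2_per_s = a_m_per_s_pa * thickness_m * R * T / VW
    return pw_m2_per_s * 1e4                                # m^2/s -> cm^2/s

print(f"Pw = {pw_cm2_per_s(360.0, 8e-9):.2e} cm^2 s^-1")  # ~1.1e-5
```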
Tuning the nanofluidic water flow through microenvironment modulation
Regulation of pore window hydrophilicity. Controllable regulation of water transport behavior is a signature feature of cell membrane channels3. The rich chemistry of the organic cage offers promising opportunities to regulate the nanofluidic water flow by modulating the microenvironment surrounding the cage window; for example, previous studies have shown that the hydrophilicity/hydrophobicity of cationic cages can be tuned through anion metathesis45,46, potentially enabling modulation of the channel affinity toward water molecules. Conveniently, according to XPS analysis, each cage molecule in the iac-cage membrane carries four cationic N atoms with hydrophilic Cl− as the counteranion, which can be exchanged for a hydrophobic anion, i.e., bis(trifluoromethanesulfonyl)imide (TFSI−) (Fig. 3a). Energy-dispersive X-ray spectroscopy (EDS) mapping demonstrated the even distribution of Cl on the membrane surface (Supplementary Fig. 30 and Table 4). The Cl/N ratio was around 0.33, further supporting the presence of ~4 cationic N atoms per cage molecule. After exchange with TFSI−, the Cl signal is hardly detectable, while the characteristic F and S signals appear and are homogeneously distributed over the membrane surface. Meanwhile, the water contact angle of the membrane increased from 51.3° ± 0.8° to 62.3° ± 1.5° (Supplementary Fig. 31).

A more hydrophobic separation membrane typically has difficulty forming a water layer on its surface, leading to a decrease in water permeance47. Counterintuitively, as depicted in Fig. 3b, the cage membrane paired with the hydrophobic counterion TFSI− exhibited the highest water permeance while maintaining a high CR rejection (99.0%). The calculated P_w value is ~1.2 times that of the Cl−-carrying membrane. This result can be attributed to the weak interactions between water molecules and the hydrophobic groups around the cage window, which reduce the time required for water molecules to traverse the cage and thereby improve the water permeance5,48,49. To gain further insight, molecular dynamics (MD) simulations were performed on an ~8.0 nm-thick cage nanofilm with different anions (Fig. 3g, h, Supplementary Fig. 32, Movies 2 and 3). The steady-state water permeability was calculated to be 3.52 × 10−5 cm s−1, comparable to the experimental value of 1.11 × 10−5 cm s−1. The cage networks in Fig. 3g show 1D chains of water molecules, a typical feature reported for biological water channels1,50. The diffusive ability of water molecules was analyzed by tracking the number of water molecules transferred across the networked cage membranes (Fig. 3h). The results clearly show that water molecules diffuse faster in the TFSI−-exchanged cage membrane, consistent with the separation performance observed in Fig. 3b.
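Conceptually, the transferred-water count N_w in Fig. 3h can be obtained by tallying, over a trajectory, the waters that move from the feed side of the film to the permeate side. The snippet below is a schematic of that bookkeeping with a synthetic random walk standing in for a real trajectory; it is not the authors' analysis code, and a production analysis would count per-frame crossing events rather than comparing endpoints.

```python
import numpy as np

# Schematic N_w bookkeeping: count water oxygens that start on the feed
# side (z < z_feed) and end on the permeate side (z > z_perm) of the film.

def count_crossings(z: np.ndarray, z_feed: float, z_perm: float) -> int:
    """z has shape (n_frames, n_waters): oxygen z-coordinates over time."""
    started_feed = z[0] < z_feed
    ended_perm = z[-1] > z_perm
    return int(np.sum(started_feed & ended_perm))

# Synthetic stand-in for a trajectory: biased 1D random walks along z (nm).
rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(0.02, 0.5, size=(5000, 1000)), axis=0)
print(count_crossings(z, z_feed=0.0, z_perm=8.0))  # "film" spans 0-8 nm
```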
Modulation of the window opening size with light-responsive counteranions. The facile counteranion exchange strategy inspired us to introduce molecular-conformation-responsive counteranions at the cage window, mimicking the gating effect of the cell membrane: the cationic N serves as the "gating hinge" and the environmentally adaptive counteranions act as the "gates". As a proof of concept, azobenzoate (azo), a typical light-responsive molecule51,52, was selected as the smart counteranion, endowing the cage membrane with a photo-responsive gating effect (Fig. 3a). The experiment started with counteranion exchange, i.e., replacing Cl− with the azo anion. The iac-cage membrane carrying the azo anion is denoted as the iac-cage-azo membrane hereafter. Successful anion exchange was confirmed by EDS mapping (Supplementary Fig. 34). The photo-induced conformational change (i.e., photoisomerization) of azo in the iac-cage-azo membrane was analyzed through solid-state UV spectroscopy. As shown in Fig. 3c-e, after UV irradiation, the absorbance at ~322 nm, assigned to the bonding-antibonding (π-π*) transition of the trans-azo moiety, decreases remarkably in intensity, while a slight increase in the band at ~430 nm (assigned to the n-π* transition of the cis-azo moiety) is observed. The photostationary state of the iac-cage-azo membrane is reached after 10 min of irradiation, with the proportion of trans-azo in the membrane varying from 100% to ~70%, as determined by UV spectroscopy. The process is reversible upon irradiation with visible light, as evidenced by 6 cycles of alternating UV and Vis irradiation. By comparison, the pristine iac-cage membrane carrying Cl− anions is inert to photoirradiation under the same conditions, and no obvious UV-Vis absorbance variation can be detected (Supplementary Fig. 35).

The evolution of the separation performance of the iac-cage-azo membrane under alternating UV and Vis irradiation was then investigated. After 10 min of UV irradiation, a 20% decrease in water permeance was observed, whereas nearly complete rejection of CR was maintained; upon subsequent 10 min of Vis irradiation, the water permeance recovered while the CR rejection remained at 99.0%. MD simulations likewise demonstrated that water molecules diffuse more slowly within the iac-azo-cis nanofilm than within the iac-azo-trans nanofilm (Fig. 3h, Supplementary Fig. 33, Movies 4 and 5). We attribute this reversible change in water permeance to the variation in steric hindrance accompanying the photoisomerization of azo. We further investigated how light controls the window opening size by measuring the rejection of a range of dyes. As shown in Fig. 4b, the iac-cage-azo membrane upon Vis irradiation shows a rejection cut-off (≥90%) at around 5.1 Å; when the membrane is treated with UV light, the cut-off decreases slightly, to 4.5 Å. This shift in the rejection cut-off experimentally evidences the light-controlled window opening size. MD simulation results also confirm that the projected pore aperture of Cage 1 equipped with cis-azo is smaller than that with trans-azo (Supplementary Fig. 36). More interestingly, as depicted in Fig. 4c, the rejections of dyes with relatively small dimensions (e.g., methyl orange and 4-nitrophenol) are switchable in each cycle of UV/Vis irradiation, indicative of adaptive separation performance.

Photo-responsive graded molecular separation
Adaptive molecular sieving membranes provide an opportunity to separate ternary or even more complex mixtures using a single membrane42,51. The iac-cage-azo membranes, displaying switchable water permeance and sieving performance, therefore hold promise for graded molecular sieving. As a proof of concept, the iac-cage-azo membrane was employed to separate a mixture of three organic dyes with graded dimensions (4-nitrophenol, 2.5 Å; methyl orange, 4.8 Å; Congo red, 5.2 Å). Upon UV irradiation, only 4-nitrophenol was detected in the permeate (Fig. 4d), with Congo red and methyl orange nearly completely rejected. After Vis irradiation, methyl orange became permeable while Congo red continued to be rejected, as confirmed by the pure phase of methyl orange in the permeate (Fig. 4e). After flushing the residual methyl orange from the feed with excess deionized water, the pure phase of Congo red could be collected (Fig. 4f). Thus, upon alternating UV and Vis irradiation, a single iac-cage-azo membrane achieved the separation of a ternary mixture. The iac-cage-azo membrane is recyclable and robust, as evidenced by at least 5 cycles of operation, consistent with the results obtained from solid-state UV spectroscopy.
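The graded separation logic reduces to comparing each solute's dimension with the illumination-dependent cut-off. The snippet below is a minimal schematic of that decision rule using the cut-offs and dye sizes quoted above; it illustrates the sieving logic only and is not a transport model.

```python
# Minimal schematic of the light-gated sieving rule described above.
# Cut-offs (>=90% rejection) are taken from Fig. 4b; dye dimensions from the text.

CUTOFFS_A = {"UV": 4.5, "Vis": 5.1}  # cis-azo (UV) narrows the effective window

DYE_SIZE_A = {"4-nitrophenol": 2.5, "methyl orange": 4.8, "Congo red": 5.2}

def permeates(dye: str, light: str) -> bool:
    """True if the dye dimension falls below the cut-off for the given light state."""
    return DYE_SIZE_A[dye] < CUTOFFS_A[light]

for light in ("UV", "Vis"):
    print(light, "->", [d for d in DYE_SIZE_A if permeates(d, light)])
# UV  -> ['4-nitrophenol']                    (stage 1: only the smallest dye passes)
# Vis -> ['4-nitrophenol', 'methyl orange']   (stage 2: Congo red stays in the feed)
```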
Discussion
In summary, by confining organic cage molecules at the oil/water interface, a range of cage molecules were self-assembled and then chemically crosslinked into flexible nanofilms with thicknesses down to 8 nm. The resulting composite membrane exhibited nanofluidic channels, enabling exceptional water permeance that surpassed that of commercial nanofiltration membranes by 1-2 orders of magnitude. Furthermore, the organic cage membrane offered a wide range of chemical functionalities, allowing facile microenvironmental modulation of the nanofluidic water channels. The membrane also displayed a photo-responsive gating effect, enabling photo-controlled graded separation. Given the versatility of POCs, these networked cage ultrathin nanofilms hold great potential for a wide range of future applications in catalysis, sorption, and sensing, particularly with task-specific functionalization.

Chemicals and materials

Synthesis of Cage 1
Dichloromethane (10 mL) was added slowly to 1,3,5-triformylbenzene (500 mg), followed by trifluoroacetic acid (10 μL) as a catalyst. A dichloromethane solution (10 mL) of (R,R)-1,2-diaminocyclohexane (500 mg) was then added. The mixture was capped and left to stand for one week, during which crystals formed on the sides of the vessel. The crystalline product was collected by centrifugation, washed three times with a dichloromethane/methanol solution (v/v = 5/95, 3 × 50 mL), and dried at 100 °C for 24 h. The imine cage (463 mg) was then dissolved in a dichloromethane/methanol mixture (v/v = 1/1, 25 mL) under vigorous stirring. Once the solution became clear, NaBH4 (500 mg) was added directly and the mixture was stirred for 15 h. Water (1 mL) was then injected and the mixture was stirred for another 9 h. Finally, the solvent was removed under vacuum. The residue was washed three times with a large amount of water (3 × 100 mL) until neutral. All reaction steps were conducted at room temperature. The resulting sample was vacuum-dried at 80 °C for 24 h and stored protected from light to afford Cage 1 as a white powder.
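As a quick sanity check on the loadings just described, the diamine/trialdehyde molar ratio can be compared with the ideal 6:4 stoichiometry of a [4+6] imine cage. Note that assigning a [4+6] topology to Cage 1 is our assumption for illustration, made because it is typical for cages formed from these two building blocks.

```python
# Stoichiometry check for the Cage 1 loadings (illustrative only).
MW_TFB = 162.14    # 1,3,5-triformylbenzene, g/mol
MW_DACH = 114.19   # (R,R)-1,2-diaminocyclohexane, g/mol

n_tfb = 500 / MW_TFB    # ~3.08 mmol of trialdehyde
n_dach = 500 / MW_DACH  # ~4.38 mmol of diamine

print(n_dach / n_tfb)   # ~1.42, close to the ideal 6/4 = 1.5 for a [4+6] cage
```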
Synthesis of Cage 2
After slowly cooling to room temperature, the yellow crystalline product was collected, washed three times with acetone (3 × 50 mL), and vacuum-dried at 80 °C for 24 h to yield the imine cage as a fine orange powder. The imine cage (50 mg) was then dissolved in a dichloromethane/methanol solution (v/v = 1/1, 10 mL) under vigorous stirring. Once the solution became clear, NaBH4 (100 mg) was added directly and the mixture was allowed to react for 15 h. Water (1 mL) was then injected and the mixture was stirred for another 9 h. Finally, the solvent was removed under vacuum. The residue was washed three times with a large amount of water (3 × 20 mL) until neutral. The resulting sample was vacuum-dried at 80 °C for 24 h and stored protected from light to afford Cage 2 as a pale-yellow powder.

Synthesis of Cage 3
Ethyl acetate (35 mL) was added to 1,3,5-triformylbenzene (50 mg) in a beaker at room temperature. After 5 min, a solution of 1,2-ethylenediamine (28 mg) in ethyl acetate (5 mL) was added. The mixture was capped and left to stand without stirring; pale white needle-like crystals were observed after around 60 h. The crystals were carefully collected from the sides of the flask, washed three times with ethyl acetate (3 × 20 mL), and vacuum-dried at 80 °C for 12 h to yield the imine cage as a white powder. The reduction process is similar to that of Cage 2.

Synthesis of Cage 4
Tetraaldehyde 2 (150 mg) and 2 equiv. of KOH were dissolved in a water/ethanol mixture (100/100 mL, v/v) and refluxed for 1 h. An ethanol solution (50 mL) of (R,R)-1,2-diaminocyclohexane (137 mg) was added to the mixture, which was refluxed for an additional 24 h. The resulting clear solution was filtered, and the filtrate was allowed to evaporate slowly at approximately 30 °C to afford cage crystals. The reduction process is similar to that of Cage 2.

The molecular structures of all the cage compounds were confirmed by 1H NMR analysis using CDCl3 as the deuterated solvent (Supplementary Figs. 2-5).

Fabrication of free-standing networked cage nanofilm
The free-standing cage nanofilms were prepared through a free-interface-confined self-assembly and crosslinking (FISC) strategy. First, aqueous solutions of the imine cage were prepared through a partial quaternization method: imine cages were added to water (2 mL) and sonicated to form a turbid dispersion, and HCl (0.1 M) was then added dropwise until the pH reached approximately 8.0, resulting in a nearly transparent solution. The solution was filtered through a syringe filter (pore size: 0.22 μm) and transferred to a glass dish. To initiate the FISC process, 1 mL of TMC solution in n-hexane was carefully added to the water surface. The reaction proceeded for a specific duration, after which the resulting cage nanofilms were transferred to a silicon wafer disc and rinsed with solvent. The prepared samples were heat-treated at 60 °C for 5 min before further characterization. The concentration of the cage aqueous solution ranged from 1 mM to 8 mM, the reaction time varied from 2 min to 10 min, and the concentration of the TMC organic solution was kept constant at 6 mM.

Fabrication of networked cage nanofilm composite membrane
For the fabrication of the composite membrane, the clear cage aqueous solution was added onto a pre-clamped porous substrate positioned between a vacuum filter head and a vacuum filter bowl. Once the reaction was complete, the cage aqueous solution was filtered through to facilitate adhesion of the formed cage nanofilm to the porous substrate. Subsequently, the oil-phase solution was poured off and the resulting composite membrane was heat-treated at 60 °C for 5 min. Finally, the composite membrane was rinsed with n-hexane to remove residual TMC.

Counterion exchange for the networked cage nanofilms
A simple anion exchange strategy was employed to manipulate the channel microenvironment, including its hydrophilicity and steric hindrance. All counterion exchange experiments started from an iac-cage-Cl membrane. For the iac-cage-TFSI membrane, 30 mL of an aqueous solution of LiTFSI (10 mg mL−1) was filtered through the iac-cage-Cl membrane, which was then rinsed with deionized water (3 × 30 mL). For the iac-cage-azo membrane, 30 mL of an aqueous solution of sodium azobenzoate (10 mg mL−1) was used.
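For anyone scripting a reproduction or a parameter sweep of the fabrication conditions above, it can help to capture the stated FISC window in a single record. The dataclass below simply restates the Methods parameters (cage 1-8 mM, reaction 2-10 min, TMC fixed at 6 mM, aqueous pH ~8.0, 60 °C/5 min heat treatment); the field names are our own, not the authors' nomenclature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FISCRecipe:
    """One point in the FISC fabrication window stated in the Methods."""
    cage_conc_mM: float        # aqueous cage solution, varied 1-8 mM
    reaction_time_min: float   # interfacial reaction, varied 2-10 min
    tmc_conc_mM: float = 6.0   # TMC in n-hexane, held constant
    aqueous_pH: float = 8.0    # endpoint of the partial quaternization
    anneal_C: float = 60.0     # post-reaction heat treatment temperature
    anneal_min: float = 5.0    # heat treatment duration

# The highest-permeance condition reported in the Results:
best = FISCRecipe(cage_conc_mM=1.0, reaction_time_min=2.0)
print(best)
```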
For more information on separation performance experiments, property characterizations, and computational simulations, please refer to the Supplementary Information.

Fig. 1 | Description of the confined self-assembly and crosslinking process at a sharp free interface and the resulting sub-8 nm networked cage nanofilms. a Crosslinking POCs at a free interface between water and the organic phase by trimesoyl chloride, and schematic diagram of the networked cage nanofilm with water channels for rapid molecular separation. b Free-standing cage nanofilm formed at the aqueous-organic free interface in a 20 mL vial. c Photograph of a nanofilm transferred onto a polymer substrate; the orange frame indicates the nanofilm boundary. d Top-down SEM image of the composite membrane with a porous polyethylene (PE) ultrafiltration membrane as the substrate; the inset is a high-magnification SEM image of the PE support. e High-magnification SEM image of the cage nanofilm supported by the PE membrane. f, g AFM height image and corresponding height profile of a section of cage nanofilm on top of a silicon wafer. h Thickness increase with reaction time; error bars represent the standard deviation calculated from three parallel measurements. i 3D view of the cage nanofilm with its interconnected pore network. j Simulated pore size distribution. k UV/Vis absorption of the cage and its fragment, detected at a position 30 μm away from the O/W interface, versus interfacial diffusion time.

Fig. 2 | Separation performance. a Water permeance of networked cage composite membranes varied with synthesis conditions; error bars represent the standard deviation calculated from three parallel membrane samples. b Rejection of different cage composite membranes versus the dimensions of the dye molecules. c Aqueous nanofiltration performance comparison of cage composite membranes with commercial and literature-reported membranes; the molecular rejection data refer to the selectivity for Congo red. d Water permeance with increasing operation pressure. e Continuous filtration at an operation pressure of 1 bar. f Water permeability comparison of cage composite membranes with traditional polymeric membranes.

Fig. 3 | Microenvironment modulation for regulating water flow, and simulations. a Schematic illustration showing the microenvironment modulation of the pore windows via anion exchange. b Separation performance enhancement after TFSI− exchange. c Time-dependent UV/Vis spectra of the iac-cage-azo membrane at 298 K under UV and Vis (inset) irradiation. d, e Ratio of trans/cis states with UV and Vis light irradiation time, respectively. f Changes in the absorption band of the iac-cage-azo membrane at 330 nm upon alternating irradiation with UV and visible light. g Simulation snapshot of water molecules transporting through the iac-cage-TFSI membrane; the enlarged snapshot shows the water chains inside the cage network. h Numbers of transferred water molecules N_w through cage membranes with various counterions.

Fig. 4 | Photo-controlled graded molecular separation. a Schematic demonstrating the light-controlled graded molecular sieving. b Molecular rejections of the iac-cage-azo membrane with increasing solute dimension after UV and Vis irradiation; error bars show the standard deviation of three parallel membrane samples. c Reversible rejection of methyl orange and 4-nitrophenol by the iac-cage-azo membrane upon alternating UV and Vis irradiation. d UV-Vis absorption spectra of the ternary mixture in water and of the permeate from the membrane in state I.
e UV-Vis absorption spectra of the permeate from the membrane in state II, showing the presence of methyl orange. f UV-Vis absorption spectra of the retentate in state II, showing the pure phase of Congo red. Insets are digital photographs of the corresponding permeate and retentate. The standard UV-Vis absorption spectra of 4-nitrophenol, methyl orange, and Congo red are presented in Supplementary Fig. 25.