Towards Sustainable, Resilient and Adaptive Urban Underground Space (UUS) Exploration, Land Subsidence and Economic Impact Spatial Model (USEM) in Shanghai, P.R. China: Systematic Reviews, Model Framework, Initial Results and Pre-Determined Challenges

As a coastal megacity, Shanghai, despite having one of the world's leading UUS developments for utilities, transportation, the metro system, and commercial and residential spaces, remains continuously vulnerable to serious geo-environmental hazard risks and climate-change impacts: land subsidence, flooding, storm surge, and sea-level rise. It is therefore imperative to study the continuing impact of rapid UUS development, land subsidence mechanisms, related geo-environmental hazards and their socio-economic impacts, in order to establish resilient, adaptive and sustainable UUS development through a spatial planning and development model that lessens future adverse consequences. The aim of this paper is to present the current findings of this research. The methods comprise systematic reviews of journal articles available in open databases such as Google Scholar and ResearchGate, and the definition of USEM's concrete steps, consisting of cause-effect analysis, spatiotemporal analysis, scenario modelling and comparative analysis. The data gathered are mostly secondary. The four major findings are: (1) summarised outcomes of the systematic reviews; (2) the USEM model framework; (3) initial results from USEM's first step, cause-effect analysis, in Shanghai; and (4) pre-determined challenges for the model. The model's methods are expected to be used to study Shanghai in comparison with developing coastal megacities such as Jakarta, Indonesia. The framework can also serve interested experts, particularly in deepening understanding of the geo-dynamics of land subsidence, UUS and economic impact via spatial modelling, and in formulating adaptation and resilience policies for developed and developing coastal megacities worldwide.

Introduction

For many large coastal megacities such as Shanghai, Tokyo, Jakarta, Ho Chi Minh City, Bangkok and Dhaka, severe land subsidence is caused mainly by over-extraction of groundwater, rapid urbanisation, soil consolidation and underground movement, compounded by flood exposure due to sea-level rise and storms [1]. As development demand in Shanghai grows faster than ever, UUS development is targeted to become 'big, deep, long, fast, and dense' [2]. This has caused land subsidence to deteriorate again from the 2000s onwards, driven by underground tunnel settlement and leakage, even though net groundwater extraction had been reduced [3]. Records from the 1920s to the 2000s show an uncertain 'decreased-controlled-increased' pattern in Shanghai's land subsidence rate. In practice, UUS has been explored, developed and utilised for many important purposes in Shanghai: pipeline and power utilities; transportation tunnels, including the metro railway system; shopping complexes; residences; deep excavations for stormwater management; and foundation-pit excavation for high-rise buildings, usually constructed in multi-aquifer and multi-aquitard layers. UUS exploration remains challenging in a coastal megacity like Shanghai because of its natural geological conditions of soft soil and foundation-pit seepage [2].
Shanghai, despite having one of the world's leading UUS developments, such as its underground metro system, remains continuously vulnerable to geo-environmental hazard risks such as land subsidence, storm surge and seawater-level rise, accelerated by rapid urbanisation and climate change [4]. As the socio-economic impacts of UUS exploration on land, infrastructure, properties and underground structures are long-term and irreversible, proper feasibility impact modelling and analysis are important to ensure resilience and sustainability [5]. Hence, it is imperative to study Shanghai's continuing land subsidence control mechanisms and socio-economic impacts through modelling or simulation to avoid further adverse consequences, especially in terms of subsidence information for disaster prevention, urban spatial planning and simulation at the macro megacity scale, and hydrological modelling [6]. This paper presents the current systematic reviews towards the possible establishment of the USEM framework, based on the integration and improvement of the existing UUS-subsidence-economic impact chain and models' frameworks, together with initial results and pre-determined challenges, using the case of Shanghai, P.R. China. Shanghai's land subsidence monitoring stations and the correlation of tunnel settlement with land subsidence are shown in figure 1.

Research Methodology

A series of systematic reviews was conducted on more than one hundred prominent related scientific journal articles available on online database platforms such as Google Scholar and ResearchGate. The literature covers research conducted from the 1960s to the 2000s and was gathered using the key search terms 'UUS', 'land subsidence', 'economic impact', 'spatial modelling' and 'Shanghai'. The articles were systematically reviewed by publication year and related content. Gaps to be filled were determined from previous related research, theories, models, methods and arguments in order to establish the research novelty and produce the 'USEM model' in this multidisciplinary research. After the steps in the USEM framework were determined, the first step, cause-effect analysis, was conducted. The cause-effect analysis studies the relation of the determined causal factors (land subsidence risk intensity, hazard assessment, urbanisation rate, UUS exploration, long-term groundwater drawdown and available adaptation policies) to the economic impact factors (land, underground structures, infrastructure, buildings and socio-economics).

Research Findings and Discussions

There are four main findings of this ongoing research: (1) the gaps determined towards establishing the integrated model framework called USEM; (2) the detailed USEM steps and framework; (3) a preliminary cause-effect execution of the model for Shanghai; and (4) pre-determined challenges and improvements of the model.

Research Gaps towards USEM Framework

Based on the systematic reviews, four types of gaps were determined in support of the USEM framework: research, theoretical, models-and-methods, and argumentative gaps. The determined gaps are summarised in table 1.

Table 1. Summarised potential gaps to be filled by USEM.
Type of gaps — Gaps with potential to be filled in the Shanghai context

Research — Land subsidence risks; UUS-subsidence-economic impact spatial modelling at megacity scale.
Theoretical — Integrated-combined theory of factors, cause-effects, suburban regions, and UUS-economic externality uncertainties.
Models and methods — Integration of major models and methods for more accurate data analysis: cost-benefit analysis; vulnerability and marginal-damage analysis; complex multifaceted analysis; optimal policies to mitigate social welfare losses.
Argumentative — Necessity for comprehensive evaluation of UUS's multiple potential resources, involving spatial and land allocation; building-underground subsidence interdependence; land use and human activity; socio-economic system modelling; the Fuzzy Analytic Hierarchy Process (FAHP); limitations in the current subsidence prevention zones in Shanghai.

Hence, based on the summarised gaps, an adaptive, resilient UUS-subsidence-economic spatial model, or USEM, is crucial and has the potential to add new knowledge to the related body of literature. There are many models and analyses of hydrodynamic-economic-spatial relations; however, they are unique to their own contexts, disintegrated, often merely checklist-assessment-based, and do not focus specifically on UUS exploration, development and spatial modelling. Examples include a hydrodynamic model structure for property values [9]; land price-subsidence-spatial relations [10]; UUS-hazard risks [11]; FAHP for infrastructure [12]; property-metro-subsidence relations [13]; housing price-subsidence-spatial relations [14]; and a subsidence-economic impact assessment framework [15]. It was also observed that the existing land subsidence prevention zones in government management guidelines do not sufficiently consider the vulnerability of significant infrastructure in Shanghai [14]. Common issues identified in the literature and suggested for future research include lack of data; the need for longer time series of land-use data; the need for more reliable simulated results; more accurate and comprehensive data to permit more accurate results; the introduction of hydrodynamic models into inundation simulation and analysis; more detailed socio-economic impacts; and loss evaluation and cost-benefit analysis to identify localities that are particularly vulnerable [16].

USEM steps and framework

Based on the determined gaps, the following steps are designed in the USEM framework to achieve the research goal. USEM consists of four major steps, with explanations and equations as shown in table 2.

Sub_ij = ρ · g · draw_ij · b_j · α (1)

where ρ is the fluid density (kg/m³), g is the acceleration due to gravity (m/s²), draw_ij is the groundwater drawdown (decline in hydraulic head) in model cell ij (m), b_j is the original thickness of the confining unit (m) and α is the compressibility of the confining-unit material (m·s²/kg). The thickness of the confining unit varies across model cells.

Disaster Risk = Hazard ⊗ Exposure ⊗ (Vulnerability or Capacity) (2)

where ⊗ represents overlay analysis in a geographic information system (GIS); this equation expresses the definition of disaster risk as hazards (causal factors) overlaid spatially with exposure and with vulnerability or capacity, such as economic impact [17].
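As a minimal illustration of how equations (1) and (2) could be applied cell by cell in a GIS-style raster workflow, the sketch below computes compaction from (1) and combines normalised layers per (2); all numbers are placeholders, and the cell-wise product is only one possible overlay rule.

```python
import numpy as np

# Equation (1): compaction of a confining unit per grid cell,
# Sub_ij = rho * g * draw_ij * b_j * alpha.
rho = 1000.0        # fluid density (kg/m^3), placeholder
g = 9.81            # acceleration due to gravity (m/s^2)
alpha = 1e-7        # compressibility of the confining unit (m*s^2/kg), placeholder
draw = np.array([[1.5, 2.0],
                 [0.5, 3.0]])   # groundwater drawdown per cell (m), placeholder
b = np.array([20.0, 35.0])      # confining-unit thickness per column j (m), placeholder

subsidence = rho * g * draw * b * alpha   # predicted subsidence per cell (m)

# Equation (2): disaster risk as a spatial overlay of hazard, exposure and
# vulnerability layers (here a simple cell-wise product stands in for the
# GIS overlay operator).
hazard = subsidence / subsidence.max()              # normalised hazard layer
exposure = np.array([[0.9, 0.4], [0.7, 1.0]])       # e.g. asset density, placeholder
vulnerability = np.array([[0.6, 0.8], [0.3, 0.5]])  # placeholder
risk = hazard * exposure * vulnerability
print(subsidence, risk, sep="\n")
```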
Initial cause-effect results for Shanghai

The preliminary findings indicate a negative correlation between cumulative subsidence and UUS development in Shanghai. As cumulative subsidence decreases (i.e., its rate of increase slows), UUS development increases. It is important to predict the situation from 2010 onwards based on the previous record of uncertainty: rapid development, stabilisation and renewed acceleration around 1990. Figure 2 and table 3 visualise the initial correlation between cumulative subsidence and UUS development from 1920 to 2010 in Shanghai. Furthermore, proper continuous maintenance and improvement measures are needed for UUS development in the central business district (CBD) of Shanghai, especially in highly prone and risky areas, as well as to maintain high land prices. New development areas in northern and southern Shanghai need a controlled subsidence rate to produce lower future risk together with higher economic value. Figure 3 and table 4 visualise the correlation between UUS area, real estate price and cumulative subsidence.

Figure 3. Spatial correlation between UUS area, real estate price and cumulative subsidence in Shanghai by 2010.

Notably, some areas with high cumulative subsidence nevertheless possess high real estate prices, and those with the lowest cumulative subsidence do not necessarily have the highest real estate prices. This complex relationship still needs further justification, especially regarding the theory of real estate price determinants. Regardless, it is expected that without proper spatial planning, uncontrolled UUS development poses greater risks of adverse impact in future, especially in terms of inundation and economic impacts on land, buildings, infrastructure and underground structures. As demand for land, infrastructure, buildings, properties and underground structures may continue to rise towards 2050, further improvement and proper control of adaptation and resilience policies, especially involving UUS development, land subsidence and its monetary impact via sustainable spatial planning, are especially important.

Pre-determined Challenges for USEM framework

The current deficiencies of the USEM model lie in the complexity of its assessment-spatial data framework. The model integration will produce a complex, multifaceted UUS-subsidence-economic spatial model; to avoid being over-ambitious, the model must have clearly defined limits. Executing such a complex model, with its various methods, equations and analyses, requires expertise in data gathering and GIS simulation; hence, some data are produced based on assumptions. Nevertheless, the USEM framework is significant in serving interested experts in the multidisciplinary fields of urban planning and civil geotechnical engineering.

Conclusion

The USEM framework is produced as research progress in promoting adaptive, resilient UUS exploration alongside the land subsidence issues in Shanghai by studying their impact on urban economics and spatial planning. The framework can be referred to and further refined to provide a deeper understanding of the geo-dynamics of UUS development, land subsidence, economic impact and spatial-planning modelling in other megacities in the world. This research opens possibilities for integrating existing models to create a future comprehensive model of the UUS-subsidence-economic spatial chain using the case of Shanghai.
Longitudinal Capsulotomy in Hip Arthroscopy: A Safe and Feasible Procedure for Cam-Type Femoroacetabular Impingement

Objective: To evaluate the surgical safety, feasibility, and clinical efficacy of longitudinal outside-in capsulotomy in hip arthroscopic treatment of cam-type femoroacetabular impingement (FAI). Methods: We retrospectively reviewed patients with cam-type FAI who underwent hip arthroscopy at our institute from January 2018 to June 2019. All hip arthroscopic procedures were performed by one experienced surgeon in the same manner, except for the fashion of capsulotomy. Fifty-six patients with a mean age of 39.1 years and a mean body mass index (BMI) of 24.5 were categorized into two groups according to the fashion of capsulotomy: 26 cases with longitudinal outside-in capsulotomy were categorized into Group L, and 30 cases with transversal interportal capsulotomy were categorized into Group T as the control group. Demographic parameters were retrieved from medical documents and compared between the two groups. Surgical outcomes including overall surgical time, traction time, complications, visual analogue scale (VAS) score, and intraoperative radiation exposure were compared to investigate safety and feasibility. Radiographic assessment and functional outcome were compared between the two groups to determine the clinical efficacy of the longitudinal capsulotomy. Results: There was no significant difference in demography or duration of follow-up between the two groups. Overall surgical time demonstrated no significant difference between Group L and Group T (130.8 ± 16.6 min vs 134.0 ± 14.7 min, P = 0.490). Significantly decreased traction time was found in Group L compared with Group T (43.2 ± 8.4 min vs 62.2 ± 8.6 min, P < 0.001). The median number of fluoroscopic shots was 1 and 3, respectively (P < 0.001). No major complications or reoperations were reported in either group. Intraoperative iatrogenic injury occurred in 0 cases (0%) in Group L and 6 cases (20%) in Group T (P = 0.035), and postoperative neurapraxia occurred in 0 cases (0%) in Group L and 8 cases (26.6%) in Group T (P = 0.017). The median postoperative VAS score was 2 and 3 in Group L and Group T, respectively (P = 0.002). The postoperative α angle was 42.3° ± 3.4° and 44.4° ± 3.5° in Group L and Group T, respectively (P = 0.001). The postoperative iHOT-12 score at final follow-up was 79.3 ± 6.7 and 77.0 ± 7.9, respectively (P = 0.141). Conclusion: Longitudinal outside-in capsulotomy, with less radiation exposure, reduced traction time, and fewer complications, could be a safe and feasible procedure in the arthroscopic treatment of cam-type FAI. Its clinical efficacy was not worse than that of traditional interportal capsulotomy at short-term follow-up.

Introduction

In recent years, hip arthroscopy has become the mainstream surgical treatment for femoroacetabular impingement (FAI) [1]. Unlike other joints, the hip joint is enveloped by a thick and tenacious capsule, which gives the joint sufficient stability but also obstructs access into the joint. Capsulotomy was the most important evolution in the development of hip arthroscopic techniques, and the most popular technique is the so-called interportal capsulotomy, which transversely connects the lateral portal and anterior portal on the capsule [2]. Capsulotomy can dramatically increase arthroscopic visualization and instrument mobility, which facilitates complicated procedures such as labrum repair.
However, the shortcomings of interportal capsulotomy should not be ignored, including intraoperative iatrogenic injury during portal establishment and traction-related postoperative neurapraxia [3]. Traditional interportal capsulotomy is performed based on portal establishment with the Seldinger technique, and both the iatrogenic injury and the time-consuming nature of the procedure are highly dependent on the surgeon's experience. Moreover, interportal capsulotomy usually transversally sections the iliofemoral ligament (IFL), which can result in potential iatrogenic hip instability and may negatively affect clinical outcomes [4-6]. With recognition of the importance of capsule preservation, some surgeons practice capsule-preserving techniques. Dienst et al. [7] proposed the peripheral-compartment-first technique, characterized by decreased traction time, in which hip arthroscopy starts from the peripheral compartment and is then followed by the central compartment procedure. Conaway et al. [8] proposed the puncture capsulotomy technique. These techniques can decrease damage to the IFL and restore the integrity of the capsule, but they demand considerably more skill than traditional interportal capsulotomy. More recently, Thaunat et al. [9] proposed a novel capsulotomy technique that starts from the peri-capsular space and longitudinally splits the capsule between the two branches of the IFL in an outside-in fashion without traction. Although the longitudinal fashion of capsulotomy promises good visualization, sufficient working space, and ease of capsule closure, the procedure has not been widely adopted, and reports on its clinical outcome are limited. Compared with traditional interportal capsulotomy, the superiority of longitudinal outside-in capsulotomy in surgical safety and clinical outcome following hip arthroscopy has not been well studied. In our institute, interportal capsulotomy was performed as the routine technique in hip arthroscopy for a long period; recently, the longitudinal outside-in capsulotomy technique was applied for cam-type FAI. Therefore, in the current study, we reviewed patients diagnosed with cam-type FAI who underwent hip arthroscopy. The purposes of this study were to: (i) introduce our practice of longitudinal capsulotomy in hip arthroscopy; (ii) investigate the surgical result and clinical outcome of hip arthroscopy with longitudinal capsulotomy; and (iii) compare longitudinal capsulotomy with traditional interportal capsulotomy in terms of safety, feasibility, and clinical efficacy.

Study Design and Participants

This study, approved by the institutional review board (No. 2019LW016-1), retrospectively reviewed consecutive patients in our database who underwent hip arthroscopy between January 2018 and June 2019.

Inclusion Criteria

Inclusion criteria were: (i) age between 18 and 60 years; (ii) diagnosis of cam-type FAI; (iii) hip arthroscopy with capsulotomy in the interportal or longitudinal fashion; and (iv) a minimum of 1 year of follow-up.

Exclusion Criteria

Patients were excluded if they had: (i) Tönnis grade ≥ 2; (ii) hip dysplasia; (iii) presence of pincer deformity; (iv) inflammatory synovitis of the hip; (v) avascular necrosis of the femoral head; or (vi) previous ipsilateral or contralateral hip surgery.
The medical records of 92 cases that met the inclusion criteria were screened: 25 cases were excluded for the presence of pincer deformity, two for hip dysplasia, four for inflammatory synovitis, two for avascular necrosis, and three for contralateral hip surgery. Finally, 56 cases with cam-type FAI were enrolled in the present study, comprising 25 males and 31 females. The average age of this cohort was 39.1 years (range, 18-59 years), and the mean body mass index (BMI) was 24.5 (range, 17.6-31.2). Twenty-six cases with longitudinal outside-in capsulotomy were categorized into Group L, and 30 cases with transversal interportal capsulotomy were classified into Group T as the control group.

Indications for Surgery

The diagnosis of FAI was made by a senior surgeon according to the classical symptoms, physical examination, and radiologic information. Patients with symptom duration exceeding 6 months, failure of conservative therapy, and a positive finding of labral tear on MRI were recommended for hip arthroscopy.

Position and Landmarks

The standard setup for hip arthroscopy in the supine position, with a fracture table and conventional arthroscopy instruments, was routinely used. The operative limb was placed in a neutral abduction-adduction position with 5-10 degrees of flexion, and the contralateral side was placed in 45 degrees of abduction. The pudendal post was eccentrically positioned, and the feet were well padded and fixed in traction boots. The cutaneous outlines of the anterior superior iliac spine (ASIS) and the greater trochanter were marked before surgery, and then the anterolateral (AL) portal, mid-anterior (MA) portal, and distal anterolateral accessory (DALA) portal were routinely marked (Fig. 1A).

Portal Establishment and Capsulotomy

Longitudinal Outside-in Capsulotomy. This procedure was performed without traction. A blunt trocar was introduced targeting the head-neck junction of the femur to establish the AL portal, and the fatty and fibrous tissue in front of the hip capsule was identified with a 30-degree scope. Instruments were introduced into the pre-capsular space through the MA portal to triangulate with the same maneuver. The soft tissue in front of the capsule was cleared, and the gluteal muscle, iliocapsularis muscle, and indirect head of the rectus femoris were identified as reference structures. A longitudinal capsular incision was made along the direction of the IFL fibers, parallel to the axis of the femoral neck; fluoroscopy can help guide the capsulotomy if necessary. The incision was extended to the labrum proximally and the femoral neck distally (Fig. 1B-D).

Transversal Interportal Capsulotomy. Traction was applied, and a hip joint space exceeding 10 mm was confirmed with fluoroscopy. A 17G spinal needle was used to penetrate the joint capsule with the assistance of a C-arm, and the AL portal was established using a cannulated dilator over a nitinol guidewire. A 70-degree scope was then introduced through this viewing portal, and the MA portal was established under visualization in the same manner. Finally, an arthroscopic blade was used to make a capsular incision connecting the AL and MA portals. The capsulotomy was extended if necessary.

Exploration and Management in the Central Compartment

The scope and instruments were introduced into the central compartment of the hip joint under traction.
Chondrolabral injury, the ligamentum teres, and pathology on the acetabulum were identified and addressed. The DALA portal was established for acetabular rim trimming and anchor implantation (Fig. 1E).

Exploration and Management in the Peripheral Compartment

The traction was released and the hip was flexed by 30-60 degrees. The peripheral compartment was comprehensively inspected and the cam lesion identified; a 4.5 mm high-speed arthroscopic burr (Smith & Nephew, Andover, MA) was then used to perform cam-plasty. An intraoperative dynamic impingement test and fluoroscopy were used to confirm complete correction of the cam lesion (Fig. 1F-G).

Capsular Closure

At the end of the procedure, two or three simple side-to-side stitches were made to close the capsule (Fig. 1H). The main steps of hip arthroscopy with longitudinal capsulotomy are shown in the schematic diagram (Fig. 2).

Postoperative Management

All patients followed the same protocol of postoperative analgesia. An oral nonsteroidal anti-inflammatory drug was prescribed for 4 weeks as prophylaxis against heterotopic ossification. All patients followed the standard rehabilitation protocol. Patients who received labral refixation and/or cam-plasty ambulated with crutches for 4 weeks.

Data Collection and Assessment of Outcomes

Surgical outcomes, including overall surgical time, traction time, intraoperative radiation exposure, complications, and postoperative pain, were retrieved from the patients' medical documents. Radiographic parameters, including the α angle, lateral center edge angle, and Tönnis classification of osteoarthritis, were assessed independently by two surgeons using the Picture Archiving and Communication System (PACS), with the final result decided by a senior surgeon in cases of disagreement. Functional outcome was retrieved from the follow-up database. The definitions of the measurements are described as follows.

Intraoperative Radiation Exposure

Intraoperative radiation exposure was defined as the number of fluoroscopic shots during surgery, counted by a radiologist from the images saved on the C-arm X-ray machine.

Complications

Intraoperative complications, including iatrogenic cartilage or labral injury and instrument breakage, were reviewed from the surgical video database. Postoperative complications, including neurapraxia, infection, heterotopic ossification, deep venous thrombosis, and revision, were documented in the medical record. Both intraoperative and postoperative complications were counted independently by a surgeon from the documentary data.

Visual Analogue Scale (VAS)

The visual analogue scale is a widely used measurement of pain intensity: a continuous 10 cm scale whose left end, labeled 0, indicates no pain and whose right end, labeled 10, indicates the most severe pain; the mark placed between the two ends represents the severity of pain from 0 to 10. Postoperative pain was evaluated by an experienced nurse using the VAS, and the highest score during the first 3 postoperative days was retrieved.

α Angle

The α angle was measured on the Dunn-view radiograph as the angle formed by the central axis of the femoral neck and the radius line at which the femoral head loses its sphericity. The α angle was evaluated on the day before surgery and at the 1-month follow-up. A cam lesion was defined as α ≥ 55°.
Lateral Center Edge Angle (LCEA)

The LCEA is formed by the vertical reference line and the line connecting the center of the femoral head with the most lateral edge of the acetabulum, indicating the coverage of the femoral head by the acetabulum. A pincer lesion was defined as LCEA ≥ 38°.

International Hip Outcome Tool-12 (iHOT-12) Score

The iHOT-12 is the condensed version of the widely recognized International Hip Outcome Tool [10]. Each question is followed by a VAS ranging from 0 to 100, and the patient answers each question by marking the scale to reflect their limitation in that domain. The iHOT-12 score is calculated by averaging all scores and provides an overall assessment of the patient's hip function. Functional outcome was assessed with this patient-reported outcome (iHOT-12) on the day before surgery and at the 3-month, 6-month, and 12-month follow-ups.

Statistical Analysis

All data were analyzed using SPSS 22.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were summarized as mean ± standard deviation or as median and interquartile range (IQR). Continuous variables with a normal distribution, including age, BMI, α angle, LCEA, iHOT-12, overall surgical time, and traction time, were compared using a two-sample t-test. Quantitative data that did not follow a normal distribution, including months of follow-up, VAS, and intraoperative fluoroscopy counts, were compared using a two-sample Wilcoxon rank-sum test. Categorical variables, including gender, Tönnis classification, and complications, were presented as number and percentage and compared using the chi-square test or Fisher's exact test. A P-value < 0.05 was considered statistically significant.
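For readers who wish to mirror these test choices outside SPSS, a brief Python sketch follows; the per-patient vectors are hypothetical stand-ins generated from the reported summary statistics, since the raw per-patient data are not published.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-patient values standing in for the unpublished raw data.
traction_L = rng.normal(43.2, 8.4, 26)   # traction time, Group L (min)
traction_T = rng.normal(62.2, 8.6, 30)   # traction time, Group T (min)
vas_L = rng.integers(1, 4, 26)           # postoperative VAS, Group L
vas_T = rng.integers(2, 5, 30)           # postoperative VAS, Group T

# Normally distributed continuous variable -> two-sample t-test.
t_stat, p_t = stats.ttest_ind(traction_L, traction_T)
# Non-normal variable -> two-sample Wilcoxon rank-sum (Mann-Whitney U) test.
u_stat, p_u = stats.mannwhitneyu(vas_L, vas_T)
# Categorical complication counts -> Fisher exact test on the 2x2 table
# (rows: Group L, Group T; columns: injured, not injured -> 0/26 vs 6/24).
odds, p_f = stats.fisher_exact([[0, 26], [6, 24]])
print(p_t, p_u, p_f)
```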
General Results

Demographic data showed no significant difference between the two groups. The male/female ratios were 12/14 and 13/17 in Group L and Group T, respectively (P = 0.832). The average age was 38.2 ± 11.1 and 40.1 ± 9.9 years in Group L and Group T, respectively (P = 0.530). The mean BMI was 24.7 ± 3.5 and 24.1 ± 3.3, respectively (P = 0.448). The median follow-up duration was 16 months and 18 months in Group L and Group T, respectively (P = 0.107). Labral refixation was performed for each patient with a labral tear. No major complications or reoperations were reported in either group (Table 1).

Intraoperative Fluoroscopy

Radiation exposure with longitudinal outside-in capsulotomy was much lower than with transversal interportal capsulotomy: the median number of intraoperative fluoroscopy shots was 1 and 3, respectively. Performing longitudinal outside-in capsulotomy significantly reduced radiation exposure by 66.7% compared with transversal interportal capsulotomy (P < 0.001).

Tönnis Classification

The two groups showed no significant difference with respect to osteoarthritis (P = 0.709); five cases in Group L and seven cases in Group T presented mild degeneration on radiography (Tönnis grade 1).

Lateral Center Edge Angle (LCEA)

The preoperative LCEA was 31.5 ± 3.4 and 30.8 ± 3.1 in Group L and Group T, respectively (P = 0.386) (Fig. 3A-C).

Postoperative Pain

The median VAS in Group L and Group T was 2 and 3, respectively. Performing longitudinal capsulotomy significantly reduced postoperative pain by 33.3% compared with transversal interportal capsulotomy (P = 0.002) (Table 2).

iHOT-12 Score

The preoperative iHOT-12 score showed no significant difference between the two groups: 38.9 ± 13.7 and 41.1 ± 15.6 in Group L and Group T, respectively (P = 0.514). Hip arthroscopy dramatically improved the iHOT-12 of patients in both groups, and the postoperative iHOT-12 score at final follow-up was 79.3 ± 6.7 and 77.0 ± 7.9 in Group L and Group T, respectively (P = 0.141).

Complications

No iatrogenic chondrolabral injury (0%) and no postoperative neurapraxia (0%) were reported in Group L. In contrast, six cases (20.0%) of iatrogenic chondrolabral injury and eight cases (26.6%) of transient neurapraxia (fully recovered within 2 weeks) were recorded in Group T. The incidence of intraoperative iatrogenic chondrolabral injury and of postoperative neurapraxia was significantly reduced by performing longitudinal capsulotomy (P = 0.035 and 0.017, respectively).

Discussion

The current study found that performing longitudinal capsulotomy in an outside-in fashion did not consume additional time to accomplish hip arthroscopy but significantly reduced traction time and radiation exposure compared with the conventional technique. As a result, the number of complications and the level of postoperative pain were significantly decreased. Additionally, performing longitudinal capsulotomy could facilitate complete correction of the cam lesion. However, the influence of longitudinal capsulotomy on patient-reported outcomes at short-term follow-up was not obvious.

Traction Time and Traction-Related Complications

The most important finding of the present study is that performing longitudinal outside-in capsulotomy can reduce the traction time and traction-related complications of hip arthroscopy. The reported complication rate of hip arthroscopy varies from 0.5% to 8%, and most complications are traction related [3, 11-13]. Frandsen et al. [14] reported that up to 74% of patients complained of some kind of traction-related problem after hip arthroscopy, with neurapraxia the most commonly reported. Kern et al. [15] reported that the incidence of nerve injury after hip arthroscopy could be up to 13% and that traction-related neurapraxia was previously underestimated. Bailey et al. [16] reported a mean traction time of 46.5 min, with longer traction time and greater traction force resulting in groin numbness and pudendal neurapraxia. In the present study, all eight reported cases of transient neurapraxia came from the group with transversal capsulotomy. A possible explanation is that longitudinal capsulotomy can be performed without traction: traction time was dramatically reduced by around 20 min, lasting only 43.2 min on average in the present study. Moreover, Röling et al. [17] found that the traction force drops significantly after breakage of the vacuum seal of the labrum and additional capsulotomy; another possible reason, we speculate, is that the traction force decreased dramatically after capsulotomy. Additionally, to prevent traction-related complications we strictly followed the recommendations of minimizing traction force, limiting traction time, and using a well-padded perineal post [18, 19]. Another finding endorsing the traction advantage of longitudinal capsulotomy is that patients who underwent hip arthroscopy with longitudinal capsulotomy felt more comfortable and reported milder pain than those with traditional capsulotomy. In accordance with this finding, Martin et al. [20]
demonstrated that tissue damage could be decreased with less traction.

Radiation Exposure

Another finding of this study is that radiation exposure in hip arthroscopy can be decreased by performing longitudinal capsulotomy. Fluoroscopy is a near-essential part of traditional hip arthroscopy, helping surgeons with portal establishment and lesion correction. Meanwhile, the harm of radiation to the patient and the surgical team cannot be ignored. Seijas et al. [21] and Gaymer et al. [22] independently reported a mean exposure time of around 20 s per hip arthroscopic procedure, and Budd et al. [23] reported a mean radiation time of 66 s in their study. Intraoperative radiation exposure may be related to the surgical fashion and the surgeon's experience. The exposure in the present study was much lower than previously reported, with only several fluoroscopy shots taken in each procedure. This difference may be because performing longitudinal capsulotomy in an outside-in fashion with direct visualization further reduces the need for fluoroscopic assistance. Moreover, all procedures were performed by an experienced surgeon, and fluoroscopy was used only at critical steps.

Feasibility of Practice

Even with less fluoroscopic assistance, performing longitudinal outside-in capsulotomy reduced the incidence of iatrogenic chondrolabral injury and improved resection of the cam lesion. Traditionally, the labrum and cartilage cannot be seen during portal establishment, and labral penetration and cartilage scuffing are common in interportal capsulotomy. In contrast, performing longitudinal capsulotomy in an outside-in fashion provides direct visualization for all procedures, and iatrogenic chondrolabral injury can be almost eliminated. Additionally, longitudinal capsulotomy provides ideal visualization for exposure of the cam lesion, especially a distally located one. It can facilitate complete resection of the cam lesion, improve the postoperative outcome, and reduce the need for reoperation.

Capsule Preservation and Outcomes

The function of the iliofemoral ligament and the clinical benefit of restoring an intact capsule have been underlined, while the shortcomings of interportal capsulotomy have been noted. Fagotti et al. [24] found that more than half of the width of the IFL could be damaged after interportal capsulotomy. Several studies indicated that conventional interportal and T-shaped capsulotomy could significantly decrease the strength of the iliofemoral ligament and affect the stability of the hip joint [25-27]. Bolia et al. [28] found that superior outcomes could be expected in patients with capsular closure compared with unrepaired capsulotomy, and capsular closure has been suggested for large interportal capsulotomies or T-capsulotomy [29-31]. Few studies have reported whether capsular closure should be performed after longitudinal outside-in capsulotomy; in one study, Thaunat et al. [32] found that capsular closure after longitudinal outside-in capsulotomy might positively affect the final outcome. It is rational to expect that a patient undergoing longitudinal capsulotomy, with the function of the iliofemoral ligament and the integrity of the capsule maximally retained, would obtain a better outcome. In our practice, most patients with longitudinal capsulotomy achieved good capsular healing on short-term follow-up MRI (Fig. 3D).
However, in the present study, the divergence in patient-reported outcomes between the two groups was not statistically significant. One explanation could be that all procedures were performed by one experienced surgeon and capsule closure was performed in both groups. Another reason may be that the duration of follow-up was not long enough to distinguish the superiority of longitudinal capsulotomy. Limitations of the present study include the retrospective nature of the analysis. Patients with pincer deformity were excluded because of the high rate of conversion to T-capsulotomy, which may limit the generalizability of the findings. Even though a positive result was achieved, further limitations that should not be ignored are the relatively small sample size and the relatively short duration of follow-up. Although Wolfson et al. [33] and Nwachukwu et al. [34] found that most patients could achieve a minimal clinically important difference or a substantial clinical benefit by 6 months postoperatively, we maintain that further study with a longer follow-up is needed to establish the clinical efficacy of longitudinal capsulotomy. Besides the limitations mentioned above, several features of the present study should be noted. First, the difficulty or inconvenience encountered when performing rim resection and anchor implantation through a longitudinal capsulotomy should not be ignored; we prefer to add a DALA portal to address this problem. Second, the clinical results came from one experienced surgeon who has performed hundreds of hip arthroscopies with traditional capsulotomy; we do not assume that a surgeon with less experience could easily reproduce these results. Third, specific complications and underlying risks related to longitudinal capsulotomy could be encountered in the future. We recorded one case in which the indirect head of the rectus femoris was injured intraoperatively; fortunately, the injury was noticed and completely repaired with suture, and the patient had no postoperative complications. Additionally, the effect of longitudinal capsulotomy on the zona orbicularis has not been investigated.

Conclusion

Longitudinal outside-in capsulotomy, with less radiation exposure, reduced traction time, and decreased complications, could be a safe and feasible procedure in the arthroscopic treatment of cam-type FAI. Its clinical efficacy is not worse than that of traditional interportal capsulotomy at short-term follow-up.
Online Parameter Estimation using Physics-Informed Deep Learning for Vehicle Stability Algorithms

Physics-informed deep learning is a popular trend in the modeling and control of dynamical systems. This paper presents a novel method for rapid online identification of the vehicle cornering stiffness coefficients, crucial parameters in vehicle stability models and control algorithms. The new method enables designers to rapidly identify the front and rear cornering stiffness parameters so that the controller reference gains can be re-adjusted under varying road and vehicle conditions to improve the reference tracking performance of the control system during operation. The proposed vehicle-model-based deep learning method is compared with alternatives such as traditional neural network training and identification, and Pacejka model estimation with regression. Our initial findings show that, in comparison with these classical methods, high-fidelity estimations can be achieved with much smaller data sets, simple enough to be obtained from a lane-change or overtaking maneuver. To conduct experiments and collect sensor data, a custom-built 1:8 scaled test vehicle platform with real-time wireless networking capabilities is used. The proposed method can also predict derived vehicle parameters, such as the understeer coefficient, so it can be used in parallel with conventional MIMO controllers. Our $H_{\infty}$ yaw rate regulation controller test results show that reference gains updated with the proposed online estimation method improve tracking performance in both simulations and vehicle experiments.

I. INTRODUCTION

Many modern automatic control applications require parameter identification of the plant model. The identified parameters can be used for initial controller design, online adjustments, and monitoring of the health of the system. Recent developments in data gathering and processing in embedded control systems enable combining modeling and control applications with learning techniques. However, due to their computational complexity, control-oriented neural networks are still difficult to implement in many applications [1]. Additionally, noisy sensor data further complicate processing the collected data in a practical way. To overcome this problem, researchers have gravitated toward physics-informed deep learning methods [2]. There are numerous examples of modeling the plant with learning procedures to increase the fidelity of control systems. In [3], in order to push model predictive control (MPC) performance to its limits, the learning procedure for modeling is improved so that the prediction model ensures the best closed-loop performance. In [4], the performance of data-driven and physical modeling approaches for vehicle lateral-longitudinal dynamics is compared; the results show that the data-driven model bests both linear and nonlinear physical models. In [5], an online model-based reinforcement learning method is proposed to identify vehicle linear tire parameters to maximize maneuverability under variable road conditions such as unknown terrain. Automotive control is one of those fields where estimation of vehicle parameters can improve performance immensely. Automotive systems contain many complex dynamic mechanisms, many of which are treated as lumped-parameter models during the control system design phase.
For example, in vehicle lateral stability control, the LTI state-space model is widely used. In this model, the cornering stiffness coefficient is the hardest parameter to estimate: it requires experimental data, such as tire lateral forces, which in turn require an estimator (in addition to the IMU sensor data) or additional expensive force sensors. Pacejka's method [6] can capture the nonlinear behavior of tire models, but it requires many data points together with an estimator or additional sensors. In deep learning, a high number of data points and labeled data sets are required to train a network with high accuracy, and training such a network requires extensive experiments with many sensors. Another option is the so-called physics-informed parameter learning scheme. This type of deep network embeds the system physics in the loss function, which enables it to be trained with very few data sets and far fewer data points, so it can be trained for a single system with high accuracy. The main contribution of this paper is the identification of the vehicle cornering stiffness coefficients with a physics-informed learning algorithm fed directly by raw sensor data (i.e., vehicle lateral acceleration a_y and yaw rate r). The proposed model-based approach requires much smaller data sets and yields lower error values (i.e., better prediction) compared with conventional NN methods and Pacejka-model-based estimations in the literature. Hence, the system is suitable for use as part of a real-time control system. The rest of the paper is outlined as follows. Section II introduces the vehicle mathematical model used in the derivation of the physics-based algorithm and in simulations, together with our scaled vehicle prototype and the experimental setup. Section III explains the prediction of tire properties with the more conventional method given by Pacejka. In Section IV our physics-informed parameter learning method is presented and compared with traditional estimation using neural networks without model information. In Section V, first the model estimation accuracy of the physics-informed and conventional deep learning methods is compared, and then the proposed learning method is implemented in an H∞ vehicle yaw-rate regulation algorithm to show its effectiveness in experiments.

II. VEHICLE MODEL AND EXPERIMENTAL SETUP

A. Vehicle Mathematical Model

The linear single-track (bicycle) vehicle model is adopted in this work. It is extensively used in vehicle lateral stability control applications because of its simplicity and accuracy for the majority of control-oriented problems [7]. In this model, α_f and α_r are the front and rear tire slip angles, respectively; a and b are the distances from the front and rear wheels to the center of mass; δ_f and δ_r represent the front and rear wheel steering angles; v_x and v_y are the velocities of the vehicle's center of mass in the x and y directions in local coordinates; V is their vector sum and β (the sideslip angle) is the inverse tangent of their ratio; R is the turning radius and O is the center of rotation; r is the yaw rate of the center of mass; and F_ij are the forces acting on the front and rear wheels, where i = x, y and j = 1, 2. Linearization of this model is required to design such controllers. Hence, by assuming small tire slip angles we obtain a state-space form from (1)-(5).
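The matrices of this state-space form, ẋ = Ax + Bu, can be written out explicitly; the sketch below uses the textbook 2-DOF single-track expressions with states [v_y, r] and inputs [δ_f, δ_r], consistent with the variables defined above. The numeric parameters are placeholders, not the test vehicle's values.

```python
import numpy as np

def bicycle_state_space(m, Iz, a, b, Caf, Car, vx):
    """Standard linear 2-DOF single-track model: states x=[v_y, r], inputs u=[d_f, d_r]."""
    A = np.array([
        [-(Caf + Car) / (m * vx),          -vx - (a * Caf - b * Car) / (m * vx)],
        [-(a * Caf - b * Car) / (Iz * vx), -(a**2 * Caf + b**2 * Car) / (Iz * vx)],
    ])
    B = np.array([
        [Caf / m,      Car / m],
        [a * Caf / Iz, -b * Car / Iz],
    ])
    return A, B

# Placeholder parameters for a small-scale vehicle (not the paper's values).
A, B = bicycle_state_space(m=5.0, Iz=0.15, a=0.17, b=0.16, Caf=8.0, Car=9.5, vx=1.2)
x = np.array([0.0, 0.0])                 # [v_y (m/s), r (rad/s)]
u = np.array([np.deg2rad(5.0), 0.0])     # [front, rear] steering angles (rad)
xdot = A @ x + B @ u                     # x_dot = A x + B u
print(xdot)
```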
Cornering stiffness coefficient (C_af and C_ar) estimation is the biggest challenge throughout the linearization process, since it requires vehicle experiments and post-processing of the collected data (for example, curve fitting and finding the slope of the linear region in Pacejka's model) to identify those parameters. The resulting linear model is

ẋ = Ax + Bu.

Figure 2 is an example of the response of our linear vehicle model to a sinusoidal steering input. The accuracy of the data collected from the sensors (v_y and r) will be compared against this graph.

B. Test Vehicle and Experiments

In order to conduct experiments and collect sensor data to feed the physics-informed learning algorithm, we assembled a custom-built 1:8 scaled test vehicle (Figure 3) with four-wheel steering and four independently driven in-wheel electric motors. There are two Arduino-based control boards on the vehicle: the first collects the sensor data, runs the feedback control algorithms, and communicates through a WiFi module with the Matlab server on which the learning algorithms run; the second provides the motor actuation commands. Each wheel has an optical encoder, so the wheel rotation speeds and the speed of the center of gravity can be measured with high accuracy. A 9-axis IMU sensor (BNO055) is mounted at the center of gravity of the vehicle. Lateral acceleration and yaw rate values are read from the IMU sensor, and from the yaw rate and lateral acceleration, v̇_y is calculated as

v̇_y = a_y − v_x r.

III. PACEJKA'S TIRE MODELLING

Before moving to the proposed parameter estimation method, we analyze the results of the most common method used for cornering stiffness estimation, the Pacejka tire model (equations (7) and (8)), or 'magic tire formula', from [6], which for the front and rear tires takes the form

F_y = D sin(C arctan(Bα − E(Bα − arctan(Bα)))).

In these equations, D = max(F_y) and C = 1.30, while B and E are unknown. Using the same data sets collected for learning, the two unknown parameters in equations (7)-(8) are calculated with least-squares curve fitting. The collected data need to be pre-processed to calculate the tire lateral force and tire slip angle. For lateral tire force estimation, the sliding mode observer from [8] is applied. The front and rear tire slip angles are calculated using equations (9)-(10), the standard small-angle relations

α_f = δ_f − (v_y + a r)/v_x,    α_r = δ_r − (v_y − b r)/v_x.

The best-fitting curves for the front and rear tires are shown in Figure 4. The slopes of the linear regions of these curves are C_af = 4.59 and C_ar = 6.45, respectively. Since the number of collected data points is insufficient and the quality of the data set is low due to sensor noise, the error value according to equation (14) is 3.62; its significance will be discussed in the next section.

IV. PHYSICS-INFORMED PARAMETER LEARNING

The goal of the deep learning algorithm in this system is to estimate the C_af and C_ar values from the input data, which are r, ṙ, v_y, v̇_y, δ_1, δ_2, and v_x. In order to find an estimation, a simple deep neural network is created to take advantage of automatic differentiation and gradient descent. The network consists of 6 layers: an input layer, 3 fully connected layers with 20 neurons, 1 fully connected layer with 2 neurons, and a bounded output activation such that, as X moves from −∞ to ∞, its output Z moves from Z_mean(1 − Z_range) to Z_mean(1 + Z_range). The gradient of the loss with respect to the learnable parameters for backpropagation follows from automatic differentiation.
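As a concrete sketch of this architecture and of the physics loss described next, the following PyTorch fragment implements a bounded output layer with the stated Z_mean/Z_range behavior (a tanh squashing is assumed here; the paper's exact form may differ) and a single-track-model residual loss with the constancy penalty; all numeric values are placeholders.

```python
import torch

class BoundedOutput(torch.nn.Module):
    """Maps R -> (z_mean*(1 - z_range), z_mean*(1 + z_range)); tanh form assumed."""
    def __init__(self, z_mean=10.0, z_range=0.9):
        super().__init__()
        self.z_mean, self.z_range = z_mean, z_range

    def forward(self, x):
        return self.z_mean * (1.0 + self.z_range * torch.tanh(x))

def physics_loss(C, x, xdot, u, params, alpha=(1.0, 1.0)):
    """Residual of x_dot = A(C) x + B(C) u for the single-track model, plus a
    penalty keeping the per-timestep estimates C = (Caf, Car) constant in time."""
    m, Iz, a, b, vx = params
    Caf, Car = C[:, 0], C[:, 1]          # per-timestep estimates, shape (T,)
    vy, r = x[:, 0], x[:, 1]
    df, dr = u[:, 0], u[:, 1]
    # State equations of the single-track model, written out per timestep.
    vy_dot = (-(Caf + Car) * vy / (m * vx)
              + (-vx - (a * Caf - b * Car) / (m * vx)) * r
              + Caf * df / m + Car * dr / m)
    r_dot = (-(a * Caf - b * Car) * vy / (Iz * vx)
             - (a**2 * Caf + b**2 * Car) * r / (Iz * vx)
             + a * Caf * df / Iz - b * Car * dr / Iz)
    resid_v, resid_r = xdot[:, 0] - vy_dot, xdot[:, 1] - r_dot
    physics = alpha[0] * (resid_v**2).mean() + alpha[1] * (resid_r**2).mean()
    # Constancy loss: each timestep's estimate should match the trajectory mean.
    constancy = ((C - C.mean(dim=0, keepdim=True))**2).mean()
    return physics + constancy
```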
Taking the output of this layer as C_af, C_ar, the loss function is constructed from the governing differential equation as the residual of ẋ = Ax + Bu, elementwise scaled by α, where α is a 2-element constant vector used for balancing the two state equations and A, B, x, u are the state-space quantities from the mathematical model in Section II. An additional loss term is used to ensure that C_af and C_ar are constant in time, by taking the mean values of C_af and C_ar and comparing them to the values at each time step. With this setup, four different experiment results are examined. The deep learning setup is outlined in Figure 8(a). Z_mean is taken as 10 and Z_range as 90%. The network is updated with an adaptive moment estimation method with a learning rate of 0.001 and a decay of 0.0005 [9]. From four separate time-series trajectories of experimental data, C_af is estimated as 8.14 with 8.65% relative uncertainty and C_ar as 9.71 with 5.87% relative uncertainty. To calculate the error of this parameter prediction, experiment results are compared with simulations carried out with the predicted cornering stiffnesses. Because the data contain zero values, percent error cannot be used; instead, the error is formulated as in equation (14). The error values for these results are 2.09, 2.81, 2.18, and 3.06. The convergence plot for these four results is given in Figure 6. The advantage of physics-informed neural networks over traditional networks is that the physics-based approach can work with a single data set, rather than a large batch of labeled data, in a much shorter amount of time, which enables real-time online training [10]. Moreover, requiring a large batch of labeled data makes using traditional neural networks with experimental work challenging. Simulation results can be used to train traditional neural networks to predict results from experimental data; however, sensor noise and drift reduce the accuracy of the results. To demonstrate this, a traditional regression neural network consisting of a BiLSTM layer, 2 fully connected layers, and the custom output layer described above is trained using the procedure outlined in Figure 8(b). The error values for these results are 3.25, 4.12, 3.77, and 4.03. From the loss values, it can be seen that the results from the traditional neural network are sub-optimal (i.e., not accurate estimations) using the same experimental data sets. These loss values might be further improved with more extensive data collection, but the trend in the data appears sub-optimal in any case. The convergence plot for these 4 results is given in Figure 7.

V. CONTROL APPLICATION AND EXPERIMENTS

In this section, we investigate how online estimation can benefit control applications, using a MIMO H∞ controller design example with variable reference signal generation. Figure 9 shows the block diagram of the closed-loop system. Our proposed controller has two primary parts: a reference generator and a MIMO controller. The MIMO controller is designed so that the system remains stable under changes in the cornering stiffness while maintaining tracking performance. The reference generator uses the cornering stiffness values to generate the current MIMO controller reference; the parameter values it uses to calculate these commands are updated by online estimation from the model-based deep learning algorithm, as discussed earlier.

A. Reference Generator

The driver initiates the reference steering and velocity commands. In the reference generator, the desired lateral acceleration gain and yaw-rate gain are generated from these driver commands using equations (15) and (16), as given in [12], where the understeer coefficient K_us is defined by equation (17).
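Equations (15)-(17) can be illustrated with one common textbook convention for the understeer coefficient and the steady-state yaw-rate gain; the sketch below is offered under that assumption and may differ from the paper's exact expressions. Numeric values are placeholders apart from the identified stiffnesses.

```python
def understeer_coefficient(m, a, b, Caf, Car):
    """One common single-track convention: K_us = (m / L) * (b / Caf - a / Car)."""
    L = a + b
    return (m / L) * (b / Caf - a / Car)

def yaw_rate_reference(delta_f, vx, m, a, b, Caf, Car):
    """Steady-state yaw-rate gain r_ref / delta_f = vx / (L + K_us * vx**2)."""
    L = a + b
    K_us = understeer_coefficient(m, a, b, Caf, Car)
    return vx / (L + K_us * vx**2) * delta_f

# As the online estimator updates Caf and Car, the reference is recomputed,
# here with the coefficients identified in the paper (other values placeholders):
r_ref = yaw_rate_reference(delta_f=0.08, vx=1.2, m=5.0, a=0.17, b=0.16,
                           Caf=8.14, Car=9.71)
print(r_ref)
```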
The learning algorithm updates the C_af and C_ar values according to the data received from the vehicle and keeps the understeer coefficient in (17) updated, so that the reference generator block calculates its outputs based on the latest driving conditions.

B. Optimal MIMO Controller

The optimal MIMO controller shown in Figure 9 regulates the front and rear steering angles using state error signals to improve reference tracking. Both states of the vehicle model are used as feedback for the controller. The MIMO H∞ controllers are designed using mixed-sensitivity minimization, with methods available from sources such as [11], and are applicable over a range of cornering stiffness parameters. The input of the controller is the yaw rate (r), and the control outputs are the front (δ_1) and rear (δ_2) steering angles.

C. Comparison of PIDL and RDL Parameter Estimation Performance

In this section, using the state-space model of Section II-A, two MIMO H∞ controllers are designed using the cornering stiffness parameters estimated with the physics-informed deep learning (PIDL) and regression deep learning (RDL) methods. These controllers are then implemented on the test vehicle (Section II-B), and the typical lane-change maneuver test given in Figure 2 is performed to compare the estimation results of both learning methods from Section IV. The cornering stiffness estimates of PIDL are taken as C_af = 8.14 and C_ar = 9.71, and the coefficients from RDL are taken as C_af = 1.95 and C_ar = 5.23 (i.e., the means of the estimation sets). Figure 10 presents the results of the feedback control experiments on the test vehicle, showing the yaw-rate reference tracking performance of both controllers. The regulator designed with the PIDL cornering parameters has better yaw reference tracking than the RDL one. This verifies that the PIDL estimation has better modeling accuracy, since the controller designed using this model achieves better performance.

D. PIDL Experiments with Varying Drive Conditions

The discussion in the previous section shows that the prediction generated by the PIDL algorithm is more accurate even for short bursts of data such as can be obtained during a lane-change maneuver. Therefore, it is also possible to use this method to update vehicle controller parameters online to improve performance. As an example scenario, we start with the test vehicle cruising at constant longitudinal velocity v_x = 1.2 m/s and the vehicle controller operating with the parameters C_af = 8.14 and C_ar = 9.71 calculated earlier from the experiment data. When the vehicle moves to another surface where the road-tire physical interaction is much higher, the cornering stiffness coefficients change. The optimal MIMO controller can be designed with such changes in mind, but using the old coefficients in the reference generator yields a sub-optimal overall controller, since the understeer coefficient of the vehicle has changed in the new conditions. Figure 11 shows the results of such an experimental scenario, in which the proposed physics-informed deep learning algorithm calculates the new coefficient values as C_af = 2.36 and C_ar = 4.38.
In Figure 11, the control structure without the K_us reference update deviates from the yaw rate reference and becomes less responsive to the inputs, because the higher road-tire interaction is not compensated for by the outdated reference generator. On the other hand, the control structure with the updated reference value, i.e. with the online parameter estimation system, is largely unaffected by the road condition change and retains better yaw rate tracking capability.

VI. CONCLUSIONS

This paper introduces a novel physics-informed learning algorithm for the rapid estimation of vehicle tire cornering stiffness coefficients, suitable for use in conventional lateral stability control algorithms. Our simulation and experimental results show that, compared to a conventional regression-based learning algorithm and the widespread Pacejka method, the proposed method requires a much smaller data set to fit the collected data points. Hence, it allows real-time online parameter identification. The predicted coefficients are straightforward to apply and accurate enough to be used for reference gain updates that improve the performance of control algorithms under varying road surfaces. The performance of the online estimation is verified with experiments on the test vehicle by implementing it on an H∞ yaw-rate regulator. Future work includes the extension of the method to more detailed models, so that more parameters can be estimated with smaller data sets.
Existence of Multiple Positive Solutions for Choquard Equation with Perturbation

Tao Xie, Lu Xiao, and Jun Wang (School of Management and Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu 212013, China)

This paper is concerned with the following Choquard equation with perturbation:

-Δu + V(x)u = (1/|x|^α ∗ |u|^p)|u|^{p-2}u + g(x),  x ∈ R^N.  (1)

Equation (1) arises in various physical contexts, especially in the case where N = 3, p = 2 and g = 0. In that case (1) becomes the stationary nonlinear Choquard equation (2), also called the nonlinear Schrödinger-Newton equation. In general, many mathematicians are concerned with positive solitary solutions of the nonlinear generalized Choquard equation (3), where the powers satisfy p ≥ 2 and α ∈ (0, N). In order to obtain solitary solutions of (3), one inserts a standing-wave ansatz ψ(x, t) = e^{iλt}u(x) (λ > 0 a constant) into (3) and obtains the stationary equation, that is, (1) without the perturbation term and with a correspondingly shifted potential.

In 1954, paper [1] proposed model (2) in the description of the quantum theory of a polaron. Later, (2) was proposed by Choquard in 1976 as an approximation to Hartree-Fock theory for a one-component plasma [2]. In the 1990s the same equation reemerged as a model of self-gravitating matter [3,4] and is known in that context as the Schrödinger-Newton equation. In recent years, many papers have been concerned with the existence of solutions of (3). Lieb [2] proved the existence and uniqueness of the ground state of (2). Lions [5] obtained the existence of a sequence of radially symmetric solutions of (2) by variational methods. Papers [6,7] proved the existence of multibump solutions of (2). Paper [8] proved, by moving plane methods, that every positive solution of (2) is radially symmetric and monotone decreasing about some point; furthermore, the authors obtained the uniqueness of positive solutions of (2). Clapp and Salazar [9] proved the existence of positive and sign-changing solutions of (2) when R^3 and the potential are replaced by an exterior bounded domain Ω and a function V(x), respectively. Moroz and Van Schaftingen [10] showed the regularity, positivity and radial symmetry of the ground states for the optimal range of parameters, and they also obtained decay asymptotics at infinity for these ground states. The more general system (3) was considered in [11]. Moroz and Van Schaftingen [12] obtained the nonexistence and optimal decay of supersolutions of (3). Cingolani and Secchi [13] considered the existence of ground states for the pseudorelativistic Hartree equation. For semiclassical cases, the existence of multiple semiclassical solutions was considered in [14]. Paper [15] considered the existence, in the semiclassical regime, of standing wave solutions of a Schrödinger equation in the presence of nonconstant electric and magnetic potentials. Cingolani and Secchi [16] studied the semiclassical limit for the pseudorelativistic Hartree equation. Under assumptions on the decay of the potential, paper [17] proved the existence of positive solutions by variational methods and a nonlocal penalization technique.
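Since the displayed formulas did not survive extraction, it may help to record the equation and the energy functional usually associated with this class of problems in standard notation (a reconstruction, not a verbatim copy of the paper's displays):

```latex
% Choquard equation with perturbation and the associated energy functional
% (standard variational setting; reconstructed from the surrounding text).
\begin{equation}
-\Delta u + V(x)u = \Big(\frac{1}{|x|^{\alpha}} \ast |u|^{p}\Big)|u|^{p-2}u + g(x),
\qquad u \in H^{1}(\mathbb{R}^{N}),
\end{equation}
\begin{equation}
I(u) = \frac{1}{2}\int_{\mathbb{R}^{N}} \big(|\nabla u|^{2} + V(x)u^{2}\big)\,dx
- \frac{1}{2p}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}
\frac{|u(x)|^{p}\,|u(y)|^{p}}{|x-y|^{\alpha}}\,dx\,dy
- \int_{\mathbb{R}^{N}} g(x)\,u\,dx .
\end{equation}
```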
Motivated by the works mentioned above, in this paper we study the existence of multiple solutions to the nonlinear Choquard equation with perturbation. This kind of problem is often referred to as nonlocal because of the appearance of the term ∫_{R^N} ∫_{R^N} |u(x)|^p |u(y)|^p / |x - y|^α dx dy in the energy functional, which means that (1) is no longer a pointwise identity. The main difficulties in dealing with this problem lie in the presence of the nonlocal term and in the lack of compactness due to the unboundedness of the domain R^N. Under some conditions on the perturbation g, in the present paper we recover compactness and find two nontrivial solutions of (1) by variational methods.

In what follows, we assume that V ∈ C(R^N, R^+) satisfies suitable conditions. A solution is called a ground state solution (or positive ground state solution) if its energy is minimal among all the nontrivial solutions (or all the nontrivial positive solutions) of (1). A bound state solution refers to a solution with finite energy. We then state our main results.

Variational Setting

Throughout the paper we use standard notation; in particular, c and C denote positive constants. The main purpose of this section is to establish the variational setting for problem (1). We first recall the classical Hardy-Littlewood-Sobolev inequality (see [20, Theorem 4.3]).

In order to prove Theorem 1, we constrain the functional I on the set N = {u ≠ 0 : ⟨I′(u), u⟩ = 0}, usually called the Nehari manifold. It is well known that critical points of I lie in the Nehari manifold. Denote Φ(u) = ⟨I′(u), u⟩. In order to prove the existence of multiple nontrivial solutions of (1), we divide the Nehari manifold N into three parts N^+, N^0 and N^-, according to the sign of ⟨Φ′(u), u⟩. Obviously, only N^0 contains the element 0. Furthermore, it is easy to see that N^+ ∪ N^0 and N^- ∪ N^0 are both closed subsets of the working space.

Next we give some explanation for this partition of the Nehari manifold N. For u ≠ 0 we define the fibering map t ↦ φ_u(t) = I(tu), t > 0. Clearly, tu ∈ N with t > 0 if and only if φ′_u(t) = 0. It is well known that if the function φ_u has a unique global maximum point, then the set N is homotopic to the unit ball of the space; moreover, the set N is then a natural constraint for the functional I, meaning that if the infimum of I over N is attained by some u ∈ N, then u is a solution of (1). However, in our situation the global maximum point of φ_u is not unique, which leads us to partition the set N according to the critical points of φ_u. This kind of idea was first introduced by Tarantello in [21]; later, many mathematicians applied it to other problems, see for instance [22-24] and the references therein.

Now we are ready to study the properties of the sets N^± and N^0. Since ‖u‖ = 1, i.e. u lies on the unit sphere of the space, we infer from Lemma 3 that the relevant quantity is bounded above, so there exists a constant c_2 > 0 such that the corresponding estimate holds.

Proof of Theorem 1

In this section we give the proof of the main results. Before doing so, we study the properties of minimizing sequences for the functional I. Throughout the paper, lim_{n→∞} I′(u_n) = 0 means that lim_{n→∞} ‖I′(u_n)‖ = 0.

Lemma 6. Under the assumptions of Theorem 1, there exists a sequence {u_n} ⊂ N^+ such that I(u_n) → c^+ and I′(u_n) → 0 as n → ∞.

Solutions for the Choquard Equation with General Nonlinearity

In this section we look for positive solutions of the Choquard equation with a general nonlinearity f, where F(t) = ∫_0^t f(s) ds. Since we only care about the existence of positive solutions, in what follows we assume that f ∈ C^1(R^+, R) verifies appropriate conditions.
Quantum neural network autoencoder and classifier applied to an industrial case study

Quantum computing technologies are in the process of moving from academic research to real industrial applications, with the first hints of quantum advantage demonstrated in recent months. In these early practical uses of quantum computers, it is relevant to develop algorithms that are useful for actual industrial processes. In this work we propose a quantum pipeline, comprising a quantum autoencoder followed by a quantum classifier, which are used to first compress and then label classical data coming from a separator, i.e., a machine used in one of Eni's Oil Treatment Plants. This work represents one of the first attempts to integrate quantum computing procedures in a real-case scenario of an industrial pipeline, in particular using actual data coming from physical machines rather than pedagogical data from benchmark datasets.

INTRODUCTION

We are currently witnessing a time of intense growth and investment in quantum computing technologies, from both the academic and private sectors, aimed at a fast pace of advancement in the quest for the computational advantage brought by the practical use of quantum information processing. In particular, it is believed we are now experiencing what has been termed the Noisy Intermediate Scale Quantum (NISQ) computing era, i.e., quantum processing units (QPUs) are available with a number of non-error-corrected qubits between roughly 50 and 1000 [1]. While not allowing fault-tolerant quantum computing, these devices are becoming available worldwide to explore the frontiers of quantum algorithms, which exploit inherent quantum mechanical features such as superposition and entanglement to produce a radically different approach to computational problems [2]. There are several fields of application in which new quantum algorithms have been analyzed: quantum chemistry [3-6], optimization problems [7,8], machine learning [9-11], the solution of linear problems [12-14] and differential equations [15]. To overcome the problems due to the limited number of qubits available and to the absence of efficient error correction techniques, several proof-of-principle demonstrations have been carried out by focusing on so-called variational quantum algorithms, characterized by a hybrid approach in which the quantum processing unit (QPU) is seen as an accelerator alongside the classical CPU [16-18]. Several of these studies also fall within the emerging field of Quantum Machine Learning [9,19,20], which has even triggered the birth of dedicated quantum machine learning software [21-23]. Currently, most quantum machine learning algorithms are based on parametrized quantum circuits, and leverage an approach in which the optimization over the variational parameters is done on the classical CPU [19,20]. Indeed, these parametrized quantum circuits, often referred to as quantum neural networks [20,24,25], prove robust even in the presence of noise [16,26,27], which is inevitable in current implementations of quantum hardware, and are thus well suited for near-term NISQ devices. Here we test the use of quantum machine learning algorithms on a specific industrial use case.
In particular, we propose the application of a newly formulated quantum pipeline, comprising a quantum autoencoder algorithm [28-31] followed by a quantum classifier, applied to real data coming from a first-stage water/oil separator of one of Eni's oil treatment plants. The pipeline is compared with a classical autoencoder used to compress the original data, which are then used for a classification task. It is particularly relevant to notice that these quantum autoencoding algorithms can be run on presently existing quantum hardware, thus making such quantum machine learning algorithms readily usable with actual input data coming from a realistic source of industrial interest. While various models of variational autoencoders in the quantum domain have been proposed in the literature, for example for generative modelling tasks [31] and for the study of entanglement in quantum states [32], our implementation of the quantum autoencoder directly follows the architecture proposed by the authors in [28], which is often studied as a prototypical model in the quantum machine learning literature [33], and was even extended to feature input redundancy [34], as discussed in [29].

The manuscript is organized as follows. In Sec. II we explain and give the specifics of the industrial case study considered in this work. In Sec. III we introduce the classical neural network model of the autoencoder, and also discuss the clustering algorithm used to create the two classes for the classification problem. In Sec. IV we review the quantum algorithm developed for a continuously valued input neuron [35], from which the quantum algorithm for the quantum autoencoder is derived. In Sec. V we show the results obtained for the data compression task, comparing them with those obtained with the purely classical autoencoder. At last, in Sec. V B, we use the compressed data to implement a quantum classifier used to label the original data in a binary classification problem.

II. CASE STUDY

The industrial case study discussed in this work aims at testing classical and quantum machine learning approaches to analyze data coming from a piece of industrial equipment within one of Eni's Oil Treatment plants, shown in Fig. 1. The equipment is a separator, i.e. a vessel receiving a stream of high-pressure, high-temperature crude oil (left part of the figure, indicated with a black stream), which exploits gravity to separate three output streams: water (the heaviest component), indicated in the figure with a light blue stream; oil (the intermediate component), in the lower part of the figure indicated with a black stream; and gas (the lightest component), indicated with a light grey stream. The separator is regulated with three controllers: a pressure controller for the output gas stream, and two level controllers for the water and the oil streams. Notice that the controllers use PID (proportional-integral-derivative) controller equations to regulate the opening of valves on the output streams. In a realistic machine learning problem, we might wish to use all the measurements coming from the sensors installed on this component, as well as on some of the components installed upstream, in order to predict whether the behavior of the equipment is normal or faulty (i.e. working in a degraded mode).
However, due to the limitations in the complexity of the problems that can currently be faced with quantum computing, we will focus on a simplified problem involving only 4 variables:

• the oil level (LIC),
• the oil output flow (FT),
• the pressure (PI),
• the opening of the oil output valve (FRC).

FIG. 1. Snapshot of the separator. The separator is regulated with three controllers: a pressure controller for the output gas stream, and two level controllers for the water and the oil streams; the controllers use PID equations to regulate the opening of valves on the output streams.

Sensor measurements are sampled every 10 seconds and stored in data tables to be used for the training of the neural networks. The first step of the case study is the implementation of a dimensionality reduction procedure to compress the 4-dimensional input vector x = (x_FRC, x_FT, x_LIC, x_PI) into a 2-dimensional vector. This is done both via a standard classical neural network autoencoder and via a quantum autoencoder, introduced in Sec. III and Sec. IV respectively. The second step is to implement a classifier using the 2-dimensional latent vector from the compression step to classify the status of the component. In order to do so, we need a labeled training dataset associating an input x_i to a label y_i ∈ {0, 1} corresponding to the "ok" or "faulty" state, respectively. However, since 4 variables are too few to label the working status of the separator as "ok" or "faulty", we followed a different approach, as explained in the upper left panel of Fig. 2. We ran a binary clustering algorithm on the initial variables in order to identify two categorical states, named "Class A" and "Class B", and then used these categorical states as the labels for the classification task. The latent vector from the encoder is thus used as input for the classifier, which is trained to correctly predict the "Class A" and "Class B" states. The clustering algorithm used is the KMeans algorithm as implemented in the scikit-learn library [36]. This algorithm takes as input the desired number of clusters, in our case two, and tries to split the data into groups of equal variance. The centroids of the clusters were initialized uniformly at random. In Fig. 2 we show the result of the clustering procedure, where for ease of plotting only three of the four variables are shown. This categorical dataset is then used to train a classical and a quantum classifier, whose implementation details and results are discussed in Sec. V B. In Table I we summarize the findings of our work, showing the key figures (compression error and classification accuracy) for the classical and quantum pipelines considered in the case study.
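As an illustration of this labeling step, a minimal sketch of the clustering procedure (scikit-learn's KMeans, as named above; the data array is a stand-in) could read:

```python
# Sketch of the labeling step: binary KMeans on the four sensor variables.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000, 4)  # stand-in for the (FRC, FT, LIC, PI) samples
kmeans = KMeans(n_clusters=2, init="random", n_init=10, random_state=0)
labels = kmeans.fit_predict(X)  # 0 -> "Class A", 1 -> "Class B"
```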
III. NEURAL NETWORK AUTOENCODER

The most common use case of artificial neural networks is supervised learning, where the network is asked to learn a mapping from an input to an output space by having access to an example set of input-output pairs. One prominent example is classification tasks, where the network is presented with a labeled dataset M = ((x_1, y_1), (x_2, y_2), ..., (x_M, y_M)) ⊂ R^n × {0, 1, ..., c} consisting of a set of inputs x_i and the corresponding correct labels y_i, with c being the total number of classes the inputs can be divided into (see the upper left side of Fig. 3). Using this dataset, called the training set, a neural network can be trained in a supervised fashion to learn the relationship between the input variables and the expected classification results. When the training is complete, the neural network model can be used for inference, that is, for labeling previously unseen data. This property of neural networks, called generalization, is ultimately the key feature that distinguishes them from standard fitting techniques, making them incredibly powerful tools [37-40].

When dealing with real-world problems, such as classifying the operational status of a plant as "ok" or "faulty" based on the measurements from the sensors installed on the plant, it is often the case that a large number of input variables are available. In fact, measurements coming from tens of sensors need to be analyzed not only through their instantaneous values, but also through additional features computed over time intervals, such as moving averages and minimum/maximum trends. This leads to a situation where too many input variables are available in the dataset, and it is often ineffective to feed them directly into the neural network classifier. With such a large number of variables, correlation analysis and feature engineering are often performed to focus only on the most influential variables, and only after these pre-processing steps can the neural network be used effectively. Another strategy is to use a dimensionality reduction approach, consisting of computing a new set of variables, smaller than the initial one, that incorporates most (ideally all) of the information contained in the original data. These new compressed data are then used as inputs to the classifier, as shown in Fig. 3(a). In order to reduce the problem dimensionality, methods such as PCA (Principal Component Analysis) or SVD (Singular Value Decomposition) [39] are typically used. However, these methods are based on a linear decomposition of the initial variable space, and they may not be suitable when nonlinear relationships between the variables need to be taken into account.

A. Classical Autoencoders

An alternative method to reduce the dimensionality of the problem is to use autoencoders [37], as shown in Fig. 3(c). An autoencoder is a neural network composed of two modules, called the encoder and the decoder, designed in such a way that the subsequent application of the encoder and the decoder to the input data results in an output that is as close as possible to the input, i.e. the discrepancy between output and input is minimized. With such an approach, the encoder builds a compressed representation of the input data to be eventually used by the decoder to fully (and as faithfully as possible) reconstruct the input. This means that the compressed representation built by the encoder (often referred to as the latent vector) contains the same information as the initial input space, or at least minimal information is lost. Once the autoencoder has been trained to reconstruct the input, the latent vector can be used as the input space for the classifier; the classification problem can therefore be described as shown in Fig. 3(b). In our case study, we consider a neural autoencoder as shown in Fig. 3(c). The original input variables are fed to the input neurons, which are then passed to an intermediate hidden layer (shown in green) consisting of a number of neurons much smaller than the input. Finally, there is an output layer (shown in red) with the same number of neurons as the input. The neural network is trained in an unsupervised fashion in order to generate an output that is as close as possible to the input.
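A minimal sketch of such a 4-2-4 autoencoder, in the Keras library used later in Sec. V A (layer names are illustrative), might look like:

```python
# Minimal 4-2-4 autoencoder with sigmoid activations and MinMax-scaled inputs.
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
latent = tf.keras.layers.Dense(2, activation="sigmoid", name="latent")(inputs)
outputs = tf.keras.layers.Dense(4, activation="sigmoid")(latent)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_scaled, X_scaled, epochs=20, validation_split=0.25)

encoder = tf.keras.Model(inputs, latent)  # yields the 2-d latent vectors
```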
Thus, if it is possible to reconstruct the input (with a minimal loss of fidelity) starting from the inner layer, this means that the inner layer contains the same information as the input, and therefore we can use the compressed layer as an input for the classifier. The presence of non-linear activation functions within the neural network, such as the Rectified Linear Unit ReLU(x) = max(0, x) or the sigmoid s(x) = 1/(1 + e^{-x}), ensures that the network can capture non-linear relationships in the input variables better than PCA or SVD.

IV. QUANTUM DATA COMPRESSION

In order to use a quantum pipeline to analyze the classical data coming from the sensors, we need to encode such data in a quantum state to be used as the input of the quantum autoencoder. While it is known from the recent literature [11,41-45] that choosing a good encoding scheme is of key importance to ensure good expressivity and representation power of variational quantum algorithms, there is still no standard procedure for doing so. In our case, given the relatively simple and low-dimensional nature of the data sets to be analyzed, we choose a phase encoding strategy [35,46], which provides an effective way to load classical data into a quantum state and has already proved useful in other machine learning tasks such as pattern recognition [35,46-48]. In particular, a data sample x = (x_1, x_2, ..., x_N) ∈ R^N is encoded in the quantum state of n = log_2 N qubits as |ψ_x⟩ = (1/√N) Σ_i e^{i x_i} |i⟩, where the data x are first re-scaled to fit into an appropriate range, such as x_i ∈ [0, π]. This class of states is also known as locally maximally entangled (LME) states [49], and we refer to Refs. [35,46] for an extended discussion of these states for variational quantum procedures.

A. Quantum Autoencoder

Having fixed a data encoding strategy, we now build a variational quantum algorithm for data compression. In particular, borrowing from the classical machine learning literature, our goal is to implement a quantum autoencoder [28-30]. In classical autoencoders, the compression is built into the geometric structure of the neural network, since the input layer is followed by a much smaller hidden layer consisting of a number of neurons equal to the desired reduced dimension. This bottleneck forces the network to learn a low-dimensional representation of the inputs, which is stored in the intermediate hidden layer(s). However, this procedure cannot be straightforwardly applied to the quantum domain, because quantum computations follow a unitary, and thus reversible, evolution. In fact, while classically it is possible to perform fan-in (fan-out) operations, that is, to arbitrarily reduce (increase) the number of classical bits in the computation, such operations are irreversible, which prevents their direct implementation on a quantum computer. Put differently, it is not possible to eliminate or create qubits during the execution of a quantum computation. Nonetheless, it is possible to circumvent this issue as follows. Consider two quantum systems, denoted system A and system B, and let |ψ⟩_AB be the quantum state of the composite system AB. Our goal is to compress the information stored in the composite state into a lower-dimensional representation, for example given by the state of subsystem A only, with system B being safely discarded.
We can formalize this intuition in the following way: denote by E(θ) a quantum encoding (in the sense of compressing) operation depending on variational parameters (i.e. trainable weights) θ. The desired compression task then consists in the operation E(θ)|ψ⟩_AB = |φ⟩_A ⊗ |trash⟩_B, where the state |ψ⟩_AB of the composite system AB is compressed into the state |φ⟩_A of subsystem A only, and system B is mapped to a fixed reference state of choice, called the trash state, for example the ground state |trash⟩_B = |0⟩^{⊗|B|} (Eq. (2)). It is clear that the goal of the encoder is to disentangle the two systems in such a way that one of them, the trash system, goes to the fixed reference state, while the other contains all the original information from the full quantum state. In order to recover the original quantum state |ψ⟩_AB, it is then possible to act with a quantum decoder operation D(θ), defined as D(θ) = E†(θ); acting with the decoder on the compressed state (together with a fresh trash register) yields the original state. Thus, suppose we have compressed the information stored in the quantum state of a composite system into one of its subsystems. Then it is always possible to retrieve the original information, if needed, by coupling the information-carrying system with some new qubits initialized in the |trash⟩ state and then acting on them with the quantum decoder operator, as schematically represented in Fig. 4. Of course, this only holds in the ideal case where the encoder perfectly manages to disentangle the subsystems A and B, i.e., obtains the product state in Eq. (2). In practice this is never exactly the case, since the input state |ψ⟩_AB depends on the classical input data via the phase encoding, and these states cannot, in general, be exactly disentangled. In fact, after discarding the trash system B, the compressed state of A is no longer a pure state, but rather a mixed state given by the reduced density matrix obtained by tracing out B. However, upon optimization of the variational parameters θ, the trained encoder tries to create a final state as close as possible to the target product state in Eq. (2).

Training the quantum autoencoder

The initial quantum state |ψ⟩_AB is obtained by using phase encoding to load the classical information onto the phases of the quantum state, with the following scheme. Let X = {x_i | x_i ∈ R^N, i = 1, ..., M} be the set containing the classical data to be analyzed; the quantum autoencoder is then trained using the quantum states T = {|ψ_x⟩ = Σ_i e^{i x_i}|i⟩ | ∀x ∈ X}. In our specific case the classical data are four-dimensional (N = 4), and thus we only need n = log_2 N = 2 qubits to encode the data. This in turn implies that the compressed system A and the trash subsystem B consist of a single qubit each. Given the input data, the variational parameters θ of the encoder are optimized in order to rotate the trash qubit as close as possible to the target trash state, which we choose to be |trash⟩ = |0⟩. This is achieved by means of a training procedure whose aim is to find optimal parameters θ* minimizing the loss function characterizing the task, θ* = argmin_θ L(θ), where L(θ) is built from the expectation value of the trash qubit's Pauli-Z operator over the training set; together with the mean squared error (MSE), this is one of the most commonly employed loss functions in supervised regression tasks, which is also our case. Note that the loss function is faithful, in the sense that it reaches its global minimum L(θ*) = 0 only when ⟨Z_B⟩_j = 1 for all j = 1, ..., M,
that is, when the trash qubit is always and perfectly disentangled from the other qubit and mapped to the target trash state |0⟩. A schematic representation of the quantum circuit used for the training procedure is shown in Fig. 5(a).

Variational ansatz

The actual quantum circuit implementation of the encoder E(θ) (and hence of the decoder) is arbitrary, and different variational ansatzes have been proposed in the quantum machine learning literature [16-18,20]. In our case we are dealing with only two qubits, and the most general ansatz consists of repeated applications of single-qubit rotations and CNOT gates. Keeping the parameter count and the overall circuit complexity low, we propose a minimal yet efficient variational autoencoder consisting of two layers of Pauli-y rotations R_y(θ) = e^{iσ_y θ/2} and a CNOT, followed by a final layer of rotations, as schematically depicted in Fig. 5(b).

V. EXPERIMENTS AND RESULTS

In this section we discuss the experiments implementing the classical and quantum data analysis approaches described above for the data compression and classification tasks.

A. Data compression

Classical autoencoder. The classical neural network autoencoder was implemented with the Keras library of TensorFlow [50]; it consists of two dense layers in a 4-2-4 structure as in Fig. 3(c), with sigmoid activation functions. The input data consist of a time series with 2,873,893 samples, 25% of which are used as validation data and the rest for training. Before training, the features were transformed with a MinMax scaler, which scaled each feature to fit in the range [0, 1]. After the learning phase, the average reconstruction error ē, evaluated as in Eq. (5), amounts to 5%, and in Fig. 2 we show a comparison of the original against the reconstructed data, averaged by day, for the validation dataset. As we can see, the decoder shows quite good performance in the reconstruction of the input data for 3 of the 4 variables. For the 'LIC' variable, the median of the distribution of the reconstructed data coincides with that of the original data, though the fluctuations are not very well described. There is no obvious a priori reason for the imperfect reconstruction of this particular variable; this may well be a shortcoming of the autoencoding approach, which focuses more on the other variables to achieve a good-enough reconstruction scheme. In the following step we used the two variables from the compressed layer as input for a supervised classification algorithm, to predict the class assigned at the beginning through the clustering algorithm. We expect that, if the compressed vector is a suitable representation of the input data, a classification algorithm will be able to achieve very good performance.

Quantum autoencoder. The quantum autoencoder was simulated using a combination of PennyLane [21] with the TensorFlow [50] interface, as well as Qiskit [23]; the optimization was thus performed using the automatic differentiation techniques implemented by these libraries. While this is only possible when performing a classical simulation of the quantum algorithm, in realistic scenarios of optimizing a quantum circuit on real quantum hardware one can resort to parameter-shift rules [45,51] to estimate gradients and optimize the variational parameters. The variational circuit was trained using the Adam optimizer [52] with learning rate set to 0.001, to update the six variational parameters θ = (θ_0, θ_1, θ_2, θ_3, θ_4, θ_5).
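A compact sketch of this training setup (PennyLane, as named above) is given below; the mini-batch data, training loop and hyperparameter names are illustrative stand-ins rather than the authors' code:

```python
# Two-qubit autoencoder sketch: three Ry layers interleaved with CNOTs
# (six parameters, mirroring Fig. 5(b)); qubit 1 plays the trash role.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def trash_z(theta, x):
    # phase encoding of a rescaled 4-d sample onto 2 qubits
    qml.AmplitudeEmbedding(np.exp(1j * x), wires=[0, 1], normalize=True)
    qml.RY(theta[0], wires=0); qml.RY(theta[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(theta[2], wires=0); qml.RY(theta[3], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(theta[4], wires=0); qml.RY(theta[5], wires=1)
    return qml.expval(qml.PauliZ(1))   # trash-qubit observable

def loss(theta, batch):
    # faithful loss: zero only when the trash qubit always reaches |0>
    terms = [1.0 - trash_z(theta, x) for x in batch]
    return sum(terms) / len(terms)

theta = np.random.uniform(0, 2 * np.pi, 6, requires_grad=True)
data = np.random.uniform(0, np.pi, (20, 4), requires_grad=False)  # one mini-batch
opt = qml.AdamOptimizer(stepsize=0.001)
# for step in range(200):
#     theta = opt.step(lambda t: loss(t, data), theta)
```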
The training was performed using mini-batches of size 20 on a training set of 10,040 samples. Figure 6 shows the optimization process across training epochs, both for the training loss and for a validation set of 520 samples. Before the phase encoding, the classical data {x_i} were normalized as x_i ← π · x_i/‖x_i‖. It is clear that the quantum encoder is effectively trained, with the loss reaching a minimum value of L(θ*) = 0.0058. With a trained encoder, we can now investigate the quality of the data compression provided by the algorithm. The state of the qubits A and B after the quantum encoder is a general two-qubit state with amplitudes a, b, c, d where, if the encoder has been successfully trained, the probability of measuring qubit B in state |1⟩, p_1 = |c|² + |d|², is much smaller (ideally zero) than the probability of finding it in |0⟩, i.e. p_1 ≪ p_0 = |a|² + |b|². Thus, in order to obtain a compressed pure state for qubit A instead of a mixed one, we can post-select the state |Ψ⟩_AB on measuring the trash qubit in state |0⟩. Let Π_B^0 = |0⟩⟨0|_B be the projector onto state |0⟩ for system B; the composite state is then projected onto the (normalized) post-measurement state, leaving qubit A in the compressed state |ψ_c⟩_A. If we wish to retrieve the original information, now stored in compressed form in the state |ψ_c⟩_A of system A only, we can couple this system to a new qubit initialized in |0⟩ and then apply the quantum decoder, as shown in Fig. 4. An example of this procedure is shown in Fig. 7, where the reconstruction performance of the quantum autoencoder is evaluated on a test set of M = 1000 samples from the original dataset. In the case of Fig. 7, the average reconstruction error (see Eq. (5)) amounts to ē = 5.4%, confirming that the quantum autoencoder can successfully compress and retrieve the original information with low error. However, it is important to stress that these results were obtained using the Qiskit statevector simulator from IBM, which gave us direct access to the amplitudes of the quantum states and thus to the final phases of the decoded state |φ_decoder⟩ = D(θ)(|0⟩ ⊗ |ψ_c⟩_A). In fact, in a real-case scenario with quantum hardware it is not possible to perfectly retrieve the phases of the decoded state |φ_decoder⟩, since one would need to perform quantum tomography of that state, and even then the results could only be obtained up to an arbitrary constant, quantum measurement outcomes following Born's rule. Thus, while such a reconstruction test would be much harder to perform on a real device, the results in Fig. 7 obtained with the simulator are still relevant: they check the inner workings of the quantum autoencoder and confirm that it is actually able to perform the task it was designed for, even if this check is not currently accessible to a real experimenter.
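The post-selection step described above admits a short statevector-level sketch (Qiskit's quantum_info module; the trained encoder circuit enc and the rescaled sample x are assumed to exist):

```python
# Post-select the trash qubit on |0> and return the compressed state of A.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def compressed_state(enc: QuantumCircuit, x: np.ndarray) -> np.ndarray:
    psi_in = Statevector(np.exp(1j * x) / 2.0)  # phase encoding, N = 4
    psi_out = psi_in.evolve(enc)                # state after the encoder
    amps = psi_out.data.reshape(2, 2)           # index [q1, q0], little-endian
    psi_a = amps[0]                             # keep outcomes with trash qubit q1 = 0
    return psi_a / np.linalg.norm(psi_a)        # normalized |psi_c>_A
```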
There is a second possible approach which, albeit indirect, does not require state tomography and is thus more readily compatible with actual runs on quantum processors. The performance of the quantum autoencoder can be tested by measuring the fidelity [53] F(ρ_x, σ_x^θ) = Tr[ρ_x σ_x^θ] between the initial pure state ρ_x = |ψ_x⟩⟨ψ_x| obtained through the phase encoding (1) and the generally mixed state σ_x^θ obtained through the quantum circuit autoencoder (see Fig. 4), where E(θ) and D(θ) denote the superoperators corresponding to the encoder E(θ) and decoder D(θ) operators, respectively. Clearly, the larger the fidelity the better, since a large fidelity corresponds to the quantum autoencoder being able to recreate states that are very close to the initial ones. Using this figure of merit, post-selecting on the trash subsystem B is not necessary, since qubit A can be directly coupled to a new qubit initialized in |0⟩, the decoder applied, and Tr[ρ_x σ_x^θ] evaluated. There are various techniques to evaluate state overlaps on quantum hardware [35,54], the most common one being the SWAP test; here we leverage the so-called compute-uncompute method, whose circuit is shown in Fig. 8. Using a test set of M = 1000 samples, a simulation of the trained quantum autoencoder, even including stochastic measurement outcomes with 10^4 shots, yields an average fidelity Tr[ρ_x σ_x^θ] = 0.975 ± 0.001, which confirms again that the proposed variational quantum autoencoder is able to compress and later decode information.

B. Classification

Classical classifier. The supervised classification algorithm used is the KNeighborsClassifier as implemented in scikit-learn. KNeighborsClassifier assigns the class of a point by a simple majority vote among the k nearest neighbors of that point. The number of nearest neighbors is a parameter of the algorithm; after some trials we fixed it at k = 100, which corresponds to an optimal trade-off between performance and computational efficiency. The lowest panel of Fig. 9 shows the results of the classification, which we anticipate here but discuss later in comparison with the quantum algorithm results. In red and blue are the points correctly classified, while in yellow and green are those which were misclassified. The classification accuracy, evaluated as the percentage of correctly classified data, reaches a remarkably high value of 89.7%, indicating that the compressed vector is able to summarize the information carried by the input data.

Single-qubit quantum classifier. Once the quantum autoencoder has been trained to learn a compressed representation of the original information, the compressed quantum state can be used as input for a classification task. We expect that, if the compressed information is a suitable representation of the input data, the classification algorithm will be able to learn the classes assigned to the full-size input data through the clustering algorithm described in Sec. II. To do so, we use the information-carrying qubit obtained with the encoder E(θ) as input to a quantum classifier, which is trained to learn the desired clustering of the original data. A quantum classifier is made of two parts: a trainable parametrized operation U, which tries to map inputs belonging to different classes into two distant regions of the Hilbert space, and a final measurement, which is used to extract and assign the label. Since we are dealing with a single-qubit classifier, the most general transformation of a qubit is represented by the unitary matrix

U(α, β, γ) = [[cos(α/2), -e^{iγ} sin(α/2)], [e^{iβ} sin(α/2), e^{i(β+γ)} cos(α/2)]].

It is therefore reasonable to use this operation as the trainable block of the classifier, since it ensures the greatest flexibility. Actually, as discussed later, the angle β in Eq. (9) does not influence the measurement statistics of the qubit, hence it has no influence on the training of the classifier. For this reason it is kept fixed at β = 0, and the actual trainable gate used is U(α, 0, γ) = U(α, γ).
As for the label assignment, since the measurement process of a qubit has only two possible outcomes, these are interpreted as the two possible values of the labels, that is, the "Class A" and "Class B" states described in Sec. II.

FIG. 8 (caption). The fidelity between the initial pure state ρ_x = |ψ_x⟩⟨ψ_x| and the generally mixed state σ_x^θ obtained through the autoencoding procedure is obtained by counting the number of |00⟩ outcomes: dropping the subscripts for simplicity, Tr[P† σ P |0⟩⟨0|] = Tr[σ P |0⟩⟨0| P†] = Tr[σ |ψ⟩⟨ψ|].

More in detail, a label is assigned based on a majority vote over multiple shots of the same quantum circuit: an input is assigned to "Class A" if the majority of measurements gave |0⟩ as the outcome, and to "Class B" otherwise. Formally, given the state of the compressed qubit, the label is assigned by the decision rule that selects "Class A" whenever p_0 ≥ 1/2, where p_0 denotes the probability that the measurement yields the |0⟩ outcome. As mentioned earlier, one can easily check that p_0 does not depend on the angle β of the unitary U(α, β, γ), and for this reason it is set to zero, yielding the variational unitary U(α, 0, γ) = U(α, γ). The loss function used to drive the training of the unitary U(α, γ) is the categorical cross-entropy, computed from the correct labels y_i and the labels ŷ_i assigned by the quantum classifier; the optimizer used is COBYLA [55] as implemented in SciPy's Python package [56]. Figure 9 shows the results of the classification obtained after the optimization of the variational parameters (α, γ), for a test set of M = 10³ samples. The accuracy, measured as the ratio of correctly classified to total samples, is 87.4% when evaluated with an exact simulation of the quantum circuit. As is clear from the figure, the misclassified data are only those located near the edge connecting the two classes. In fact, in this region the samples are not neatly divided; rather, a blurred border exists. The quantum classifier, given its relatively simple structure, learns essentially a straight cut of the data in this region, thus committing some labeling errors. This should not come as a surprise, however, since it is known from the literature that, without expressivity enhancement techniques like data re-uploading, a single-qubit classifier can only learn simple functions (i.e. sine functions) of the input data [34,41,42,44,57,58]. In addition, recall that the classical data are loaded onto the quantum states by means of rotations; hence the dependence of the classification on the original classical data is strictly non-linear, with the encoder scrambling the information even more. It is interesting to note that the classification performance remains stable even when including sources of noise, such as stochastic measurement outcomes. In this case, using n_shots = 1024, the accuracy amounts to about 82.5%, with the uncertainty due to the stochastic nature of the simulation. In addition, the classifier proves robust even in tests performed on real quantum hardware. The circuit for the trained classifier was tested on IBM's ibmq x2 quantum chip (accessed May 2021), with a smaller test set of 75 samples due to limitations in the device usage.
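A minimal sketch of this single-qubit classifier and its COBYLA training (NumPy/SciPy; array names are illustrative, and the cross-entropy is written in its binary form) might read:

```python
# Single-qubit classifier: trainable U(alpha, 0, gamma), majority-vote label
# from p0, and COBYLA optimization of a binary cross-entropy loss.
import numpy as np
from scipy.optimize import minimize

def p0_of(params, psi_a):
    """Probability of measuring |0> after U(alpha, 0, gamma) on state psi_a."""
    alpha, gamma = params
    U = np.array([[np.cos(alpha / 2), -np.exp(1j * gamma) * np.sin(alpha / 2)],
                  [np.sin(alpha / 2),  np.exp(1j * gamma) * np.cos(alpha / 2)]])
    return np.abs((U @ psi_a)[0]) ** 2

def loss(params, states, labels):
    eps = 1e-9
    p1 = np.array([1.0 - p0_of(params, s) for s in states])
    return -np.mean(labels * np.log(p1 + eps) + (1 - labels) * np.log(1 - p1 + eps))

# res = minimize(loss, x0=[0.1, 0.1], args=(train_states, train_labels),
#                method="COBYLA")
# predict: "Class A" if p0_of(res.x, psi) >= 0.5 else "Class B"
```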
In this hardware test, using 1024 shots per circuit and averaging over 5 executions with different test samples, the classification accuracy (evaluated again as the percentage of correctly classified data) was found to be (82.3 ± 1.3)%, indeed very close to the simulation including only measurement noise, and not much different from the noiseless result.

VI. SUMMARY AND OUTLOOK

We have presented a direct comparison between quantum and classical implementations of a neural network autoencoder followed by a classifier algorithm, applied to sample real data coming from one of Eni's plants, in particular from a first-stage separator. While the achievement of a clear quantum machine learning advantage with variational algorithms is still disputed [11,59,60], this work sets a milestone in the field of quantum machine learning, since it is one of the first examples of the direct application of quantum computing software and hardware to the analysis of real data sets from industrial sources. As a first step, we implemented and analyzed the performance of a variational quantum autoencoder to compress and subsequently recover the input data. We verified its performance using full simulation of the wavefunction, which allowed us to evaluate the average reconstruction error at about ē = 5% (essentially identical to the classical autoencoder), thus confirming the capability of the quantum autoencoder to effectively store a compressed version of the original data set and then recover it. In addition, we checked the correctness of the quantum autoencoding procedure by evaluating the quantum fidelity between original and decoded quantum states, which were again found to be very similar to each other, even in the presence of simulated stochastic measurement noise. Once the optimal parameters for the quantum autoencoder were determined during the training phase, we used the compressed quantum state as input to a quantum classifier, with the goal of performing a binary classification task. The algorithm achieved an accuracy above 87%, absolutely comparable to that achieved in the classical setting using the neural network autoencoder followed by a nearest-neighbors classifier, thus indicating again that the quantum algorithm is able to correctly compress the relevant information of the input data.

FIG. 9 (caption, panels c-d). (c) Focus on the data mislabeled by the quantum classifier: the color indicates the assigned label, and the "cross" marker means that the data were misclassified; note that these samples lie on the border separating the two classes. The accuracy, evaluated as the percentage of correctly classified data, amounts to 87.4%. (d) Result of the classification using the classical autoencoder followed by the KNN procedure; note that the axes differ from the quantum case due to the normalization of the features. In this case the classification accuracy amounts to 89.7%.

We also tested the performance of the full quantum pipeline (the quantum autoencoder plus the classifier) on actual, currently available IBM quantum hardware, obtaining a classification accuracy of 82%, only slightly smaller than the ideal result. The small size of current quantum devices and their relatively high noise levels make it hard to run genuinely relevant, large-scale computations, keeping an effective quantum advantage out of reach.
We provided, on the other hand, a successful proof-of-concept demonstration that an original quantum autoencoder and a quantum classifier can actually reach the same level of accuracy as standard classical algorithms, on a data set that is sufficiently low-dimensional to be handled on actual near-term quantum devices. In addition, it is worth emphasizing that the quantum autoencoder obtains results quantitatively comparable to the classical algorithm using only 6 parameters instead of 16, thus displaying an increased efficiency, in terms of the number of trainable parameters, already reached on NISQ devices. With continuing progress in quantum technologies and quantum information platforms, we envision the execution of these same quantum algorithms on larger scales, possibly reaching the threshold of a classically intractable problem. We believe these results take the first foundational steps towards the application of usable quantum algorithms on NISQ devices for industrial data.
Influences on the use of antidepressants in primary care: All England general practice-level analysis of demographic, practice-level and prescriber factors

General practice (GP) antidepressant (AD) prescribing in England has almost doubled in the past decade: how do location, GP characteristics and prescribing selection influence the antidepressant prescribing rate (ADPR) and its growth?

| INTRODUCTION

The number of prescriptions for antidepressants (ADs) in England has almost doubled in the past decade. Data from NHS Digital show that 70.9 million prescriptions for ADs were given out in 2018, compared with 36 million in 2008 (Iacobucci, 2019). Evidence suggests that medication prescribing for many chronic health conditions, particularly in older persons, is often inappropriate (Spinewine et al., 2007), with associated increases in morbidity and economic burden (Simonson & Feinberg, 2005). In 2017, one in six adults in England was prescribed ADs. The United Kingdom figures, covering the NHS as a whole, saw a total of 7.3 million people given at least one AD prescription in 2017. This included more than 70,000 people under the age of 18 years. Those aged over 60 were twice as likely as those in their twenties to be on ADs. One in five people in towns such as Blackpool and Great Yarmouth was prescribed ADs in 2017, while in London the figure was less than 1 in 10 (www.pulsetoday.co.uk).

We have previously applied multivariate regression analysis to publicly available NHS data at the general practice level to identify how general practice factors relate to outcome in terms of glycosylated haemoglobin (HbA1c) (Heald et al., 2017, 2018). This approach can be generalised to other areas of medicine, including prescribing in psychiatry, and has proved informative in appreciating the year-on-year drivers of prescribing in several long-term conditions. This exploratory study using national-level data aimed to examine how a range of quantifiable and nationally audited factors at the general practice (family doctor practice) level relate to the variation in antidepressant prescribing rate (ADPR) across practices in England.

| METHODS

We collected the nationally published data on population demographics, practice characteristics and AD prescribing behaviour for each general practice and year in England, and used multivariate regression analysis to establish their link to the practice ADPR. Only general practices with more than 2,000 registered patients (i.e. requiring at least one full-time general practitioner) were included in the analysis. The population demographics, general practice processes and prescribing behaviour in each practice and year were analysed. We examined three different classes of possible factors that could influence the ADPR.

| Statistical analysis

Stepwise multivariate regression analysis was used to establish the link between these factors and the ADPR at general practice level. Only factors with a p-value < .05 were retained within the analysis. As many factors are not independent of each other, this analysis was carried out both for each class (location, characteristics and prescribing behaviour) and across all classes and factors.

| Ethical approval

As we used publicly available general practice-level data, with no individual patient data, it was not considered necessary to seek ethics approval for this study.

| RESULTS

Over the study period there was a 37% rise in the number of people recorded on the depression register and a 22% rise in total doses of ADs.
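Although no analysis code is published with the paper, the stepwise selection described in the Methods can be sketched as follows (statsmodels; the DataFrame and column names are hypothetical):

```python
# Illustrative forward stepwise selection for a practice-level regression:
# add the candidate covariate with the smallest p-value below alpha.
import pandas as pd
import statsmodels.api as sm

def stepwise_ols(df: pd.DataFrame, outcome: str, candidates: list, alpha=0.05):
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[outcome], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get) if pvals else None
        if best is None or pvals[best] >= alpha:
            break
        selected.append(best)
    return selected

# selected = stepwise_ols(practices, "adpr",
#                         ["imd_score", "pct_over_65", "pct_bame",
#                          "pct_depression_register", "cost_per_dose"])
```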
FIGURE 2. Cross-sectional analysis of the link between practice-level factors and AD prescribing. AD, antidepressant.

Total costs of ADs fell by 15%, with a 35% reduction in unit cost. The total number of different unique ADs at different dose levels increased from 94 to 107 in 2017-2018, with 2.1 billion doses of ADs being prescribed to a total population of 52 million people. The average ADPR, in Defined Daily Doses of AD per head of population per day, was 0.096, and 80% of practices lay between 50% and 150% of this value. This highlights the wide variation in the use of antidepressants by local practices (Figure 1).

| Multiple regression analysis

Location and demographics, including age, gender, ethnicity, social deprivation, population density and latitude, accounted for 62% of the variation (Figure 2). The results for each of the factors included in the final model are shown in Table S1. Practice characteristics on their own, including levels of comorbidities such as depression, could account for 62% of the variation. It is worth noting that the univariate analysis for the percentage of patients on the depression register accounted for 30% of the variation in overall ADPR. The prescribing behaviours accounted for 51% of the prescribing variation; this explained variation came from practice prescribing behaviour, including the number, mix and costs of the different ADs being prescribed. Practices with a higher cost per dose had a lower ADPR, while those using a higher number of different ADs had a higher ADPR. As many factors were codependent (for example, age, social disadvantage and BME ethnicity could impact comorbidities and prescribing behaviour), when all the factors were included the model could account for 81% of the variation in GP practice ADPR. Factors cross-sectionally linked with relatively more AD prescribing at general practice level included:

• a higher proportion of people with COPD and diabetes as major long-term conditions.

| Cost and AD prescribing rate

The multivariate regression highlighted that practices with a higher cost per dose had a lower ADPR. Also, those using a higher number of different ADs had a higher ADPR, and there was a significant reduction in unit cost over the study period.

The association of the comorbidities COPD and diabetes with increased AD prescribing highlights the importance of holistically addressing long-term health, given the impact of long-term physical conditions on mental health. The influence of GP practice size and location on AD prescribing has not been reported before. The finding that a higher overall social disadvantage level is associated with greater prescribing of ADs is not surprising. Conversely, the link between a higher proportion of BAME ethnicity in the GP practice and lower AD prescribing may be a marker of profound cultural influences on the way that individuals perceive symptoms of depression and the implications of those symptoms.

We have recently described, using national GP practice-level data, how the empowerment of individuals in managing long-term conditions has the potential to reduce GP practice-level prescribing of ADs (Heald et al., 2020). The relation between prescribing and patient empowerment, as assessed by the question "How confident are you that you can manage any issues arising from your condition (or conditions)?", was non-linear, with fewer antidepressants prescribed for both high and low responses. The difference between the lowest and highest decile of prescribing for this response was over 10%, and is potentially modifiable by changing practice approach.
Therefore, measures that facilitate patient empowerment can potentially decrease the level of antidepressant prescribing. We have shown that demographic factors, socioeconomic deprivation, population density and location are also important factors associated with increasing prescriptions. Further research is needed to examine whether ADs are being effectively prescribed, and whether areas of lower prescribing have lower rates of depression, higher unmet need or better use of non-pharmacological strategies. We nevertheless accept that we are applying, at an individual level, conclusions drawn from a general practice-level analysis. The limitations of our analysis are that it does not look at individual patient data and only includes data recorded in national registries. However, the data cover all GP surgeries in England and are therefore representative of the determinants of antidepressant prescribing across the country.

| CONCLUSION

The results represent a benchmark against which general practices can establish their baseline ADPR, incorporating their local demographic and practice profile, and then consider the mix and relevance of the various ADs to enhance the patient benefit of their prescribing protocols. We hope that our findings can inform local clinical behaviour and medicines management recommendations, and provide insight that is helpful to general practices.

DATA AVAILABILITY STATEMENT

Any requests for data extracts will be considered by Dr Adrian H. Heald as the corresponding author.

ETHICS STATEMENT

As we used publicly available, GP-level data, with no individual patient data, it was not necessary to seek ethics approval for this study.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section at the end of this article.
An analysis of equity in treatment of hip fractures for older patients with dementia in acute care hospitals: observational study using nationwide hospital claims data in Japan

Background: Globally, and particularly in countries with rapidly ageing populations like Japan, there are growing concerns over the heavy burden of ill health borne by older people, and the capacity of the health system to ensure their access to quality care. Older people with dementia may face even greater barriers to appropriate care in acute care settings. Yet, studies about the quality of care for older patients with dementia in acute care settings are still few. The objective of this study is to assess whether dementia status is associated with poorer treatment by examining the association of a patient's dementia status with the probability of receiving surgery and the waiting time until surgery for a hip fracture in acute care hospitals in Japan.

Methods: All patients with closed hip fracture were extracted from the Diagnosis Procedure Combination (DPC) database between April 2014 and March 2018. After excluding complicated cases, we conducted regressions with multilevel models. We used two outcome measures: (i) whether the patient received surgery or was treated by watchful waiting; and (ii) the number of waiting days until surgery after admission.

Results: 214,601 patients discharged from 1328 hospitals were identified. Among them, 159,173 patients received surgery. Both 80-89 year-olds (OR 0.87; 95% CI, 0.84, 0.90) and those 90 years old and above (OR 0.67; 95% CI, 0.65, 0.70) had significantly lower odds ratios for receiving surgery compared to 65-79 year-olds. Those with severe dementia had a significantly greater likelihood of receiving surgery compared to those without dementia (OR 1.21; 95% CI, 1.16, 1.25). Patients aged 90 years old and above had a shorter waiting time for surgery (Coef. -0.06; 95% CI, -0.11, -0.01). Mild dementia did not have a statistically significant impact on the number of waiting days until surgery (P = 0.34), whereas severe dementia was associated with fewer waiting days (Coef. -0.08; 95% CI, -0.12, -0.03).

Conclusions: These findings suggest physicians may be taking proactive measures to preserve physical function for those with severe dementia and to avoid prolonged hospitalization, although there are no formal guidelines on prioritization for aged and dementia patients.

Background

Global population ageing is progressing rapidly. By 2050, one in six people in the world will be aged 65 years or over [1]. Improved survival beyond the age of 65 is fueling population ageing, putting increased financial pressure on the systems in place to support the older population, including healthcare. Now more than ever, countries need to ensure equity in healthcare with special attention to older people. Older adults commonly perceive discrimination in healthcare settings due to their age [2,3]. These perceptions are supported by empirical research findings of ageism at different levels of the healthcare system, including age-biased clinical decision-making regarding diagnostics and treatments [4]. Negative attitudes of healthcare providers toward older patients are more commonly reported in acute health care settings, where targets and quick turnover are encouraged [5]. Studies suggest older people with dementia may face even greater barriers to appropriate care in acute care settings [6,7]. Japan has the most aged population in the world.
There are growing concerns over the heavy burden of ill health borne by older people, and the capacity of the country's health system to ensure their access to quality care. Much attention is given to the increasing prevalence of dementia and its estimated societal cost, as they pose serious threats to the sustainability of the health and social care systems [8]. Yet, studies about the quality of care provided to the growing number of older patients living with dementia are still few [9,10], and even fewer studies have considered the care they receive in acute care settings for medical conditions and co-morbidities other than dementia [11,12]. The present study is one of the first studies in Japan to use hospital claims data to examine the receipt of acute care by older patients with dementia from an equity perspective. The objective of this study is to quantitatively assess whether dementia status is systematically associated with the likelihood of older patients receiving poorer treatment in acute care settings. Specifically, it examines the association of a patient's dementia status with the probability of receiving surgery and the waiting time until surgery for a hip fracture in acute care hospitals in Japan, controlling for other patient factors and contextual factors.

Hip fractures are a growing public health problem in Japan with the progression of population ageing. The estimated number of new hip fracture patients per year more than tripled, from 53,200 new cases in 1987 to 175,700 in 2012 [13]. The international consensus is that hip fractures among older people should be operated on within 48 h of hospital admission [14], although research shows that hip fracture surgery within 24 h could produce considerably better outcomes [15,16]. On average across EU countries, more than three quarters (77%) of patients aged 65 and over admitted for a hip fracture were operated on within 2 days in 2015, with most of them being treated either on the day of their admission or the next day [14]. This is in accordance with common guidance in Europe that hip fracture patients should receive surgery on the day of, or the day after, admission [17]. In Japan, similar guidance has not been issued by a national health authority, although the Japanese Orthopaedic Association recommends surgery within a week of admission [18]. Data from Japan show that the mean duration of preoperative hospital stay for hip fractures was 4.5 days, and the mean duration of hospitalization was 36.8 days in 2014. The long waiting time from hospitalization to surgery is reportedly due mainly to difficulties in securing operating rooms [19,20].

Waiting time for surgery is a process indicator of the efficiency or quality of the health system response, often used in international reporting [14]. Studies concerned with equity in healthcare focus on differences in surgery waiting time by patient characteristics or by contextual factors such as urban or rural geography or hospital characteristics [21][22][23]. Many of them examine differences in waiting time for elective surgery by a patient's socioeconomic status. To the authors' knowledge, only one study has been conducted in Japan to date which considered the effect of a patient's dementia status on surgical delay for a hip fracture [20].
That study was conducted using data on 314 patients aged 60 years or above who were treated for hip fractures at one hospital between January 2006 and June 2012, and found no significant effect of dementia on surgical delay when controlling for other clinical and contextual factors. The present study uses a considerably larger database that covers over a thousand acute care hospitals across Japan.

Sources of data

The data were obtained from the Diagnosis Procedure Combination (DPC) database, a national administrative database commenced in 2003 with case-mix classification for use in acute care inpatient reimbursement. Details of the DPC data are provided elsewhere [24,25]. As of 2018, 1730 acute care hospitals out of all 7134 hospitals are reimbursed through the DPC [26]. Also, 69.2% of all general hospital beds are included in the DPC reporting system [27,28]. In this study, we utilized 4 years of cross-sectional data from FF1 (or Yoshiki 1) of the DPC data covering the period of April 2014 to March 2018, which are the Japanese fiscal years 2014 to 2017. In addition, we utilized detailed nationwide hospital data available from the Institute for Health Economics and Policy (IHEP) website [29] in order to append key hospital characteristics to each patient record.

Study population

We selected all patients with a closed hip fracture (closed fracture of neck of femur, closed pertrochanteric fracture, and closed subtrochanteric fracture; ICD-10 codes S72.00, S72.10 and S72.20, respectively). Although hip fracture is one of the most frequently encountered injuries in daily practice in Japan, because it is neither malignant nor an emergency, treatment varies widely depending on patient characteristics and environmental resources. Recent guidelines and studies recommend early surgical intervention [15,16,[30][31][32]. We analyzed two outcome variables: (i) receipt of surgical operation, coded as 1 if surgery was performed and 0 if the patient was instead managed by watchful waiting for spontaneous recovery; and (ii) the number of waiting days until surgery following admission, coded as a continuous variable with a value of 0 assigned if the surgery was performed on the day of admission.

Explanatory variables

The main explanatory variable of interest was the patient's level of dementia and its impact on their functional ability, as measured by the nationally standardized instrument used to assess needs and eligibility for care under the long-term care insurance system. For the purposes of DPC data entry, the assessment is applied at the time of hospital admission to all patients 65 years old and older. There are six possible assessment outcomes: having no dementia (coded as 0); being on a scale of I to IV, ranging from having some dementia but being basically functionally independent (I) to requiring constant care due to severe symptoms or behavior and communication difficulties (IV); or having symptoms so severe that specialized medical care is required (coded M). For the present study, these were grouped into three categories: no dementia (coded as 0), mild dementia with little or no loss of function (coded as 1, comprising I and II above), and moderate to severe dementia with significant loss of function (coded as 2, comprising III, IV and M above). Analyses were also adjusted for age group, sex, fracture type (closed fracture of neck of femur, closed pertrochanteric fracture, closed subtrochanteric fracture), comorbidities (Charlson comorbidity index, groups 0-2), coma level, and ambulance use.
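As a small illustration of the grouping just described, the six raw assessment outcomes can be mapped to the three analysis categories with a simple recode. This is a sketch only: the string codes below are stand-ins for however the levels are encoded in an actual DPC extract.

```python
# Sketch of the dementia-status grouping described above; the raw string
# codes are stand-ins for the actual DPC encoding.
import pandas as pd

RECODE = {
    "0": 0,                     # no dementia
    "I": 1, "II": 1,            # mild dementia, little or no loss of function
    "III": 2, "IV": 2, "M": 2,  # moderate to severe dementia
}

def group_dementia(raw: pd.Series) -> pd.Series:
    """Map the six raw assessment outcomes to the 0/1/2 study categories."""
    return raw.map(RECODE)
```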
The conditions above were routinely recorded in the DPC data, except the Charlson comorbidity index (CCI), which was calculated from patients' comorbidities at the point of admission using Quan's protocol [33]. While the other conditions were recorded at the point of admission, fracture type could be modified during the patient's hospital stay. Coma level was categorized into four consciousness depth levels using the Japan Coma Scale (JCS), which is routinely recorded in DPC data. Details of the JCS are described elsewhere [34]. Ambulance use was flagged when patients were transported by ambulance to reach the hospital; it was included as a proxy for the level of emergency, which can also affect the probability of and time until surgery.

Exclusions

We excluded all types of complicated cases from the study population in an attempt to equalize baseline conditions. We excluded patients who died within 24 h after admission, and those with co-existing severe trauma (e.g., brain bleeding), repeated surgery cases, or clinically complicated fractures, which include bilateral, multiple, or implant-related fractures, or fracture with dislocation. We also excluded patients with multiple admissions within the 4-year study period, multiple surgeries within one admission, and patients who received surgery more than 180 days after admission. In addition, we excluded patients under 65 years old, because the DPC system does not require recording of dementia status for those younger patients. The impact of these exclusions was subsequently assessed by sensitivity analyses.

Statistical models

For the first analysis, of the probability of receiving a surgical operation, we employed a multiple logistic regression model to obtain adjusted odds ratios (ORs) and 95% confidence intervals (CIs) associated with each explanatory variable, using the entire study population. Then, a multiple linear regression model was applied to the subset of data on patients who received surgery, to obtain regression coefficients and 95% confidence intervals associating each explanatory variable with the number of waiting days until surgery. The explanatory variables described previously were all set as compositional factors, whereas hospital factors (i.e., city level and hospital function) were set as contextual factors. We built each model in four steps: 1) age and sex only; 2) age, sex and dementia level; 3) age, sex, dementia level and other patient clinical factors; and 4) the full model, which included hospital and other contextual factors. Macro-level variance was calculated for each model using multilevel analysis. Details of multilevel analysis, including the calculation of macro-level variance, are described elsewhere [35]. Sensitivity analyses were also conducted for each of the exclusion criteria. All analyses were conducted using Stata 16.1.
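The two models can be sketched as follows in Python with statsmodels, in place of Stata. The DataFrame and variable names are hypothetical, and the logistic model below uses fixed effects only, so it approximates rather than reproduces the paper's multilevel logistic specification; the linear model for waiting days does include a hospital-level random intercept fitted by REML.

```python
# Hedged sketch of the two analyses; file and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dpc_hip_fracture.csv")  # hypothetical study extract

# (i) Probability of receiving surgery: logistic regression on the
# compositional factors, reported as adjusted odds ratios.
logit = smf.logit(
    "surgery ~ C(age_group) + C(sex) + C(dementia) + C(fracture_type)"
    " + C(cci_group) + C(coma_level) + ambulance",  # ambulance coded 0/1
    data=df,
).fit()
print(np.exp(logit.params))  # adjusted odds ratios

# (ii) Waiting days until surgery, operated patients only: linear mixed
# model with a random intercept per hospital, fitted by REML.
operated = df[df["surgery"] == 1]
mixed = smf.mixedlm(
    "waiting_days ~ C(age_group) + C(sex) + C(dementia) + C(coma_level)",
    data=operated,
    groups=operated["hospital_id"],
).fit(reml=True)
print(mixed.summary())
```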
Sample extraction and characteristics

From a total of 572,983 patients with a closed hip fracture recorded during the study period (April 2014-March 2018), 554,225 patients were extracted after confirming the target disease name with ICD-10 codes. Secondly, clinically complicated cases were excluded, reducing the patient pool to 264,125. Thirdly, 49,524 patients were excluded due to missing values and limitations of the hospital data. As a result, 214,601 patients discharged from 1328 hospitals were identified as the study population. Among them, 159,173 patients from 1170 hospitals received surgery and were thus included in the secondary analysis of waiting days until surgery. The sample extraction process is summarized in Fig. 1.

Table 1 shows the baseline characteristics of the study sample and of those who received surgery. Females accounted for 77.9% of the total study sample, and the most common age group was 80-89 years old (49.2%). The number of those who were not diagnosed with dementia was 111,414 (51.9%), whereas 58,400 (27.2%) and 44,787 (20.9%) were diagnosed with mild and severe levels of dementia, respectively. For the 159,173 patients who received surgery, the mean number of waiting days was 3.66 (SD 3.72), with a median of 3 days. This indicates a longer waiting time than what is widely recommended in Europe, but is in accordance with the relevant guidance in Japan [18].

Analysis 1: receipt of surgery

While the majority of compositional factors affected the probability of an older hip fracture patient receiving surgery with statistical significance, the impact of contextual factors was rather negligible. In the full model, while city level did not significantly affect the probability of receiving surgery, hospital function had a rather high impact: the probability of receiving surgery was highest in regional support hospitals (OR 15.07; 95% CI, 11.13, 20.42), followed by university and advanced hospitals (OR 10.90; 95% CI, 7.05, 16.83) and other types of hospitals with over 200 beds (OR 8.36; 95% CI, 6.73, 10.40), compared to other types of hospitals with under 200 beds. The reduction of macro-level variance from model 3 (4.04) to model 4 (2.83) also showed the impact of these contextual factors.

Analysis 2: waiting days for surgery

The results for waiting time until surgery are shown in Table 3. Patients who were 90 years old and above had a shorter waiting time (Coef. -0.06; 95% CI, -0.11, -0.01) compared to those aged 65 to 79, while the 80-89 year-old group did not (P = 0.10). In terms of dementia, similarly to the results for receipt of surgery, mild dementia did not have a statistically significant impact on the number of waiting days until surgery (P = 0.34), whereas severe dementia was associated with a shorter waiting time (Coef. -0.08; 95% CI, -0.12, -0.03). Deeper coma levels incrementally lengthened waiting days; coefficients for coma levels 2 and 3 were 0.26 (95% CI, 0.00, 0.51) and 1.18 (95% CI, 0.19, 2.17), respectively. Similarly to the results for receipt of surgery, city level was not associated with waiting time until surgery. However, two of the variables for hospital function had statistical significance, one of which was regional support hospital.

Sensitivity analysis

Sensitivity analyses were conducted for each of the potentially arbitrary exclusion criteria. First, we re-ran all models including patients meeting each of the clinically complicated case criteria (i.e., death within 24 h after admission, co-existing severe trauma, repeated surgery, clinically complicated fractures, multiple admissions within the 4-year study period, and multiple surgeries within one admission). Then, we varied the cut-off for waiting days from admission to surgery, which was limited to 180 days in the present study, testing cut-offs from 30 days to 365 days. We confirmed that the main results were invariant.

Discussion

This study found no evidence of unfavorable treatment of patients with dementia for a hip fracture in acute care hospitals in Japan. On the contrary, the findings suggest that patients with severe dementia may be prioritized for surgery, resulting in a greater likelihood of them receiving surgery.
Furthermore, they may be given a shorter waiting time compared to patients without dementia or with only mild dementia who are otherwise similar in terms of clinical and contextual characteristics. Even patients with mild dementia are treated no differently from patients without dementia.

(Table notes: a Dementia levels 1 and 2 represent the "degree of independence in daily life for elderly people with dementia" criteria I-II and III-IV/M, respectively. b Coma level refers to the Japan Coma Scale (JCS), which has four decisive levels of consciousness. c A designated city has a population over 500,000 and is designated by order of the Cabinet of Japan under the Local Autonomy Law. d Advanced hospitals include 6 national centers for cancer, circulation and global health; a regional support hospital has over 200 beds and meets requirements such as a referral rate over 80% for outpatients.)

With regard to age, the study found that very old patients in their 80s and 90s are less likely to receive surgery compared to otherwise similar patients who are between the ages of 65 and 79. This result is concerning given that our analysis controlled for comorbidities and functional level. In other words, the observed difference cannot be explained by the possibility that the non-receipt of surgery among the older-old patients was clinically warranted, and thus ethical, due to contraindications or lower levels of functioning. However, for those who did receive surgery, the very old patients tend to have a shorter waiting time compared to the younger-old patients. These findings suggest that, although there are no formal guidelines on patient prioritization, physicians may be taking proactive measures to preserve physical function through surgery for those who are younger and for those with severe dementia. Once the decision to perform surgery is made, it appears that older patients and those with severe dementia are prioritized to avoid prolonged hospitalization for these patients, for whom the consequences are likely to be negative. One study from Germany suggests conducting preoperative cognitive assessment (e.g., the Mini Mental State Examination; MMSE) for very old patients, arguing that cognitive impairment is an important prognostic factor for the development of perioperative complications and the duration of the hospital stay [36]. In line with this suggestion, our findings indicate that physicians in Japan, knowingly or unknowingly, prioritize patients based on their cognitive function, thereby helping to avoid undesirable outcomes. However, the dataset we analyzed limits our understanding of the true causes of the observed patterns of treatment. We would like to think that the basis for the expedited surgery of patients with severe dementia and those who are very old is clinical benefit to the patient. However, it is also possible that hospitals are prioritizing and discharging these patients with complex needs, who tend to have prolonged hospital stays, which can reduce turnover of hospital beds and reduce hospital revenue under the prospective payment system. In fact, additional analysis from our study showed that the patients with dementia also had shorter lengths of hospital stay following surgery compared to patients with no dementia (Table 4). Qualitative research on the clinicians making these decisions would be informative.
Whether the true driving force of this pattern is perceived benefit to the patient or financial incentive for the hospital, or both, the result for patients with severe dementia and very old patients is that they have shorter waiting times until surgery, which in general is a good outcome. As these patients will require longer periods of recuperation and rehabilitation following discharge, early discharge should be followed by a supported discharge [5]. This study also found that contextual factors, and especially the type or function of the hospital in which the patient received care, had a significant impact on the probability of the patient receiving surgery for their hip fracture and on their waiting time until surgery, above and beyond the effects of patient characteristics. The positive finding is that patients are not simply disadvantaged by their rural residence. Given that financial barriers to healthcare are minimized in Japan by the national health insurance system, geography, or rural residence, is one of the major concerns related to equity in healthcare. This study found that patients with comparable individual characteristics living in remote areas are just as likely to receive surgery, without delay, as those living in urban areas, as long as they can seek care in high-functioning hospitals. Additional analysis showed a similar pattern in the length of hospital stay, in which rural residence had no impact but the hospital's functional level made a significant difference in the patient's duration of hospitalization (Table 5).

Limitations

The study population only included those patients experiencing hip fractures who received treatment in an acute care hospital which reports to the DPC data system. It did not include those patients who were admitted to hospitals outside the DPC reporting system.

Conclusion

We found that hip fracture patients with severe dementia received surgery with a greater likelihood and a shorter waiting time compared to patients without dementia or with only mild dementia. With regard to age, very old patients in their 80s and 90s are less likely to receive surgery compared to patients between the ages of 65 and 79. For those who did receive surgery, the very old patients tend to have a shorter waiting time. These findings suggest that physicians providing acute care for hip fractures in hospitals in Japan may be taking proactive measures to preserve patients' physical function and to avoid prolonged hospitalization based on their age or dementia level, in the absence of formal guidelines on patient prioritization. In terms of contextual factors, rural residence in itself was not a disadvantage for these patients seeking care in acute care hospitals; rather, the functional level of the hospital in which they sought care was more likely to affect their likelihood of receiving surgery and the waiting time until surgery. Further study is required to elucidate the extent to which the observed treatment pattern serves the interests of the patient, the healthcare workers, and hospital business administration.
Affective and cognitive empathy in adolescents with autism spectrum disorder

The broad construct of empathy incorporates both cognitive and affective dimensions. Recent evidence suggests that subjects with autism spectrum disorder (ASD) show a significant impairment in empathic ability. The aim of this study was to evaluate the cognitive and affective components of empathy in adolescents with ASD compared to controls. Fifteen adolescents with ASD and 15 controls underwent paper and pencil measures and a computerized Multifaceted Empathy Test. All measures were divided into mentalizing and experience sharing abilities. Adolescents with ASD, compared to controls, showed deficits in all mentalizing measures: they were incapable of interpreting and understanding the mental and emotional states of other people. Instead, in the experience sharing measures, the adolescents with ASD were able to empathize with the emotional experience of other people when they expressed emotions with positive valence, but were not able to do so when the emotional valence was negative. These results were confirmed by the computerized task. In conclusion, our results suggest that adolescents with ASD show a difficulty in cognitive empathy, whereas the deficit in affective empathy is specific to negative emotional valence.

INTRODUCTION

Autism Spectrum Disorder (ASD) is characterized by a triad of qualitative impairments in social interaction, communication, and restricted, repetitive, and stereotyped behaviors (American Psychiatric Association, 2000). An important feature of the proposed criteria in DSM-5 for ASD is a change from three (the autistic triad) to two domains: "social/communication deficits" and "fixated and repetitive patterns of behavior" (Wilkinson, 2012). These difficulties often make it very hard for people with ASD to be successful members of society and can present very serious challenges to parents, teachers, and other professionals. Major difficulties in social interaction have been a defining feature of individuals with autism (Fletcher-Watson et al., 2014). People with ASD often show an impaired comprehension of other people's mental states, such as thoughts, beliefs, and intentions (Frith and Happé, 1994; Frith and Frith, 2003; Jones et al., 2010; Gaigg, 2012; Schwenck et al., 2012). Recent studies have shown that subjects with ASD have difficulty not only in attributing another person's mental state but also in the capacity to respond to another person's mental state with an appropriate emotion (Sucksmith et al., 2013). These abilities seem to be involved in the multifaceted construct of empathy (Zaki and Ochsner, 2012). In agreement with recent literature (Jones et al., 2010; Baron-Cohen, 2011; Dziobek et al., 2011; Schwenck et al., 2012; Zaki and Ochsner, 2012), empathy should no longer be considered a unitary concept; instead, it comprises at least two components (Singer, 2006; Decety and Meyer, 2008; Dziobek et al., 2008). In fact, empathy includes the ability to understand what others are thinking or feeling, without necessarily "resonating" with that feeling state (cognitive empathy), and the ability to emotionally "resonate" with other people's feelings while understanding that they are distinct from one's own (affective empathy; Jones et al., 2010; Schwenck et al., 2012).
The cognitive dimension of empathy requires complex cognitive functions, including perspective-taking and mentalizing (Shamay-Tsoory et al., 2002; Shamay-Tsoory, 2011; Zaki and Ochsner, 2012), whereas affective empathy includes experience sharing of other persons' internal states (Zaki and Ochsner, 2012). Emotional contagion is a precursor of affective empathy, whereby embodiment entails forming a representation of the other person's feelings, and thereby sharing their experience (Hadjikhani et al., 2014). Thus, mentalizing and experience sharing apparently represent two aspects of the same object, i.e., understanding and responding to another person's internal states, involving different mental systems. Mentalizing ability examines theory of mind (ToM) capacity by asking subjects to draw explicit inferences about the mental states of other people. Experience sharing is the tendency to take on, resonate with, or "share" the emotions of others, and it is often tied to a mechanism known as "internal resonance" (Zaki and Ochsner, 2012).

It is widely accepted that subjects with ASD do not possess a fully functioning ToM; even high-functioning adults with ASD may struggle with complex ToM tasks (Ponnet et al., 2004; Fletcher-Watson et al., 2014). However, affective impairments found in people with ASD are mainly related to the cognitive recognition and processing of emotions, rather than to the actual ability to feel emotional distress or concern. The lack of a clear distinction between affective and cognitive empathy has led to an incomplete understanding of the empathic abilities of individuals with ASD. Interestingly, only a few studies have formally assessed empathy in individuals with autistic conditions (Dziobek et al., 2008; Jones et al., 2010; Schwenck et al., 2012). Dziobek et al. (2008) showed an impairment in cognitive empathy, but the presence of normal empathic concern (affective empathy), in adults with Asperger syndrome (AS), based on self-report questionnaires such as the Interpersonal Reactivity Index (Davis, 1980; Rogers et al., 2007) and the Multifaceted Empathy Test (MET; Dziobek et al., 2008). Moreover, two further studies (Jones et al., 2010; Schwenck et al., 2012), using only paper and pencil measures, have confirmed that ASD is characterized by difficulties in mentalizing ability (cognitive empathy), but not in affective empathy (Lockwood et al., 2013). The study of social skills in adolescents with ASD is also crucial for the construction of rehabilitation paradigms to improve empathic capacities. For this reason, in this study we investigated empathic ability in adolescents with ASD compared to controls, using both paper and pencil and computerized measures, divided into mentalizing and experience sharing abilities in accordance with Zaki and Ochsner's (2012) model, to evaluate the presence of a dissociation between cognitive and affective empathic abilities in this population.

MATERIALS AND METHODS

The study included 30 participants: 15 adolescents with ASD (11 boys and 4 girls; mean age ± SD: 15.11 ± 4.89 years) and 15 control subjects (10 boys and 5 girls; mean age = 16.50 ± 6.23 years), recruited to match the ASD group with respect to age and education. ASD participants were selected by the Reference Regional Centre for Autism, Abruzzo Region Health System, L'Aquila (Italy).
The ASD diagnoses were made by experienced clinicians according to the new criteria of the DSM-5 (American Psychiatric Association, 2013) and confirmed with the Autism Diagnostic Observation Schedule, Second Edition (ADOS-2; Lord et al., 2012). Socio-demographic and clinical information on all the participants is summarized in Table 1. The parents of the adolescents provided informed consent to participate in the study.

MENTALIZING MEASURES

First-order false belief test
This task was designed to elicit a response demonstrating the ability to make inferences about another individual's mental state, namely, that a character in the story holds a false belief. First-order false beliefs require a subject to make an inference about the state of the world. To assess first-order ToM, two stories were used: the Washing Machine Task (Rowe et al., 2001; Mazza et al., 2007) and the Cigarettes Task (Happé, 1994). Each subject obtained a score ranging from 0 to 1 for each question. If the subject gave a correct answer to both first-order stories, s/he had a global score for first-order ToM equal to 2 (above-chance performance).

Advanced Theory of Mind Task
This task is an Italian adaptation of a cognitive task used by Blair and Cipolotti (2000) and proposed in the literature by Happé (1994). The task consists of a short version of 13 vignettes, each accompanied by two questions: the comprehension question "Was it true, what X said?," and the justification question "Why did X say that?" The 13 story types included Lie, White Lie, Joke, Pretend, Misunderstanding, Double Bluff, and Contrary Emotion. Each subject obtained a score ranging from 0 to 1 for each question. The maximum score is 13.

Basic Empathy Scale-Cognitive Subscale
The Basic Empathy Scale (BES) comprises a total of 20 items (Jolliffe and Farrington, 2006; Albiero et al., 2009). The cognitive empathy subscale (CE subscale, nine items) measures the ability to understand another person's emotions. Each item (e.g., "I can often understand how people are feeling even before they tell me") asks participants to express their degree of agreement on a 5-point Likert-type scale, ranging from 1 ("strongly disagree") to 5 ("strongly agree"). The BES has demonstrated good validity (Jolliffe and Farrington, 2006; Albiero et al., 2009). Cronbach's α coefficient was calculated to examine the internal consistency of the scale, considered globally and in its two dimensions, as yielded by the confirmatory factor analysis. The results showed satisfactory internal consistency for both the scale and its subscales, given that the global scale α coefficient was 0.87 and the cognitive subscale α value was 0.74 (Albiero et al., 2009).

EXPERIENCE SHARING MEASURES

The Eyes Task is a revised version of the "Reading the Mind in the Eyes Test" (Baron-Cohen et al., 2001). In brief, participants are given 36 photographs depicting the ocular area of an equal number of different actors and actresses. At each corner of every photo, four complex mental state descriptors (e.g., dispirited, bored) are printed, only one of which (the target word) correctly identifies the depicted person's mental state, while the others are included as foils. The test is scored by totaling the number of items (photographs) correctly identified by the participant; therefore, the maximum total score is 36. In the Italian version the internal consistency (Cronbach's alpha) was 0.605.
Test-retest reliability for the Eyes Task, as measured by the intraclass correlation coefficient, was 0.833 (95% confidence interval = 0.745-0.902). The study of Vellante et al. (2013) confirms the validity of the Eyes test: both internal consistency and test-retest stability were good for the Italian version.

The Emotion Attribution Task (Blair and Cipolotti, 2000) assessed the ability to represent the emotions of others. In this task, the participant was presented with 58 short stories describing an emotional situation and was required to provide an emotion describing how the main character might feel in that situation. The sentences were designed to elicit attributions of positive and negative emotions. The task was scored according to the number of correct attributions. For this test as well, validation studies are lacking (Mazza et al., 2007).

The Basic Empathy Scale-Affective Subscale (AE subscale, 11 items) measures emotional congruence with another person's emotions. Example items included "I get caught up in other people's feelings easily." Each item asks participants to express their degree of agreement on a 5-point Likert-type scale, ranging from 1 ("strongly disagree") to 5 ("strongly agree"). Cronbach's alpha was 0.86 (Albiero et al., 2009).

Multifaceted Empathy Test
To assess empathy multi-dimensionally, we administered the MET (Dziobek et al., 2008), a measure of empathy that allows separate assessments of cognitive and affective aspects of empathic functioning. This test consists of a series of photographs that depict people in emotionally charged situations. In these pictures, taken from the International Affective Picture System (IAPS; Lang, 1980), the stimuli show individuals feeling different emotions: positive emotions (25 pictures that include emotions such as happiness and positive surprise) and negative emotions (25 pictures that include emotions such as sadness, anger, and disappointment). Positive and negative emotions were presented in random order. All the stimuli were displayed on a black screen. For each picture the subjects were required to infer the emotional states of the individuals shown in the image by selecting one of four emotional state descriptors (cognitive empathy). To assess affective empathy, subjects rated their level of empathic concern for the individuals displayed in the images on a 9-point Likert scale.

The Mann-Whitney U test was used to analyse the significance of participants' scores on the first-order false belief task. T-tests were used to test for significant differences between groups (ASD and control) in socio-demographic, mentalizing (Advanced ToM Task and BES cognitive subscale) and experience sharing measures (Eyes Task, Emotion Attribution Task, and BES affective subscale). To evaluate the difference in MET performance between the two groups, a 2 × 4 repeated measures design was used. The assumption of normality of the outcome variable was assessed by carrying out a Kolmogorov-Smirnov non-parametric test. Restricted maximum likelihood estimation (REML) and an unstructured correlation structure were used. Marginal effects were calculated to estimate how the presence of ASD affects the scores for each model predictor. The overall statistical significance of the model was set at the 0.05 level. The Statistical Package for the Social Sciences (SPSS) software (version 22; SPSS Inc., Chicago, IL, USA) was used for calculating these statistics.
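As a minimal illustration of the group comparisons named above, the sketch below applies the same non-parametric and parametric tests to one measure; the scores are randomly generated placeholders, not study data, and SciPy is used here instead of SPSS.

```python
# Illustrative group comparison using the tests named above; the scores are
# random placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
asd = rng.normal(40, 5, 15)      # placeholder task scores, ASD group (n = 15)
control = rng.normal(48, 5, 15)  # placeholder task scores, controls (n = 15)

# Non-parametric comparison (as used for the first-order false belief task).
u, p_u = stats.mannwhitneyu(asd, control, alternative="two-sided")

# Parametric comparison (as used for the other paper and pencil measures).
t, p_t = stats.ttest_ind(asd, control)

print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")
print(f"t = {t:.3f}, p = {p_t:.4f}")
```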
RESULTS

In the first-order false belief task the groups differed significantly on the non-parametric Mann-Whitney U test (U = 86.5; Z = -5.44; p = 0.0001). The percentage of correct scores on the Washing Machine and Cigarettes Tasks was 25.8 and 45.2% for the ASD group, versus 100 and 80.8% for the controls, respectively. Mentalizing performance scores (means and SD) are reported in Table 1.

EXPERIENCE SHARING MEASURES

Adolescents with ASD showed lower scores compared to the control group on the Emotion Attribution Task total score (t(1,28) = -4.618; p = 0.001), with a significant difference for negative emotions (t(1,28) = -2.803; p = 0.011), but not for positive emotions (t(1,28) = -2.149; p = 0.068). Adolescents with ASD also showed lower scores compared to the control group on the Eyes Task (t(1,28) = -3.142; p = 0.004), but no significant difference in the BES affective subscale (t(1,28) = -7.38; p = 0.322) was found. Experience sharing performance scores (means and SD) are reported in Table 1.

Multifaceted Empathy Test
Normalized MET data were analyzed by a linear mixed model for a repeated measures design with REML. The analysis showed a significant group effect (z = -1.18 ± 0.18; p < 0.05).

(Table note: significant results are highlighted by bold numbers.)
FIGURE 1 | Marginal effects of the scores of adolescents with ASD and Controls.

DISCUSSION

The aim of this study was to investigate the empathy dimensions in a sample of adolescents with ASD. Specifically, in our research, we examined the empathic abilities of an ASD group compared to normal controls, using a variety of assessment instruments, both paper and pencil and a computerized task. Our data show that adolescents with ASD have a deficit in the cognitive empathy dimension, but do not differ from controls in the affective empathy dimension when other people express emotions with positive valence. Their difficulty in empathizing with the emotional experience of other people is linked to the sharing of emotions with negative valence. Specifically, the results obtained in the paper and pencil measures investigating mentalizing abilities reveal that the adolescents with ASD hardly interpret other people's mental states (first-order false belief and Advanced ToM Task) compared to controls. The ASD group also have trouble understanding the meaning of what other people are saying and doing, and they typically struggle to take the other person's perspective (BES cognitive subscale). The evaluation of mentalizing ability through false belief tasks is a key element in investigating mentalizing skills in individuals with ASD. Therefore, these data confirm that ToM is a core deficit in ASD (Fletcher-Watson et al., 2014; Lai et al., 2014), which links both to precursor skills, such as joint attention and emotion recognition, and to subsequent abilities such as creating friendships and social inclusion. Instead, regarding the experience sharing measures (involving affective empathy, shared self-other representations and emotional contagion), adolescents with ASD were able to empathize with the emotional experience of other people when the latter expressed emotion with a positive valence. In contrast, they showed a deficit in sharing negative emotions. Moreover, the ASD group were unable to share other people's emotions by observing their ocular region (Eyes Task). The results obtained in paper and pencil measures were confirmed by the computerized empathy task (MET; Dziobek et al., 2008).
The analysis of the affective and cognitive empathy measures evaluated through the MET showed significant differences between the adolescents with ASD and the control group in the cognitive empathy dimension, both for positive and for negative emotions. The cognitive empathic deficits of individuals with ASD could be due to a marked deficit in the ability to understand and explain the mental/emotional states of other people (Jones et al., 2010; Hirvela and Helkama, 2011; Samson et al., 2012; Schwenck et al., 2012; Lockwood et al., 2013). As far as affective empathy is concerned, the ASD group did not show difficulties in the degree of empathic concern when the emotion was positive, whereas the difficulty was present when observing emotional images with negative valence. The adolescents with ASD feel aroused and involved when others experience positive emotions, just as healthy subjects do. Therefore, our results, obtained with both kinds of measures (paper and pencil and the computerized task), suggest that the ASD subjects showed a difficulty in cognitively identifying the mental state of other people, regardless of the different emotions to which they had to respond; on the other hand, the deficit in affective empathy is linked to emotional valence. Several studies suggest that the processing of negative emotions is most difficult for individuals with autism (Howard et al., 2000; Ashwin et al., 2006; Corden et al., 2008; Wallace et al., 2008; Humphreys et al., 2013). The role of emotion in autism is still being debated. Ashwin et al. (2006) consider the difficulty of processing negative emotions in subjects with ASD to be linked to atypical function and structure of the amygdala. In their study, people with ASD were less accurate on the emotion recognition task compared to controls, but only for the negative basic emotions. This was discussed in the light of similar findings from people with damage to the amygdala. Based on our results, we assume that the impairment of experience sharing, or affective empathy, in adolescents with ASD is linked to their poor shared self-other representations of negative emotions. Blair (2008) has proposed that one of the key processes underpinning functional affective empathy is the recognition of other people's distress cues (i.e., fear and sadness). Past studies (Howard et al., 2000; Blair, 2008) have shown that children and adolescents with psychopathic tendencies have difficulties in recognizing negative facial and vocal expressions. Thus, it is not possible to speak of impairment of the affective empathy dimension in adolescents with ASD without considering the type of emotion to which the subject responds. Emotional contagion for the negative emotions of other people (such as sadness, distress, suffering, and anger) is important for adaptive social behavior. The lack of experience sharing when other people have negative emotions leads to a failure of appropriate empathic behavior in adolescents with ASD. Our results are important for the development of rehabilitation interventions that help these individuals to improve their social skills. These results are in agreement with recent literature (Jones et al., 2010; Baron-Cohen, 2011; Cox et al., 2012; Schwenck et al., 2012; Lockwood et al., 2013). In particular, Baron-Cohen (2011) shows that cognitive empathy is impaired but affective empathy is not in individuals with autism.
In contrast, in other psychological conditions, such as the psychopathic personality disorders (borderline personality disorder, narcissism, psychopathy), intact cognitive empathy and impaired affective empathy are present (Baron-Cohen, 2011). The lack of affective empathy, but not of cognitive empathy, seems to be an important factor in promoting violent and aggressive behaviors. In conclusion, empathy is a multidimensional construct and requires three abilities: first, the recognition of emotions in oneself and other people via facial expressions, gaze, or behavior; second, the sharing of emotional states with others, i.e., the ability to experience emotions similar to those of other people while being conscious that this is a simulation of the emotional feeling and not one's own emotion (Derntl et al., 2010); and finally, the ability to take the perspective of another person while the distinction between one's self and other people remains intact (Decety and Jackson, 2004). For this reason, it is important to use multiple instruments that allow us to capture all aspects of empathy. Our approach enabled a more detailed analysis of these empathic competencies, also considering the role of emotions in the empathic construct. We believe that this dissociation between cognitive and affective empathy is of importance for several psychiatric conditions that show impaired empathic ability, such as autism spectrum disorder, but also schizophrenia (Fujino et al., 2014) and post-traumatic stress disorder (Mazza et al., 2013). Replication with a larger sample of ASD subjects will be necessary to confirm the present findings.
Wheat yield estimation based on analysis of UAV images at low altitude

Information about the yield of wheat crops makes it possible to correctly assess their productivity and choose appropriate agronomic procedures to maximize yield. However, determining yields based on manual ear counts is labor intensive. Recently, UAVs have demonstrated high efficiency for rapid yield estimation. This paper presents a software package, WDS (Wheat Detection System), for counting ears in wheat crops based on RGB images obtained from UAVs. WDS creates the flight plan, automatically georeferences the acquired images to the appropriate fragment of the field, counts ears using neural network models, reconstructs the density of ears in the crop, and visualizes it as a heat map in an interactive web application. Based on a field experiment, the accuracy of ear counting in plots was assessed: the Spearman and Pearson correlation coefficients between the ear density counted manually and using WDS were 0.618 and 0.541, respectively (p-value < 0.05). WDS is available at https://github.com/Sl07h/wheat_detection.

Introduction

Wheat is one of the most important crops and feeds a significant portion of the world's population. In the process of its production it is necessary to constantly have information about the yield of crops, which makes it possible to properly assess their productivity and to adapt agronomic procedures to maximize the yield. Protocols for manually counting the density of ears in crops (number of ears per square meter) have long been the only way to estimate yields. However, this method is labour-intensive and time-consuming. An alternative is the development of automated systems operating in the field [1]. Most such systems obtain 2D images of crops and use computer vision methods for their automatic processing, in particular, for counting ears in the image. Modern methods of image analysis based on neural network algorithms and deep learning allow ears to be identified in an image of crops and counted with high accuracy [2][3][4]. The use of these technologies is justified by the lower cost and acceptable accuracy compared to the labour costs of manual human observation. In wheat yield estimation, stationary systems, ground-based robotic platforms and UAVs can be used to acquire images. The former allow high-quality images to be obtained, but only for a small area of crops. The greatest degree of mobility can be achieved with UAVs, but the images are of poorer quality, often blurred due to wind and engine vibration. These effects are particularly significant at the low UAV flight altitudes that must be used to obtain images with sufficient resolution to analyze the ears [5]. On the other hand, this interference makes it difficult to stitch the acquired images, which is needed to relate them to the spatial coordinates of the field. In this paper, we present a software package designed to perform wheat yield estimation in the field based on ear counts from UAV images.

Field experiments

We studied wheat crops in a field of SibNIIRS located near Novosibirsk, Russia (field coordinates: 54.875, 82.958). Sowing was performed on May 12, 2021; the crop was photographed on July 29, 2021, in the ear formation phase. Manual counting of ears was performed after harvesting from the area inside square frames with a side of 0.5 m (area 0.25 m²) dug in at the beginning of the season.
At the end of the season, all plants inside the frames were cut and the number of productive stems was counted. To reduce the influence of differing soil composition, each experiment was performed in four replications; their sum gives an estimate of ear density per square meter.

Flight plan preparation

A DJI Mavic 2 Pro UAV with a 20-megapixel camera was used. The flight plan was composed based on the following conditions: flight height of 3 meters; flight speed of 1.5 m/s; camera pointing downward; frame frequency determined from the condition that the images overlap over the total area by at least 50%. A Python script was developed to automate the construction of the flight plan when imaging a section of the field given by the coordinates of four vertices. The program takes as input the image resolution and camera view angle, the altitude and flight speed, the degree of image overlap, and the coordinates of the four field vertices in degree-minute-second format. The output is a csv file in which the flight route is recorded. This file is then loaded into the Litchi software (https://flylitchi.com), which is used to perform the UAV's flight. The algorithm for building a flight plan included the following steps:

1. The longest side of the field quadrilateral and its western point A are selected.
2. The remaining points are rotated around point A by the angle between the side and the horizon. As a result, the quadrilateral is oriented with its base parallel to the equator.
3. The number of flight paths is calculated based on the necessary overlap of images.
4. The intersections of the tracks are built, taking into account the image overlaps.
5. Step 2 is repeated, but in the opposite direction.

Linking images to field coordinates

During the flight, information about the UAV position is recorded in the metadata of each image file and includes the actual values of the coordinates, flight altitude, camera view angle, roll, pitch and yaw. Using these parameters, the developed script determines which area of the field was captured in the frame and whether there was a protocol violation at the moment of shooting (a significant deviation of the drone from the shooting route in coordinates or altitude). Images with protocol violations were excluded from further analysis during processing. The exiftool utility [6] and the exif library (https://pypi.org/project/exif/) were used to extract metadata. The metadata for all images of one overlap series were recorded in a single csv file. The coordinates of the border of the area captured in the frame are calculated in two steps (a minimal sketch of the first step is given after this section):

1. Based on the camera view angle and the height of the copter, we calculate the length, in meters, of the diagonal of the field fragment that appears in the image. Then, using the proportions of the image, we calculate the lengths of the sides of the field quadrilateral corresponding to this fragment.
2. We rotate the quadrangle corresponding to the field fragment in the image according to the azimuth value and shift it so that the position of its center corresponds to the coordinates of the copter.

The obtained coordinates of the vertices determine the localization of the field fragment that corresponds to the image. To display the results in the form of HTML pages, the folium library (https://pypi.org/project/folium) was used, which allows creating an interactive map with different layers.
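The first step of the georeferencing procedure, computing the ground footprint of a nadir image from the flight altitude and the camera's diagonal field of view, can be sketched as below. The FOV value in the example is an assumption and should be taken from the actual camera specification.

```python
# Sketch of step 1 above: ground footprint of a nadir image computed from
# the flight altitude and the camera's diagonal field of view (FOV).
import math

def footprint_size(altitude_m, diag_fov_deg, width_px, height_px):
    """Return (width_m, height_m) of the ground area covered by one image."""
    # Length of the footprint diagonal on the ground, in meters.
    diag_m = 2.0 * altitude_m * math.tan(math.radians(diag_fov_deg) / 2.0)
    # Split the diagonal into sides using the image aspect ratio.
    diag_px = math.hypot(width_px, height_px)
    return diag_m * width_px / diag_px, diag_m * height_px / diag_px

# Example: 3 m altitude, an assumed ~77 degree diagonal FOV (check the real
# camera specification), and a 5472 x 3648 pixel image.
print(footprint_size(3.0, 77.0, 5472, 3648))
```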
When calculating the density of ears, the coordinates of each ear were determined based on the spatial coordinates of the image projection and the scale. On this basis, the number of images in which a given ear could fall was determined. After that, each ear was assigned a weight inversely proportional to the number of images in which it was localized (a minimal sketch of this weighting is given at the end of this section). Then a grid with the given cell size is built over the area around the field, and for each cell the weighted number of ears falling in it is calculated and divided by the cell area. The obtained data are visualized in the form of a heat map. To calculate the number of ears in individual plots, manual marking of their coordinates in geojson format can be used.

Ear counting performance estimation

Two techniques were used to estimate the accuracy of wheat ear counting. Ear recognition accuracy was estimated from the neural network predictions on an additional sample of images that were added to the 2021 GWHD dataset (and not used in network training) [10]. The average precision (AP) and mean average precision (mAP) for bounding boxes identified by the neural networks with IoU over 50% were used, as described in [11]. The metric reflects how much overlap there is between the ear bounding boxes predicted by the model and those marked in the dataset; its value ranges from 0 to 100%. The performance estimation based on actual yield determination was done as described in section 2.1. After image processing and ear counting in the WDS system, we marked the coordinates of the plots, and the system counted the number of ears per plot. Knowing that the area of each plot is 24.75 m², we obtain the ear density. Pearson and Spearman correlation coefficients were used to compare the values obtained by manual counting and by our system.

Results and discussion

The structure of the developed WDS package is shown in Fig. 1: (A) construction of the flight plan over the quadrangular field defined by 4 points, (B) marking of plot boundaries, (C) counting the number of ears on each plot. The software package, instructions and installation scripts are available at https://github.com/Sl07h/wheat_detection. An example of visualizing the density of ears for an experimental field is shown in Fig. 2. Panel A shows the projections of the image frames on the field, including some images violating the protocol. The green frames correspond to the overlapping images of the field that were taken at a height of 3 ± 0.3 m with the camera rolled to the side by no more than 3 degrees. The red squares correspond to images that do not comply with the protocol (flight height of 6 meters and camera tilt not strictly downward). In panel B, the intensity of the green color shows the densities of ears for 9 plots of the field, the coordinates of which were set by the user. Panels C and D show the density of ears in test crops at different sizes of the visualized grid. The results of testing the models for recognizing and counting ears in the image are shown in Table 1. Model testing was performed on the 27 sets of images added to GWHD in 2021 [10]. Each set is provided by one of 10 institutions, and shooting conditions and the resulting performance metrics vary greatly across sets. The best accuracy (73.54 on the mAP metric) is provided by the EfficientDet model. The arithmetic mean mAP over these samples is 41.40 for Faster R-CNN and 37.51 for EfficientDet.
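The overlap-corrected density estimate described in the methods can be expressed in a few lines: an ear that could appear in k overlapping images contributes a weight of 1/k, so ears in overlap zones are not double-counted. The function below is a sketch with hypothetical inputs, not code taken from the WDS repository.

```python
# Sketch of the overlap-corrected ear density estimate described above;
# inputs are hypothetical, not taken from the WDS repository.
def ear_density(ear_image_counts, cell_area_m2):
    """Ears per square meter in one grid cell.

    ear_image_counts -- for each detected ear in the cell, the number of
    overlapping images in which that ear could have been captured (k >= 1).
    """
    weighted = sum(1.0 / k for k in ear_image_counts)
    return weighted / cell_area_m2

# Example: 12 ears in a 0.25 m^2 cell, half visible in one image and half
# in two overlapping images.
print(ear_density([1] * 6 + [2] * 6, 0.25))  # -> 36.0 ears per square meter
```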
A comparison of the ear density estimates made using our approach and those made manually showed that the Spearman and Pearson coefficient values between them are 0.6176 (p-value = 0.0013) and 0.5405 (p-value = 0.0064), respectively.

Conclusions

We have developed a software package to estimate wheat yield based on counting the number of ears in UAV images of wheat crops, which does not require image stitching. The software package makes it possible to form a flight plan for low-altitude flying over the crops (~3 m), to count the number of ears in each image with a deep learning neural network, to link the obtained images to the crop map, and to visualize the density of ears for the studied crops. We tested the package on wheat crops in the summer of 2021 and showed that the Spearman and Pearson correlation coefficients between the average values of ear density estimated by UAV and manually were more than 0.5.
Surgery for fibroadenoma arising from axillary accessory breast

Background Patients with fibroadenomas in axillary accessory breasts (AABs) have a palpable mass, cyclic axillary pain, and aesthetic concerns that must be addressed. We compared the baseline patient characteristics, AAB characteristics, and surgical outcomes of patients with AABs with and without fibroadenomas undergoing surgical excision. We also monitored the patients for recurrence of axillary fibroadenomas. Methods This retrospective study involved 2310 women who underwent AAB excision from 2014 to 2019. Patients with and without a palpable fibroadenoma were divided into a fibroadenoma group and non-fibroadenoma group, respectively. All patients underwent complete excision of accessory mammary gland (AMG) tissue, including fibroadenomas in the AABs. We removed the fibroadenoma and the AMG tissue with a minimal axillary incision. Results Thirty-nine patients had a palpable fibroadenoma in the AAB, and all patients in the fibroadenoma group had cyclic axillary pain and a palpable axillary mass. There were no significant differences in the patients' age, weight of the AMG tissue, liposuction volume, or fibroadenoma laterality between the two groups. The body mass index in the fibroadenoma group was lower than that in the non-fibroadenoma group (19.9 vs. 22.3 kg/m², respectively; P < 0.000). Concurrent fibroadenoma excision in a normal breast on the chest was performed more often in the fibroadenoma group than in the non-fibroadenoma group (35.9% (14/39) vs. 4.1% (92/2271), respectively; P < 0.000). The mean fibroadenoma size was 2.1 cm (range, 1.1–9.1 cm). All patients were satisfied with the degree of postoperative pain relief, disappearance of palpable lesions, and cosmetic improvement. No patients developed fibroadenoma recurrence. Conclusions Complete excision of the AMG tissue and fibroadenoma is appropriate in patients with an AAB with a fibroadenoma. Surgeons should also consider the high incidence of concurrent fibroadenomas in the normal breasts on the chest.

Background

An axillary accessory breast (AAB) occurs in 2-6% of women [1], and some patients require treatment for associated cyclic axillary pain or aesthetic concerns [2,3]. Rarely, patients present with a chief complaint of a palpable nonpainful axillary mass, which has been described in case reports as a fibroadenoma [4]. Periodic enlargement of an AAB and cyclic pain are the primary reasons for surgical treatment. A palpable mass may also be an indication for surgical treatment. Lee et al. established an AAB classification system based on the severity of the external appearance of the AAB and recommended treatment by complete accessory mammary gland (AMG) excision with liposuction of the supramammary fat layer [5,6]. To the best of our knowledge, although several case reports describing treatment for fibroadenomas arising from AABs have been published, no guidelines have been published and no large studies have been performed. Fibroadenomas in a normal breast on the chest (chest normal breast (CNB)) are usually asymptomatic; however, patients with fibroadenomas in an AAB may have cyclic pain and aesthetic concerns as typical symptoms of an AAB secondary to the presence of AMG tissue. For successful treatment of fibroadenomas in an AAB, we must determine whether the problem is limited to the fibroadenoma or is accompanied by pain and aesthetic concerns caused by the AMG tissue.
Treatment of a palpable fibroadenoma in the CNB involves excision of the fibroadenoma and preservation of the mammary gland [7], but treatment of a fibroadenoma in an AAB should resolve the pain and aesthetic concerns caused by the AMG tissue. Therefore, complete excision of both the AMG tissue and fibroadenoma is necessary to treat fibroadenomas arising from an AAB. With this approach, it is possible to prevent fibroadenoma recurrence in the AAB. In this study, we compared the baseline patient characteristics, AAB characteristics, and surgical outcomes of patients with AABs with and without fibroadenomas undergoing surgical excision. We also monitored the patients for recurrence of axillary fibroadenomas.

Methods

We retrospectively analyzed the data of 2310 patients with an AAB treated in Damsoyu Hospital, Seoul, Republic of Korea from January 2014 to October 2019. The inclusion criteria for patients with an AAB who were candidates for surgical treatment were a > 3-year history of either or both of (1) AAB enlargement and axillary pain and (2) persistent AAB enlargement and related psychosocial/emotional distress, together with (3) no malignant tumor in the CNB or AAB. Classification of the AAB was performed in accordance with the Damsoyu-Lee classification [5]. All patients underwent ultrasonography and pathological confirmation of lesions in the CNB and AAB. The ultrasonographic findings of a fibroadenoma in the AAB were characterized by the same oval hypoechoic appearance as seen with a fibroadenoma in the CNB (Fig. 1). The axillary tail of Spence connected to the CNB was excluded from the AAB. All patients with a Breast Imaging Reporting and Data System (BIRADS) category ≥ 4 mass in the CNB or AAB underwent core biopsy before surgery. After confirming the pathology by preoperative core biopsy, all ≥ 2-cm palpable fibroadenomas in the CNB were removed during AAB excision. Lesions of BIRADS category < 3 were followed up regularly. The fibroadenoma diagnoses were confirmed in the post-excision final pathologic examination. When cancer was histopathologically diagnosed, we referred the patient to a tertiary center. In this study, 14 patients had cancer in the CNB and no patients had cancer in the AAB. We divided the patients into a fibroadenoma group (n = 39) and a non-fibroadenoma group (n = 2271) according to the presence of a fibroadenoma in the AAB. All patients underwent complete excision of the AMG tissue, including the fibroadenoma. We collected the following data from the patients' medical records: age at surgery; body mass index; symptom characteristics, including onset; type and duration of surgery; histopathological results; size, location, and number of fibroadenomas; AMG tissue weight; liposuction volume; postoperative complications; and satisfaction score. The onset of AAB symptoms was divided into two categories: after puberty (from the time of puberty, when secondary sexual characteristics appeared and menstruation began) and after pregnancy (no AAB symptoms before marriage; AAB symptoms appeared only after pregnancy). All patients completed post-hoc satisfaction surveys evaluating their appearance and axillary pain 6 months after surgery.

Fig. 1 Sonographic findings of a fibroadenoma in an axillary accessory breast. a Accessory mammary gland (AMG) (dashed outline) surrounding an oval-shaped fibroadenoma. In the image, the top of the fibroadenoma is close to the skin, and at the bottom of the fibroadenoma, the AMG is visible surrounding the fibroadenoma. b AMG (dashed outline) surrounding an elliptical-shaped fibroadenoma; the AMG surrounds the fibroadenoma.
We evaluated fibroadenoma recurrence at the AAB site and overall satisfaction with the decreased level of axillary pain and alleviation of aesthetic concerns using a 5-point Likert scale in all patients [8].

Surgical technique and follow-up

We adopted a previously reported AAB excision method through a 1-cm incision on the axillary crease and performed all procedures with the patients under general anesthesia [5,6]. For small fibroadenomas, we performed en bloc resection with the AMG tissue (Fig. 2). For fibroadenomas measuring > 3 cm, we first excised the fibroadenoma and then excised the AMG tissue (Fig. 3) to reduce the severity of the incision scar. In patients with giant fibroadenomas, we removed the fibroadenoma by peeling to reduce the size of the resulting scar (Fig. 4). We first instilled a tumescent solution through the incision into the accessory breast tissue. This was followed by liposuction with a power-assisted device. After liposuction of the supramammary fat layer, we performed complete AMG tissue excision and closed the incision with subcuticular absorbable sutures. The need for redundant skin excision was determined after 6 months [6]. An external drain was not inserted. After closing the wound, we applied adhesive skin closures (Steri-Strip; 3M Health Care, Maplewood, MN, USA) to the incision. Follow-up examinations were routinely performed at 1, 3, and 6 months postoperatively.

Statistical analysis and assessment methods

All statistical analyses were performed using R software version 3.6.1 (R Development Core Team, Vienna; http://www.R-project.org). Continuous variables are presented as mean and range, and categorical variables are presented as frequency and percentage. The Shapiro-Wilk test was used to assess the normality of continuous variables. The t-test or Wilcoxon's rank-sum test was used for continuous variables, and the χ² test was used for categorical variables (a minimal sketch of this test-selection rule is given below). A P value of ≤ 0.05 in the univariate analysis was considered statistically significant. We administered 6-month postoperative surveys evaluating the patients' satisfaction with their appearance and axillary pain compared with the preoperative levels and measured the patients' satisfaction with the scar, pain, and appearance separately using a 5-point Likert scale (1: continuous pain and very unsatisfactory appearance; 2: some pain and unsatisfactory appearance; 3: neutral; 4: little pain and satisfactory appearance; and 5: no pain and very satisfactory appearance) [8].

Clinical characteristics and skin excision outcomes between fibroadenoma and non-fibroadenoma groups

The patients' clinical characteristics and surgical outcomes are summarized in Table 1. We compared the baseline patient characteristics and AAB characteristics between the fibroadenoma and non-fibroadenoma groups and found no differences in age, AMG tissue weight, liposuction volume, family history, or fibroadenoma laterality between the two groups. The body mass index in the fibroadenoma group was lower than that in the non-fibroadenoma group (19.9 vs. 22.3 kg/m², respectively; P < 0.000). All patients in the fibroadenoma group had axillary pain secondary to the AMG tissue, but there was no significant difference in pain between the two groups.
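The authors performed these analyses in R; purely as an illustration of the test-selection rule just described, the following is a sketch in Python with scipy. The function name compare_groups and the example counts are hypothetical.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Illustrative test selection: t-test when both samples look normal
    (Shapiro-Wilk), otherwise Wilcoxon's rank-sum test."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    return stats.ttest_ind(a, b) if normal else stats.ranksums(a, b)

# Categorical variables (e.g. symptom onset) use the chi-squared test.
onset = [[31, 8], [1295, 976]]          # illustrative 2x2 counts
chi2, p, dof, expected = stats.chi2_contingency(onset)
```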
The onset of AAB symptoms was more frequent after puberty in the fibroadenoma group (79.5%) than in the non-fibroadenoma group (57.0%) (P = 0.008), and the number of patients who underwent fibroadenoma excision in the CNB was greater in the fibroadenoma group than in the non-fibroadenoma group (35.9% (14/39) vs. 4.1% (92/2271), respectively; P < 0.000). Five patients had fibrocystic change in the AAB; none had a malignant tumor. The postoperative complication rate, redo operation rate, and satisfaction score were not significantly different between the two groups, and the satisfaction scores for aesthetic concerns and axillary pain were > 4.7 in both groups. No patients developed fibroadenoma recurrence in the axilla during the average follow-up period of 41 months.

Clinical characteristics of AABs with fibroadenomas

The characteristics and surgical outcomes of patients with a fibroadenoma in an AAB are summarized in Table 2. Fibroadenomas were present in bilateral AABs in 1 patient, in a right-sided AAB in 21 patients, and in a left-sided AAB in 17 patients. One patient had two fibroadenomas, and all other patients had a single fibroadenoma. The largest fibroadenoma measured 9.1 cm in diameter, and the mean fibroadenoma diameter was 2.1 cm (range, 1.1-9.1 cm). All patients were satisfied with the degree of postoperative pain relief, disappearance of palpable lesions, and cosmetic improvement. No patients developed fibroadenoma recurrence during follow-up.

Discussion

Benign and malignant tumors may occur via pathological mechanisms in an AAB, as they do in the CNB [9]. Fibroadenoma is a common benign tumor characterized by a nodule of fibrous tissue with epithelial elements [10]. Although no large studies of fibroadenomas in AABs have been performed to date, one report indicated that the prevalence of fibroadenomas was 2.0% (11/540) among patients with AABs who underwent surgery [5]. Other benign tumors and cancers other than fibroadenoma have been reported in AABs, but such lesions are very rare [5,11]. The main symptom of a fibroadenoma in an AAB is a palpable mass, and some patients present for evaluation because of a fear of cancer. When we examine patients with fibroadenomas, there are usually no symptoms directly related to the fibroadenoma; however, most patients have cyclic pain caused by the AMG tissue. In the present study, all 39 patients with a fibroadenoma in the AAB had cyclic pain. Such pain often occurs with cyclic swelling secondary to hormonal changes during menstruation and pregnancy, and the severity may be so great that the patients request surgical treatment [5,12]. Several reports have stated that the aim of AAB surgery is to improve the patient's appearance or to relieve axillary pain [2,13]. Complete treatment of an AAB involves removing all AMG tissue [14], which may be accompanied by liposuction to reduce scars. Complete excision of the AMG tissue should be performed to prevent recurrence [12,15]. In this study, the postoperative satisfaction surveys administered 6 months postoperatively revealed scores of ≥ 4.7 for axillary pain and aesthetic improvement in both groups, indicating that both the axillary cyclic pain and aesthetic concerns were almost eliminated.

Fig. 4 Photographs of a 28-year-old woman who underwent surgery to remove a 9.1-cm giant fibroadenoma arising from an axillary accessory breast (AAB). a Preoperative frontal appearance. The right side was misshapen by the giant fibroadenoma, and accessory mammary gland (AMG) tissue in both axillae was confirmed using ultrasonography. b Frontal view 6 months postoperatively.
c Preoperative right axillary appearance with arms abducted. d Right axillary appearance 6 months postoperatively. e We removed the fibroadenoma by incising the skin and peeling away the fibroadenoma. f The fibroadenoma was completely removed from the axilla through a 1-cm skin incision. g Fibroadenoma specimen. h The AMG tissue was also removed through the 1-cm incision. The image on the left shows the AMG removed from the right AAB, and the image on the right shows the AMG removed from the left AAB.

A fibroadenoma may recur secondary to remnant tumor tissue regrowth after mass excision, or a new fibroadenoma may develop in the remaining AMG tissue. One study revealed a 15% recurrence rate 5 years after fibroadenoma removal in the CNB [16]. In the current study, the mean follow-up period was 48.2 months, and no patients developed fibroadenoma recurrence. Eliminating all AMG tissue in addition to fibroadenoma excision can reduce fibroadenoma recurrence. Patients with a fibroadenoma in the CNB develop cancer more often than patients without a fibroadenoma [10], and patients with a fibroadenoma in the AAB may also develop cancer in the AAB. In our study, no patients had cancer in the AAB at the time of surgery, and no cancer developed in the axilla during the postoperative follow-up. Therefore, complete treatment of a fibroadenoma in an AAB requires both fibroadenoma excision and complete excision of the AMG tissue, which is the source of the fibroadenoma development and which causes axillary pain. This study is clinically meaningful in that it is the first large study to present a treatment guideline for fibroadenomas in AABs. This study had several limitations. First, our hospital does not treat malignant tumors and does not study carcinoma because patients with breast cancer are referred to a tertiary center when breast cancer is diagnosed preoperatively. Additionally, our hospital does not provide overall treatment for breast disease; instead, it is an institution that specializes in AAB treatment. No statistics are available on the prevalence of AAB because our hospital does not perform breast examinations for other breast diseases. This is why the prevalence of AAB in the present study may seem misleadingly high. Ethnic and geographical differences should be investigated in future studies. Second, the number of patients with fibroadenomas was relatively small in this study. Thus, larger studies involving higher numbers of patients are necessary. Additionally, although accessory breasts may occur in areas other than the axilla, only AABs were included in this study. Finally, patients without cyclic pain may not seek medical care; therefore, the prevalence of fibroadenomas in AABs may be lower than indicated in our study.

Conclusions

We recommend complete excision of the AMG tissue including the fibroadenoma, which is appropriate in patients with a fibroadenoma in an AAB. Patients with a fibroadenoma in an AAB are at increased risk of also having fibroadenomas in the CNB. Importantly, our results

Table 1 notes: Data are presented as mean (range), n (%), or n. *Most P-values represent comparisons between categorical variables, which were tested using the χ² test; continuous variables were tested using the Wilcoxon rank-sum test and t-test. a Satisfaction score was measured according to scar, pain, and cosmesis level scores 6 months postoperatively using a 5-point Likert scale. AAB, axillary accessory breast; BMI, body mass index; DL classification, Damsoyu-Lee classification
Learning About Your Mental Health From Your Playlist? Investigating the Correlation Between Music Preference and Mental Health of College Students

The present study explored the correlation between music preference and mental health of college students to make an empirical contribution to research in this field. The self-reported music preference scale and positive mental health scale of college students were adopted to conduct a questionnaire survey among college students. A common method variance test was conducted to check for any serious common method bias problem, and none was observed. The results showed that college students' preference for pop music, Western classical music, and Chinese traditional music has a significant and positive correlation with their mental health. Furthermore, college students' preference for heavy music has a significant and inverse correlation with their mental health. This research presents a correlational study; therefore, no causality can be inferred.

INTRODUCTION

The World Health Organization statement "there is no real health without mental health" has long become a global principle (WHO, 2013). The mental health of college students and the measures that should be adopted by universities to deal with this problem have become a major concern (Castillo and Schwartz, 2013). Some researchers have reported that the mental health problems of college students are becoming increasingly complex and serious (Pledge et al., 1998; Benton et al., 2003). The increased prevalence of symptoms of depression, anxiety, eating disorders, and other mental diseases among college students points toward a mental health crisis. Therefore, conducting active surveys and seeking possible solutions to meet the needs of college students are imperative (Lattie et al., 2019). Accordingly, the key factors influencing college students' mental health should be explored to provide an empirical basis for improving their mental health. Music can help college students adjust their mental state, release inner stress and pain, and express happiness, thereby serving as a means of decompression (Huang et al., 2020). It can also evoke inner feelings and guide emotions, which are the key factors influencing individuals' cognition, decisions, and actions (Koelsch, 2015). Researchers have been paying increasing attention to students' music preference (Schwartz and Fouts, 2003; Ballmann, 2021). Earlier, some researchers defined music preference as people's preference for certain music in the face of two or more choices (Rentfrow and Gosling, 2003). In addition, some researchers argued that music preference is a judgment on the overall music stimulation, made by individuals after listening to the whole piece of music, which may persist long after listening to the music (Brattico and Pearce, 2013). This study adopted the latter view and defined music preference as the degree of preference of college students for a specific music pattern after the overall music stimulation. Schwartz and Fouts (2003) classified music preference into three types, namely heavy music, light music, and compromised music. Liu (2020), a Chinese researcher, classified music preference into classical music, pop music, and Chinese folk music. Rentfrow and Gosling (2003) categorized music preference into five types, namely Chinese traditional music, percussion/Hip-hop music, pop music, classical music, and opera.
In the present study, music preference was classified into pop music, Western classical music, Chinese traditional music, and heavy music because the studied cohort in this study was Chinese college students. The study explored the music preference of Chinese college students in the context of Chinese culture. Some researchers have asserted that people exposed to positive music hold a positive attitude and exhibit less negative experience, thus presenting a better mental state (Yuan, 2020). In other words, the mental state and mood of individuals who prefer to listen to positive music are better. However, these individuals may feel sad or depressed when they listen to music that is not of their choice (Rentfrow, 2012). Janata et al. (2012) reported that the correlation of music preference with relieving stress and regulating psychology varies among individuals. Empirical studies have found that music preference has a significant correlation with mental health (Carlson et al., 2015). Music can invoke various emotions, which are reflected in individuals' physiological signals and are correlated with mental health (Rahman et al., 2021). This study inferred that college students' music preference may exhibit a correlation with their mental health. Most studies on students' music preference have been conducted in Western countries (Carlson et al., 2015). However, the Western culture differs greatly from the Chinese culture (Rentfrow and Gosling, 2003). Therefore, this study considered Chinese college students as the sample and adopted the Chinese local music preference scale to explore the correlation between music preference and mental health of Chinese college students, making an empirical contribution to research on music preference and mental health.

THEORETICAL BACKGROUND AND HYPOTHESES

According to Stimulus-Organism-Response theory, individuals' responses are triggered by their internal emotional state after being stimulated (Mehrabian and Russell, 1974). The theory holds that most changes in the environment act as a stimulus for an individual, which results in the transmission of information to the individual's nervous system, leading to reactions of the muscles or psychology (Bergius, 1994). Some researchers adopted this theory to explore the relation between music and individual psychology (Juslin and Västfjäll, 2008), whereas other researchers adopted the theory to explore the relation between the music environment and individual behavior intention (Zhuang et al., 2020). Music is one such stimulus, and listeners may exhibit preference for music when the stimulation of the music is consistent with their mental state (Droe, 2006). In addition, music preference is correlated with individuals' emotions and mental health. Therefore, we adopted this theory to determine the correlation between music preference and mental health of college students.

Pop Music

Pop music involves music works with popular content and sincere emotions that can be accepted, preferred, appreciated, and sung by listeners (Shuker, 2013). Pop music style can help listeners express their values and abilities, which in turn helps them gain recognition and make new friends, thus resulting in positive psychology (Schäfer et al., 2012).
Pop music can also help teenagers express their values, aspirations, beliefs, and views on the world, as well as explore and express their identities and increase understanding of their thoughts and feelings, which play a crucial role in their development (North and Hargreaves, 1999; Rentfrow and Gosling, 2006). This music form is also associated with the empathy of teenagers, can enhance interpersonal relationships, and helps them explore others' personalities. It serves as a medium to facilitate communication and common activities, which are conducive to individuals' physical and mental health (Lull, 1987; Rentfrow and Gosling, 2006). Therefore, we proposed H1: College students' preference for pop music is significantly and positively correlated with their mental health.

Western Classical Music

Western classical music has been reported to have a positive correlation with individuals' mental health and daily behavior, as well as to reduce anxiety and alleviate depression in patients with psychiatric disorders such as schizophrenia (Harmat et al., 2008; Rahman et al., 2021). Moreover, classical music has been shown to have a positive correlation with individuals' mental health and daily behavior by controlling their attention level (Baldwin and Lewis, 2017; Rahman et al., 2021). Some researchers have proved that classical music can exert a strong decompression effect, helping listeners in relieving tension and relaxing (Burns et al., 2002). Individuals may seek excitement in music that is similar to their emotions. A study indicated that listening to classical music increases the sense of relaxation and is conducive to the reduction of negative emotions, which is beneficial to people's mental health (Rea et al., 2012). Therefore, we proposed H2: College students' preference for Western classical music is significantly and positively correlated with their mental health.

Chinese Traditional Music

Chinese traditional music generally refers to various forms of Chinese music that have been passed on through generations, with national characteristics and creativity (Yung, 2019). Previous studies have shown that traditional music can help young people improve their sense of identity, motivate themselves, awaken their ability to express values, and obtain opportunities to know other people (Schäfer and Sedlmeier, 2009; Schubert et al., 2020). Some studies have indicated that Chinese traditional music can achieve the goal of aesthetic education, implying that the increased positive emotions in Chinese traditional music can gradually convert even negative and sad experiences into positive and lively ones, thereby providing physiological feedback to individuals (Liu et al., 2021). Listening to Chinese traditional music can trigger the brain to produce endorphins to regulate unpleasant feelings and emotions, thus improving mental health (Lin et al., 2019). According to the aforementioned studies, Chinese traditional music is closely related to the positive emotions of college students. Therefore, we proposed H3: College students' preference for Chinese traditional music is significantly and positively correlated with their mental health.

Heavy Music

In general, heavy music comprises all music styles characterized by metal music, including rock music, heavy metal music, and rap music, which usually produce loud and fast melodies to express intense emotions such as madness and roughness (Larson, 1995).
Views on the relation between heavy music and mental health in previous studies have been inconsistent and contradictory. Some researchers have argued that individuals can release their emotions, anxiety, and anger after listening to heavy music (Martin et al., 1993). In addition, this music form can help listeners believe that they are not alone emotionally by helping them find solace and increase their sense of connection (Arnett, 1991). Conversely, other researchers have asserted that people who prefer heavy music experience more adverse outcomes such as depression, anxiety, and drug addiction, which can deteriorate their mental health (Miller and Quigley, 2012; Shafron and Karno, 2013; Monteiro et al., 2021). A few music fans reported worse experiences after listening to heavy music and were prone to suicidal thoughts or self-destructive behaviors due to negative emotions produced by the lyrics or the artistic conception of the music's theme (Miranda et al., 2012). Therefore, the contradictory and inconsistent research results warrant empirical research for deeper exploration. In the context of Chinese culture, domestic studies have found an association between heavy music and anxiety of Chinese college students. Students listening to heavy music are prone to become angry, irritated, and even self-doubting and depressed, which is not conducive to their mental health (Xu et al., 2010). Because of cultural differences between China and the West, Chinese college students' preference for heavy music and their mental health may exhibit an inverse correlation. Therefore, we proposed H4: College students' preference for heavy music is significantly and inversely correlated with their mental health.

Sample and Procedure

The scales of music preference of college students and positive mental health were adopted to conduct a questionnaire survey among college students in Shenzhen, Guangdong, China. A total of 139 students were selected through convenience sampling. The reliability and exploratory factor structure of the scales were analyzed using the pilot test data. Then, the data were collected from the students of three education reform pilot universities in Shenzhen, and all students participated voluntarily and anonymously. A total of 390 questionnaires were distributed, and 380 valid samples were obtained after removing 10 invalid samples, with a recovery rate of 97.4%, which met the sampling standard. Of the total, 198 participants were women and 182 participants were men. Additionally, the studied cohort comprised 81 freshmen, 116 sophomores, 118 junior grade students, and 65 senior grade students. The common method variance and correlation analyses were conducted on the formal test data.

Music Preference Scale of College Students

To understand the music preference of Chinese college students, we first interviewed 40 college students in Shenzhen regarding the types of music they enjoy listening to in daily life, using the scale developed by Liu and Wu (2018) and Larson (1995). We categorized the music preference scale into four dimensions in accordance with the interview results: (1) Pop music (Chinese mainland pop music, Hong Kong, Macao and Taiwan pop music, and Western pop music); (2) Western classical music (chamber music, symphony, and foreign piano works); (3) Chinese traditional music (Chinese folk songs, Chinese folk music, Chinese traditional opera and folk art); and (4) Heavy music (rock music, metal music, and rap music).
The scale includes a total of 12 items, which are scored using the Likert five-point scoring method. The scores 1, 2, 3, 4, and 5 on the scale denote very dislike, dislike, between like and dislike, like, and very like, respectively.

Positive Mental Health Scale

The positive mental health scale developed by Lukat et al. (2016) was adopted, which comprises nine items and one dimension. The scale was scored using the Likert five-point scoring method, with 1 = strongly disagree; 2 = disagree; 3 = between disagree and agree; 4 = agree; and 5 = strongly agree.

Exploratory Factor Analysis

As shown in Table 1, the pilot test data were adopted to conduct an Exploratory Factor Analysis (EFA). According to the results, the Kaiser-Meyer-Olkin (KMO) value was .816, and the significance of Bartlett's Test of Sphericity was p < 0.001. When the KMO value is greater than 0.8 and Bartlett's Test of Sphericity is significant at p < 0.05, the data are suitable for EFA (Kaiser and Rice, 1974). Therefore, varimax rotation was adopted to conduct the EFA on the music preference scale. The rotated component matrix showed that four factors had eigenvalues >1, and their factor loadings ranged from 0.791 to 0.881, meeting the criterion of factor loading greater than 0.3 (Zaltman and Burger, 1975). The explained variance ratios of pop music, Western classical music, Chinese traditional music, and heavy music were 11.473, 7.207, 5.170, and 8.686%, respectively, and the cumulative explained variance ratio was 85.000%. (Table 1 note: the cumulative explained variance ratio = 74.287%.) As shown in Table 2, the pilot test data were adopted to conduct the EFA. The results showed that the KMO value was .965, and the significance of Bartlett's Test of Sphericity was p < 0.001, indicating the suitability of the data for EFA (Kaiser and Rice, 1974). Therefore, varimax rotation was adopted to conduct the EFA on the mental health scale. The rotated component matrix showed that one factor had an eigenvalue >1, and its factor loadings ranged from 0.841 to 0.885, meeting the criterion of factor loading greater than 0.3 (Zaltman and Burger, 1975). The cumulative explained variance ratio was 48.741%.

Reliability Analysis

The pilot test data were adopted to conduct the reliability analysis on both the music preference scale of college students and the positive mental health scale. The reliability analysis of each dimension of the music preference scale indicated that the Cronbach's α of pop music, Western classical music, Chinese traditional music, and heavy music was 0.931, 0.898, 0.892, and 0.836, respectively, indicating good reliability of all dimensions. Cronbach's α of the positive mental health scale was 0.956, which indicated good reliability of the scale. The total Cronbach's α of the questionnaire was 0.926, which indicated that the questionnaire also had good reliability.

Data Analysis

AMOS version 22.0 was adopted to conduct a confirmatory factor analysis (CFA) on the formal questionnaire results. In addition, the common method deviation test was conducted on the questionnaire by using SPSS version 22.0. The correlation between college students' music preference and their mental health was determined using the Pearson correlation analysis.
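The authors ran these analyses in SPSS and AMOS; purely as an open-source illustration of the suitability checks and the varimax-rotated EFA described above, here is a minimal Python sketch using the factor_analyzer package. The csv file name is hypothetical, and the data frame is assumed to hold one Likert-scored item per column.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# One row per respondent, one column per questionnaire item (scored 1-5).
items = pd.read_csv("music_preference_items.csv")  # hypothetical file

# Suitability checks used above: KMO > 0.8 and Bartlett's test p < .05.
chi2, bartlett_p = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)

# Varimax-rotated EFA with four factors (the four music-preference types).
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
loadings = fa.loadings_                        # item-by-factor loadings
_, _, cumulative = fa.get_factor_variance()    # cumulative explained variance
```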
Confirmatory Factor Analysis and Reliability Analysis

Confirmatory factor analysis was conducted on the formal questionnaire results of the music preference scale, and the results showed that χ²/df = 1.283, RMR = 0.022, GFI = 0.974, AGFI = 0.957, NFI = 0.982, IFI = 0.996, TLI = 0.994, CFI = 0.996, and RMSEA = 0.027, indicating that the measurement model fitted the observations well (McDonald and Ho, 2002). The factor loadings ranged from .822 to .918, which is consistent with the standard. The CR of pop music, Western classical music, Chinese traditional music, and heavy music was .919, .910, .884, and .929, respectively, all of which are greater than the reference value of 0.6, meeting the standard. The AVE of pop music, Western classical music, Chinese traditional music, and heavy music was .791, .772, .718, and .814, respectively, which is consistent with the standard and indicates good convergent validity. Cronbach's α of pop music, Western classical music, Chinese traditional music, and heavy music was .918, .909, .883, and .929, respectively, indicating good reliability. In addition, the CFA was conducted on the formal questionnaire results of the mental health scale, and the results showed that χ²/df = 1.774, RMR = 0.015, GFI = 0.973, AGFI = 0.955, NFI = 0.985, IFI = 0.993, TLI = 0.991, CFI = 0.993, and RMSEA = 0.045, indicating that the measurement model fitted the observations well (McDonald and Ho, 2002). The factor loadings ranged from 0.817 to 0.870, and the CR of the total scale was 0.957, which is greater than the reference value of 0.6 and consistent with the standard. The AVE of preference for pop music was 0.711, which is greater than the reference value of 0.5 and consistent with the standard, thus indicating good convergent validity (Anderson and Gerbing, 1988). Cronbach's α of the total scale was 0.957, which indicated good reliability.

Common Method Variance

To test for common method bias, Harman's one-factor test was used (Harman, 1976). The eigenvalues of five factors in the present study are greater than 1, and the first factor explains only 45.398% of the variance, which is far less than the 50% threshold for accumulated explanatory variance. Therefore, no serious common method bias problem exists in this study. Table 3 shows that the correlation was significant and positive between pop music and mental health (r = 0.515, p < 0.001); between Western classical music and mental health (r = 0.453, p < 0.001); and between Chinese traditional music and mental health (r = 0.435, p < 0.001), whereas the correlation was significant and inverse between college students' preference for heavy music and their mental health (r = −0.215, p < 0.001). Moreover, the correlation coefficients between all dimensions of college students' music preference and their mental health were all less than 0.8, indicating the absence of an excessively strong correlation and thus of any serious collinearity.

DISCUSSION AND CONCLUSION

This study showed that college students' preference for popular, Western classical, and Chinese traditional music exhibited a significant positive correlation with their mental health. Similar findings have been reported in the literature. Popular music can help young people develop relationships as well as explore the personalities of others, provide a means for communication and common activities, and benefit their physical and mental health (Xu, 2021).
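As an illustration of Harman's one-factor test and the correlation analysis reported above, here is a minimal Python sketch (not the authors' SPSS workflow); the file name and the column-selection convention are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from scipy import stats

items = pd.read_csv("all_scale_items.csv")     # hypothetical: all 21 items

# Harman's one-factor test: load all items on a single unrotated factor and
# check the share of variance it explains (< 50% suggests no serious bias).
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
first_factor_share = fa.get_factor_variance()[1][0]

# Pearson correlation between a preference score and the mental health score,
# each computed as the mean of its items (hypothetical column prefixes).
pop_score = items.filter(like="pop_").mean(axis=1)
pmh_score = items.filter(like="pmh_").mean(axis=1)
r, p = stats.pearsonr(pop_score, pmh_score)
```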
Listening to Western classical music can help people release stress as well as relax their mind and body and contribute to their positive mental health (Rea et al., 2012). Chinese traditional music is characterized by nationality, inheritance, and regional characteristics, and students who listen to it may feel more familiar with it and relaxed (Liu et al., 2021). In this study, college students' preference for heavy music was found to have a significant and inverse correlation with their mental health. This finding is also consistent with those of some previous studies (Martin et al., 1993; Larson, 1995; Chen et al., 2006). Previous research has shown that people who prefer heavy music often experience many psychological problems and are highly prone to developing negative emotions such as anger (Martin et al., 1993; Larson, 1995; Chen et al., 2006). Some researchers have reported that heavy music is correlated with the internalization problems of teenagers, such as depression, loneliness, anxiety, and stress (Bask, 2015; Schoemaker et al., 2019). However, the result of this research differs from that of some studies, which have reported that the preference for heavy music has a positive correlation with people's mental health (Arnett, 1991; Quinn, 2019). This may be due to cultural differences between China and Western countries. Chinese college students are more conservative and introverted than Western college students, who are open-minded and extroverted (Yang, 2012). Chinese college students lack the ability to release themselves (Zhou, 2005). A recent empirical study in China also found that Chinese college students tend to become vulnerable, lonely, and even irritable and present more psychological problems after listening to heavy music (Wang, 2021).

THEORETICAL CONTRIBUTIONS

This study focused on the relationship between music preference and mental health of Chinese college students. The results showed that the correlation between college students' preference for pop, Western classical, as well as Chinese traditional music and mental health was significant and positive. By contrast, the results showed an inverse association between heavy music and mental health. This study has made three major contributions. First, limited empirical studies have been conducted on the relation between music preference and mental health over the past years, and most of these studies were observational, focusing on mental diseases and patients' mental state through testing or clinical trials (McFerran et al., 2018; Huang and Li, 2022). However, the present study involved a questionnaire survey for discussion and statistical analysis. Second, the music preference scales adopted in previous empirical studies were different owing to cultural differences between Chinese and Western countries (Schwartz and Fouts, 2003; Xu et al., 2010). However, the localized Chinese music preference scale adopted in this study helped us better understand the music preference of Chinese college students. Third, most of the previous studies were conducted in the Western context (Schwartz and Fouts, 2003), whereas the present study explored the correlation between Chinese college students' music preference and their mental health by taking Chinese college students as the sample and in the context of Chinese culture. This study showed that the music preferences of college students for different music forms have different correlations with their mental health.
The results of this study can provide evidence for further empirical studies on the correlation between music preference and mental health.

LIMITATIONS AND FUTURE RESEARCH DIRECTIONS

The following limitations of this study should be considered when interpreting the findings. First, this study was based on cross-sectional data, and causality cannot be inferred. Future research should use a longitudinal design or cross-lagged panel analyses and apply an experimental approach to explore causality. Second, the study sample consisted of college students selected using the convenience sampling method; therefore, the findings should be cautiously generalized to other samples. Finally, the study data are based on self-reports; hence, future research should collect qualitative data to understand the association of college students' music preferences with mental health for a thorough analysis of the findings.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by Graduate University of Mongolia. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
JACIE and Quality Management in HSCT: Implications for Nursing

Laboratory medicine, along with the airline industry, has a long history of utilising quality management systems. It took until 1999 for the Joint Accreditation Committee of the International Society for Cellular Therapy (ISCT) and the European Group for Blood and Marrow Transplantation (EBMT), known as JACIE, to be established as an accreditation system in the field of haematopoietic stem cell transplantation (HSCT). The aim was to create a standardised system of accreditation to be officially recognised across Europe, based on the accreditation standards established by the US-based Foundation for the Accreditation of Cellular Therapy (FACT) (Peripheral Stem Cell/Bone Marrow/Cord Blood; Bone Marrow Res. (2012): Article ID 834040 (online)). However, there is a lack of published evidence demonstrating that this improvement directly results from better nursing care. Therefore, the authors conducted a survey of nursing members of the European Blood and Marrow Transplantation Nurses Group (EBMT (NG)) to identify how nurses working in the area of HSCT felt that JACIE impacted the care they delivered and the general implications of JACIE for nurses.

Background to JACIE

The 1990s saw an increase in the number of transplant teams performing haematopoietic stem cell transplantation (HSCT) (Passweg et al. 2012). The procedure that was initially considered experimental during the 1960s/1970s was becoming an established treatment for many blood cancers, solid tumours and acquired or congenital disorders of the haematopoietic system within adult and paediatric populations. Towards the end of the 1990s, haematopoietic stem cells could be collected from the marrow, peripheral blood and cord blood, and from autologous, sibling and unrelated donations (Demiriz et al. 2012). In 1998 two leading European scientific organisations, the International Society for Cellular Therapy (ISCT) Europe and the European Group for Blood and Marrow Transplantation (EBMT), formed a joint committee to be known as the Joint Accreditation Committee for ISCT and EBMT (JACIE) (JACIE n.d.; Cornish 2008). The purpose of this new committee was to establish a system to allow transplant teams to self-assess against a group of standards (Cornish 2008), provide an inspection process and recognise compliance with the standards by awarding accreditation to those teams who worked within the field of HSCT. A pilot study of the JACIE inspection and accreditation process was carried out in Spain in 2000-2002. This enabled JACIE to assess sections of the standards that gave rise to common difficulty experienced by the transplant teams and to assess what assistance, if any, would be required by the centres for them to obtain accreditation. The results of this pilot study underlined the need to implement national and international regulations (Pamphilon et al. 2008) within each European country. In January 2004, with support from the European Union under the Public Health Programme (2003-2008), the JACIE accreditation process was launched (Pamphilon et al. 2008). To enable a set of international standards for the provision of quality medical, nursing and laboratory practice in HSCT transplantation to be developed and recognised, JACIE collaborated with their American counterparts, the Foundation for the Accreditation of Cellular Therapy (FACT) (JACIE).
The "FACT-JACIE International Standards for Hematopoietic Cellular Therapy Product Collection, Processing, and Administration" are revised on a regular basis. JACIE remains a non-profit organisation with all members being an expert within their specialty: clinical, collection or processing procedures of HSCT. Clinicians, nurses and quality managers who are experts in their field can volunteer to become JACIE inspectors, if they meet the criteria set. Potential inspectors must attend a training programme, pass the inspector's exam and act as an observer within the inspection team as a trainee before their first official JACIE inspection. As the JACIE accreditation process has evolved, the inspection team membership has extended to include apheresis nurses and more recently experienced quality managers recognising the multi-professional components of our HSCT programmes. The accreditation process is continuous reflecting an established quality management system (QMS); therefore accredited centres are required to apply for reaccreditation every 4 years. In 2016, many transplant teams were achieving reaccreditation for the 2nd or 3rd time, whilst other centres are applying for their initial application. At the beginning of December 2016, the JACIE website (www.jacie.org) (2016) cited 334 successful initial accreditations; 197 successful reaccreditations from 26 countries had been granted since 2000. Although the initial aim of the accreditation scheme was a voluntary process, in many countries, health-care systems/ commissioners or health insurance providers and tissue banking authorities increasingly view JACIE accreditation as important and demand accreditation to allow the procedure of HSCT to be performed. Accreditation is the means by which a centre can demonstrate that it is performing to a required level of practice in accordance with agreed standards of excellence. Essentially it allows a centre to certify that it operates an effective QMS. Furthermore, due to the increased use of unrelated donors from different countries, interaction and collaboration between units are key elements for the success of stem cell transplantation. JACIE accreditation is a guarantee that the donor and the cellular product have been handled according to specific safety criteria. A QMS is a mechanism to: • Ensure that procedures are being performed in line with agreed standards, with full participation by all staff members. In a HSCT programme, this ensures that the clinical, collection and laboratory facilities are all working together to achieve excellent communication, effective common work practices, shared policies where appropriate and increases guarantees for improved patient outcomes and the use of international donor criteria for related donors (Gratwohl et al. 2014;Anthias et al. 2015Anthias et al. , 2016. Nurses have successfully taking on the role of improving communications for donor mobilisation, collections and liaising with the staff of the processing facility. • Track and monitor collected cell products for safety and viability from the time of donation to the administration procedure. Patients' medical records must include not only the information of date and time of the collection but should include volume of collected product, type and volume of citrate and the product identification. A transport log will be required to ensure traceability of all products from collection to processing and then to clinical for administration. 
• Identify errors and incidents so that they can be reviewed, corrective actions implemented and a plan of action put into place to minimise the error reoccurring.
• Formalise training and competencies.
• Clearly identify the roles and responsibilities of all staff working within the transplant team or with outside agencies (clinical, collection, processing and support services; intensive care, radiotherapy, cleaning and transport services, laboratories and donor panels).
• Review documentation for evidence that standards have achieved compliance on a regular basis.

Considerations

In the early stages of preparing for accreditation, extra resources may be required: a dedicated quality manager, a data collection manager and support staff (pharmacist, dietician, social worker) to fulfil the standards and prepare for the inspection. Formalisation of the QMS and accreditation will depend on structures already in place. Many processing facilities are independent from the clinical transplant teams and may also be responsible for collections of apheresis products. In this situation, the processing facility and clinical facility have a choice of accreditation. They may decide to apply for separate or combined accreditation. However, in order to obtain JACIE accreditation, it is important that the QMS describes the communication processes between all facilities involved and provides evidence that communication exists, e.g. minutes of weekly, monthly and annual meetings, which must include the names of the attendees and record the sharing of engraftment data and adverse events. It is important to remember that a clinical facility must use an apheresis and processing facility that are JACIE accredited. Similarly, an apheresis facility must use a processing facility that is accredited before the clinical and apheresis facilities can be awarded JACIE accreditation.

Implementing a Quality Management System

HSCT is a procedure with a high technological content, which requires extensive attention both towards patients/donors, who might introduce important problematic clinical factors, and towards sophisticated laboratory procedures related to the collection, manipulation, cryopreservation and transplantation of haematopoietic stem cells (HSCs). The continuous improvement of stem cell technology requires that all procedures regarding HSCT be guaranteed through the definition of qualitative standards recognised by scientific associations and international organisations. For the collection, processing and transplantation of HSCs, there are standardised procedures, which require specific clinical, haematological and laboratory knowledge and strict quality controls concerning all processes from cellular collection and manipulation to the administration of the collected product. Stem cell collection, processing, storage and transplantation must be carried out in a highly regulated manner to guarantee both safety and clinical efficacy. Therefore, quality assurance is a very important topic at all levels of HSCT, including robust nursing procedures, e.g. chemotherapy administration, use of stem cell mobilisation agents and collection of cellular material. The implementation of a QMS arises from the need to develop an appropriate system to optimise the quality of the service offered by a stem cell transplantation unit, in a general context of health-care quality improvement. A QMS is a tool that can be used to rapidly identify errors or accidents and resolve them to minimise the risk of repetition.
A QMS assists in training and clearly identifies the roles and responsibilities of all staff (Cornish 2008; Caunday et al. 2009). In 1966, Avedis Donabedian wrote a paper entitled "Evaluating the Quality of Medical Care", where the concepts of structure, process and outcome in health care were introduced. The structure includes not only the physical aspects in which care is given but also the resources and tools available to the health-care team, the leadership and the staff. The process is how the health-care system and the patient interact. The outcome includes the effect of care on diseases and their prevention, such as the mortality rate, the error rate and the quality of life (Samson et al. 2007). During the 1950s, Edwards Deming introduced the plan-do-check-act (PDCA) cycle, an iterative four-step management method used for the implementation and improvement of processes and products, also known as plan-do-study-act (PDSA). He also stressed the importance of viewing problems in the context of a system and that most mistakes were not the fault of the worker (Samson et al. 2007). The major objective of the JACIE Standards is to promote quality medical and laboratory practice in HSCT and other therapies using cellular products; therefore, dedicated quality management standards are found within the FACT-JACIE manual (clinical facility B04, marrow collection facility CM04, apheresis collection facility C04, processing facility D04). Quality management is the management of activities involved in quality assessment, assurance and control that aim to improve the quality of patient care, products and services in cellular therapy activities. A QMS can be implemented by applying the PDCA cycle for the management and continuous improvement of processes and products.
• PLAN means to establish the objectives and processes necessary for the centre. This means defining the scope of the QMS, identifying which processes within the scope are most important and which staff are involved in those processes, and involving them in defining the targets to be used to measure the quality of the process. Ensure all staff know how they can contribute to achieving the required performance. One important aspect to consider when implementing a QMS is the organisation and interaction between the different facilities (clinical, collection and processing). The plan shall include an organisational chart of functions, considering clinical, collection and processing staff, in particular for those tasks that are critical to assuring product or service quality. Training plans should be defined and put in place. Documentation may be displayed in a variety of formats (job descriptions, training records, qualification certificates, retraining). A document system should be implemented, serving multiple purposes for the QM programme. Documents provide instructions on:
• Activities, policies and processes controlling various steps within the activities
• Quality control and traceability of products, donors and patients
The Quality Management Plan (QMP) (or Quality Management Manual) should be one of the first documents developed when preparing for JACIE accreditation. The centre must have a standard operating procedure (SOP) outlining the method by which to create, approve, implement and update SOPs (known as the "SOP for SOPs"). Clinical and collection protocols or laboratory methods must be translated into written procedures, in paper form or an electronic version, and be readily available to staff.
The purpose of document control is to ensure that the correct approved documents are in use. In the 6th edition of the FACT-JACIE Standards, more specific requirements for validation and qualification studies have been delineated, and the concept of risk assessment has been introduced.
• Validation is documented evidence that the performance of a specific process meets the requirements for the intended application. For example, the procedure for thawing frozen cells should be evaluated, as there is a risk of contamination and loss of cells during the thawing process; a thawing control performed on three procedures to assess these criteria would validate the process.
• Qualification is documented evidence that the equipment/facility/utility meets the user requirement specification, works correctly and leads to the expected results. For example, "the dry shipper used for the transportation of frozen haematopoietic stem cells should be validated for temperature control".
During the implementation phase, risk management should be an ongoing part of the quality management process, to minimise hazards for processes, patients and staff. There are several methods for assessing risk, such as Failure Mode and Effects Analysis (FMEA) or Failure Modes, Effects and Criticality Analysis (FMECA), which assess potential failure mechanisms and their impact on the system and identify single points of failure. (In FMEA, for instance, each failure mode is typically scored for severity, occurrence and detectability, and the product of these scores is used to rank which risks to address first.)
• DO means to implement the plan, execute the process and carry out the activities. Once the programme has been established and staff trained, the activities and the quality plan should be maintained through the document system and the available resources. Policies and procedures can be revised, training programmes implemented and the outcome analysis of cellular therapy product efficacy reviewed to verify that the processes in use provide a safe and effective product.
• CHECK means to measure the results and compare them against the expected results or goals defined by the plan. Audits represent one of the principal activities in this step: documented, independent inspections and retrospective reviews of activities to determine whether they are performed according to written procedures and specified endpoints. They should be conducted to ensure that the QMS is operating effectively and to identify trends and recurring problems in all aspects of the programme. Moreover, the transplant programme should manage errors, accidents, deviations, adverse reactions and complaints and monitor activities, processes and products using measurable indicators (Harolds 2015).
• Finally, ACT means to improve the QMS based on the results of the previous steps. Investigation of errors and indicators and the implementation of corrective or improvement strategies are undertaken and monitored, with follow-up assessment to determine the effectiveness of the change.
Data presented by Gratwohl and colleagues (Gratwohl et al. 2014) demonstrate that the use of a clinical quality management system is associated with improved survival of patients undergoing allogeneic HSCT.

1.3 The JACIE Accreditation Process

Start Working with the Standards

The JACIE accreditation process begins when the transplant centre, with the support of the hospital management team (a key element in securing the resources required to successfully implement the JACIE accreditation process), agrees to start working according to the JACIE Standards.
It is important to gather all the necessary information before commencing the JACIE accreditation pathway. First, read the JACIE Standards and access the guides, manuals and supporting documentation from the JACIE website (www.jacie.org). Then use the JACIE Inspection Checklist as a self-evaluation tool. This document contains all the JACIE Standards and will help the centre establish its level of compliance against each standard and identify the further work required to achieve accreditation. The checklist is the pivotal tool used continually throughout the JACIE accreditation process, until JACIE accreditation has been awarded.

Application for JACIE Accreditation

When the applicant has established a mature QMS (i.e. one that has been in place and operational for at least a year), and a self-assessment of the standards has been performed and shows a high percentage of compliance, the centre can formally apply for JACIE accreditation. The completed application form and inspection checklist should be submitted to the JACIE Office, where the JACIE team will review and approve the application form, finalising this part of the process by signing the accreditation agreement with the centre. Within 30 days of the application being approved, the applicant will be required to provide the preaudit documentation to the JACIE Office. The JACIE team and the inspectors will confirm that all required documentation has been correctly submitted. The documents can be provided in the language of the centre/applicant; however, in some exceptional cases, an English translation of some key documents may be requested. The preaudit documentation should be submitted using the predefined folder structure described on the JACIE website, which covers relevant documentation for all areas of the Stem Cell Transplant Programme, such as personnel documentation, donor consent information, labels and a summary of QMS activities (Quality Management Plan, audit reports, policies).

Arranging the Inspection Date

The JACIE Office will then begin the process of assigning an inspection date and an inspection team. This part of the process can take approximately 6 months from the approval of the application. The inspection team will consist of one inspector per facility to be inspected. For example, if the applicant has applied for adult clinical and bone marrow, apheresis and processing accreditation, the inspection team will consist of a clinical, an apheresis and a processing inspector (the clinical inspector being responsible for both the clinical and the marrow collection facilities). The inspectors are selected according to their area of expertise: clinical, apheresis or processing. For instance, a clinician will inspect the clinical facility, and if a paediatric unit is part of the inspection, a paediatrician will be assigned. When there is more than one facility per area, for instance two apheresis units, an extra collection inspector will be added to the inspection team. The applicant will be invited to view the list of JACIE inspectors, found on the JACIE website, and to inform the JACIE Office of any inspectors they would prefer not to participate in their inspection because of a conflict of interest. The inspection will be performed in the language of the centre unless no JACIE inspectors speak the language of the applicant centre; in such cases, the inspection will be performed in English with language support.
The Inspection

The inspection takes place over a period of 1-2 days and is a thorough examination of all aspects of the programme. The inspectors will use the inspection checklist previously completed by the applicant to evaluate the centre's compliance with the standards. The inspection is usually divided into the following parts:
• Introductory meeting of the programme director and the inspection team with all the programme personnel
• MED-A/B data audit and review of documentation
• Interviews with personnel
• Closing meeting with the programme director
• Closing meeting summarising the inspection results with the transplant team

The Inspection Report

Following the inspection, the inspectors submit their completed written report and inspection checklist to the JACIE Office. The inspection report is a fundamental part of the accreditation process. The report is prepared and presented to the JACIE Accreditation Committee by the JACIE Report Assessors after their review and, if necessary, confirmation of any issues with the inspectors. The Accreditation Committee is a group of experts from all areas of stem cell transplantation (clinical, collection and processing) that discusses each individual report and determines the corrective actions the centre is required to implement in order to achieve JACIE accreditation. Bear in mind that although the inspectors identify areas of non-compliance, it is the JACIE Accreditation Committee, not the inspectors, who decide the corrective actions.

Corrections and Accreditation Award

A high percentage of inspections reveal deficiencies, and the deficiencies identified vary in seriousness. In most cases, evidence of corrections can be submitted electronically. However, if the deficiencies are considered a risk for patients, donors or personnel, a focussed reinspection will be required before JACIE accreditation can be finalised. Centres are allowed a period of between 6 and 9 months to implement the corrections and submit them to the JACIE Office. The same team of inspectors will review and assess the adequacy of the corrections provided by the centre. Once the inspectors are satisfied that all points have been resolved, and with the approval of the JACIE Accreditation Committee, the applicant will be awarded JACIE accreditation for a 4-year period, subject to an interim audit at the end of the second year.

Post JACIE Accreditation

The inspection is the most visible part of the JACIE accreditation process. The most challenging part, once JACIE accreditation has been awarded, is maintaining it. The interim audit falls due in the second year of accreditation, and if the system has not been maintained, the hard work invested in achieving accreditation will be wasted and the centre returns to the beginning of the process when applying for reaccreditation. The JACIE Committee warns against failing to uphold standards or maintain the QMS between inspections. Centres that fail to maintain their QMS through lack of commitment, or that allow their system to devolve, may find that standards that were compliant at the initial inspection have become partially or non-compliant by the inspection required for reaccreditation. Inspectors will identify failures to review documentation, perform audits and maintain competencies from the lack of available evidence covering the accreditation cycle.
JACIE Standards that Affect Nursing: Clinical and Collection

The JACIE Standards are divided into sections: clinical and donor (B), collection of marrow (CM), apheresis products (C) and laboratory (D). Many of these standards are shared across each facility as appropriate (Table 1.2). Standards with direct implications for nursing include:
• There shall be a nurse/patient ratio satisfactory to manage the severity of the patients' clinical status.
• There shall be written policies for all relevant nursing procedures, including, but not limited to: administration of cellular therapy products; care of immunocompromised patients; palliative and end-of-life care; and administration of blood products, growth factors, cellular therapy products and other supportive therapies.
• Clinical Programs treating pediatric patients shall have nurses formally trained and experienced in the management of pediatric patients receiving cellular therapy.
• Training and competency shall include a current job description for all staff and a system to document, for all staff, initial training and retraining when appropriate for all procedures performed.
• There shall be an annual audit of verification of the chemotherapy drug and dose against the prescription ordering system and the protocol.
• The Clinical Program shall establish and maintain policies and/or procedures addressing critical aspects of operations and management in addition to those required in B4. These documents shall include all elements required by these Standards and shall address, at a minimum: administration of HPC and other cellular therapy products, including products under exceptional release; labeling (including associated forms and samples); and equipment operation, maintenance and monitoring, including corrective actions in the event of failure.
• Prior to administration of the preparative regimen, one (1) qualified person using a validated process, or two (2) qualified people, shall verify and document the drug and dose in the bag or pill against the orders and the protocol, and the identity of the patient to receive the therapy.
• There shall be a policy addressing safe administration of cellular therapy products. Two (2) qualified persons shall verify the identity of the recipient, the product and the order for administration prior to the administration of the cellular therapy product.
• There shall be documentation in the recipient's medical record of the administered cellular therapy product's unique identifier, the initiation and completion times of administration, and any adverse events related to administration. (Table 1.2)

Senior staff should be aware that the patient's pathway through the transplant process can be unpredictable. There are episodes when the patient will experience complications of the treatment required for HSCT and will need a higher intensity of nursing care. For such episodes, the nursing management should have an established contingency plan to provide adequate nursing care for these patients. Possible options include:
• Nursing staff within the team being allowed to work extra shifts
• The employment of additional nursing staff with relevant experience from the hospital pool of nurses or from nursing agencies
• Transfer of the patient to a high dependency or intensive care setting
Whatever the contingency plan, there should be evidence in place, such as a written staffing policy.
This policy should describe the plan of action to be taken for small teams (apheresis, quality management and data collection teams) in the case of planned or unplanned long-term absence from work, thereby allowing the patient's or donor's pathway to continue without affecting the nursing or medical care given. Not only should there be adequate nursing staff, the nurses should also be qualified, trained and competent in the roles they perform. JACIE can be both a challenge and an opportunity for nurses in:
• Reviewing existing procedures - especially those that have been performed automatically in the same way despite being inefficient
• Adopting measures for clinical risk management - paying more attention to long-term planning for the continuing education of personnel and to procedures and tools for monitoring, verifying and maintaining competence
• Developing and implementing internal audits and quality indicators

Collection standards with implications for nursing include the following:
• Methods for collection shall include a process for controlling and monitoring the collection of cellular therapy products to confirm products meet predetermined release specifications.
• Adequacy of central line placement shall be verified by the Apheresis Collection Facility prior to initiating the collection procedure.
• Administration of mobilization agents shall be under the supervision of a licensed health care professional experienced in their administration and management of complications in persons receiving these agents.

Furthermore, JACIE is an opportunity for nurse recognition within the organisation in which they work, in terms of their contribution to the overall results achieved.

Training and Competencies (Tables 1.2 and 1.3)

All hospitals should have their own programme for training, annual review/appraisal and competencies. The structure already in place for recording individual staff members' training can also be used to record the JACIE Standards' requirements; a new system of training records for JACIE is not required if the following are undertaken:
• Basic training: a route that leads to skills acquisition in order to obtain new or improved "performance"
• Educational training: the set of activities, including basic training, aimed at developing and enriching staff on the technical, specialist, managerial and cultural aspects of their role
• Competence: the proven ability to use knowledge and skills
• Competency maintenance: the minimum activity required to be performed by each operator in order to retain the assessments defined in the specific job description
• Competency matrix: the activities performed must be recorded in order to perform an annual assessment (quantitative and qualitative) of the activities that can be recognised
It is important that training and competency programmes are structured and ongoing, with documented evidence of training topics and dates. Most importantly, an attendance register for training and competency sessions is required. Whilst initial supervised training is easily documented, annual competency maintenance can be difficult to show. Training needs to be flexible to reflect staff requirements and should be performed in a timely manner to demonstrate annual updating. When staff cannot attend a particular training session due to staffing issues, holidays or sickness, a self-teaching system, e.g. an electronic system that includes the presentation and a self-assessment tool, may be an option to consider.
For those centres that apply for a combined adult and paediatric JACIE accreditation, it is important that training sessions include relevant age-specific issues for each topic, especially if the two age groups are nursed within the same ward environment. Where adult and paediatric patients are nursed on separate wards, training sessions may be separate for certain topics, but it is also important to share sessions, where appropriate, to provide evidence that both population groups are an integrated part of a combined transplant facility. The FACT-JACIE International Standards require that the clinical programme have access to personnel who are formally trained, experienced and competent in the management of patients receiving cellular therapy. Core competencies are specified within the standards, and evidence of training in these competencies must be documented. This may be achieved through evidence of in-service training, attendance at conferences, etc.

In September 2016, the EBMT (NG), in collaboration with JACIE and the EOC (Ente Ospedaliero Cantonale, a multisite Swiss hospital, www.eoc.ch), launched the first video-recorded course aimed at physicians, nurses and technicians working within JACIE-accredited centres. The course focused on competency maintenance and could be accessed in person, on the day, or through online conferencing; it is now a source of video-recorded e-sessions, lectures and questionnaires, available online free of charge. Upon correctly answering the questionnaires on every topic, participants in the on-site or online conference can obtain a Certificate of Competence, validated by EBMT and JACIE, that can be used as evidence towards the JACIE inspection. In addition, the activities were granted CME certification by EBMT/EBAH and Swiss CME credits (the course is available at http://www.dsit.it/prj/ebmt2016/inex2.php). This initiative was based on an online test system using SharePoint, containing internal hospital standard operating procedures compliant with the FACT-JACIE standards, developed by the Bellinzona transplant centre (Babic et al. 2015).

Benefits of Quality Management (Table 1.3)

The key aim of the JACIE process is to implement a QMS into clinical practice. Despite the difficulties that may be encountered, the process can be useful for the integration of staff from all disciplines and for professional collaboration. Staff education plays a key role in the implementation of the whole system, and in particular of the quality management system (Piras and Aresi 2015). The majority of the quality standards are aimed at providing evidence that systematic processes are in place. Furthermore, several of the standards relate to having systems in place to record initial qualification, training and competencies, and the minimal qualifications of the trainer. The hospital system can be utilised for these standards, and this evidence can be shown to the inspectors. However, not all hospital record systems register the training qualification required of a member of staff who has a training role.

Audits (Table 1.3)

Some nurses may be unfamiliar with this area. One approach is to view audits as assessing the care you give, reviewing the evidence and making changes to improve the patient's or donor's experience and/or the nursing care given. After a predetermined period of time, it is necessary to reassess the changes made to measure any improvements resulting from them. This is referred to as "closing the audit loop".
A nursing audit schedule works best when the nursing teams initiate the audit topics. It is important to include the audits required by JACIE, e.g. (1) the verification of the chemotherapy drug and dose against the prescription and the protocol and (2) the verification of the haematopoietic stem cell infusion. It is also important that each audit is performed by personnel not directly involved in the activity being audited.

Adverse Events (Table 1.3)

To enable adverse events to be fully reported, a culture of "no blame" must be present. The hospital should have an established reporting system in place, and it is important that adverse events for transplantation and for the collection of cellular products, including apheresis and marrow, can be coded separately from other departmental adverse events. This allows for clarity and a true record of the number of events recorded for the transplant programme. Each episode is reviewed and changes made if required. This is then followed by an audit of the changes made to minimise a recurrence. Nurses working with patients and donors have a very important role in reporting adverse events. It is important that all adverse events are recorded in the quality meeting minutes and the quarterly and annual reports and, most importantly, shared with all the sections involved in delivering the transplant programme (clinical, collection and processing), as appropriate. For example, if a recipient has a reaction to a stem cell infusion, or there is a deviation from the time specified for each infusion of thawed cells, these events should be reported and shared with the processing facility. Where adverse events have been shared across departments, the inspector will require evidence that the events were discussed and, if any changes were made to practice, that this was recorded, policies were updated and the episode monitored.

Tracking of Collected Products (Table 1.3)

To enable the safe collection, storage and distribution of collected products, it is important that each stage of the process is recorded. Collection, laboratory, transport and clinical staff should therefore be involved in signing a transport log to accept the product and, in some cases, in recording the temperature of the product. Policies should be in place covering what to do if there is a deviation in practice, e.g. if the temperature of the product falls outside the range agreed within the transport policy. It is important that policies and standard operating procedures that include the responsibilities of more than one facility are shared, and that members of staff have ready access to them. The donor's and recipient's medical notes must be completed, as part of the tracking system, to record the collection or transfusion of the collected product. The cellular product identification, time and date should also be included in the medical notes.

Common Deficiencies: 5th Edition of the JACIE Standards

During the annual meeting of the EBMT (2015), the results of a review of JACIE inspection reports against the 5th edition of the JACIE Standards were presented (JACIE Quality Management 2015). The aim of the review was to identify common deficiencies within the standards. Of the reports issued against the 5th edition of the JACIE Standards, 95% (145/152) had been reviewed. Standards relating to clinical personnel were rated as the group of standards with the highest number of deficiencies.
This was due to a lack of evidence:
• In training and competencies for physicians
• Relating to donor and recipient informed consent
• Of the diagnosis and management of graft versus host disease, both acute and chronic
Other clinical standards where evidence was lacking related to the administration of the preparative chemotherapy regimen and the administration of the transplanted product: the inspectors could not find evidence that two personnel had checked the identity of the recipient against the dose of the material to be infused. There were also issues with quality management standards for clinical, collection and processing facilities. Third-party agreements/service-level agreements failed to state the responsibilities of each facility involved in the process, e.g. who was responsible for transportation of the collected cellular product, either from the collection facility to processing or from processing to the clinical facility. For those clinical facilities that provide shared care for donors prior to the collection of cellular material, it is important that third-party agreements/service-level agreements also include the responsibilities for the administration of mobilisation products. These responsibilities should be described within the appropriate standard operating procedure/policy (SOP), and it is important that all parties involved in the shared care have access to the SOP. Labelling of collected products was a common issue, either through non-compliance with the International Society of Blood Transfusion (ISBT 128) standards for labelling or because personnel failed to complete all the data fields on the label. Often the volume, the name of the citrate used and the start and completion times of the collection were missing.

JACIE: Implications on Nursing - The Nurse's Perspective

Research demonstrates that patient outcomes and donor care are improved when treatment is delivered within a JACIE-accredited centre (Anthias et al. 2016; Gratwohl et al. 2011). It could therefore be assumed that the JACIE accreditation process has had implications for nursing practice. A literature search was performed (using PubMed and the Google search engine with the following parameters: quality management, standard operating procedures, nurse education, JACIE accreditation and audit), but no results were found, reflecting the dearth of nursing research on the implications of JACIE for nursing. Therefore, a simple survey was sent by email to the members of the European Group for Blood and Marrow Transplantation Nurses Group (EBMT (NG)). The aim of the survey was to establish what implications the JACIE process had for nurses in their daily practice.

Results of the Survey

The survey (in the form of a Word document) was distributed via email to 322 nurse members of the EBMT (NG); 135/322 (41%) nurses opened the email, and the response rate was 31/322 (9.62%), with replies from 12 countries. One reply was rejected because the transplant centre was not working towards JACIE accreditation; therefore, 30/322 (9.3%) replies from 11 countries were evaluated. Nurses who responded to the survey performed a variety of roles within the area of HSCT, and their replies varied from "no implications on daily routine" to "nurses obtaining new skills in areas such as developing standard operating procedures and risk management".
The role, seniority and involvement of each nurse in the JACIE process could have influenced how each respondent replied. 97% (29/30) of respondents were classified as senior nurses:
• Seven ward managers
• Fourteen* clinical nurse specialists (CNS)
• Five quality managers
• Three nurse coordinators
• One nurse consultant responsible for SOPs in the clinical and processing facility
3% (1/30) of respondents were classified as a staff nurse/junior nurse.
*One CNS role includes data manager, and one CNS is responsible for JACIE.
The majority of nurses, 93.3% (28/30), worked within the clinical area**, 3.3% (1/30) worked in the apheresis facility and 3.3% (1/30) within the processing facility.
**Two clinical nurses worked in a second area (one in apheresis and one in processing).

Implications: Staff Nurse's/Junior Nurse's Point of View

The only staff nurse/junior nurse to respond to the questionnaire described their role rather than any implications that had arisen from working towards their first JACIE accreditation: "Nurses were involved in checking procedures, therefore providing documented evidence in education and patient care". This is a good demonstration that all staff within the transplant programme should be involved in the accreditation process.

Implications: Ward Manager's Point of View Can Be Summarised as Follows

JACIE has provided an improved structure for producing written procedures, which are reviewed regularly. This uniform way of working allows procedures to be described precisely, enabling all staff performing a procedure to perform the task in the same way, and can be used as an educational tool, especially for new members of staff. The experience of JACIE has improved patient care, improved communication between all members of the team and allowed for a closer working relationship. Nurses were able to learn new skills, especially in understanding risk management.

Implications: Clinical Nurse Specialist's Point of View Can Be Summarised as Follows

Initially, the documentation and the development of the JACIE programme took many years and was hard work. Extra work had to be incorporated into a busy schedule, and time had to be allocated to attend the many meetings relating to JACIE. Now accreditation has been achieved, and the team works within an established programme. All the effort was worthwhile because everyone feels confident with the quality standards and programme, and quality is always a priority. Additional management hours were required to administer/manage the increased number of protocols and procedures. New presentation skills were learnt in presenting audit results to the transplant team, and new opportunities arose to develop a donor care programme conforming to the JACIE Standards. There was only one piece of negative feedback: "Unfortunately no impact on daily practice".

Implications: Quality Manager's Point of View Can Be Summarised as Follows

The transplant team now has a greater awareness of the importance of standard operating procedures (SOPs), through working within a document-controlled system and being included in the multidisciplinary team meetings. Within the paediatric setting, there has been an improvement in medical SOPs for the care of paediatric HSCT patients, as well as improved collaboration with the adult clinic, apheresis and stem cell laboratory teams.
Patient outcome reviews have been valuable in improving care. The JACIE audit programme has encouraged the transplant team to perform internal audits, which has led to improvements in quality assurance.

Implications: Nurse Coordinator's Point of View Can Be Summarised as Follows

JACIE has highlighted that more attention is required for nurse training, evaluation of competencies, document control and the registration process. The standards relating to donors have emphasised to the team the importance of the role of a "donor advocate", to prevent a conflict of interest for the transplant physician, and JACIE is a very useful working tool, especially for new colleagues.

Conclusion of the Survey

Although the response to the survey was very low (9.62%), the results represent the views of senior nurses (97% of respondents). After reviewing the 45* comments from the 30 respondents, the authors of this survey suggest that the JACIE accreditation process has had a positive impact on nurses. Only 9% of comments could be classified as indicating a negative impact on the nurse, due to the extra workload. *See Appendix 1.1 for a full list of citations written by the respondents to the survey.

A further study is required within the BMT nursing community to fully understand the implications for nurses during the initial JACIE accreditation phase and after accreditation, whilst maintaining and improving the quality system that is now embedded into daily practice. Such a study could be based on the Donabedian model, looking at structure, process and outcomes. The JACIE Standards are reviewed every 3 years, allowing them to be adapted to the rapidly developing field of HSCT. Nurses are required to maintain compliance with the QMS and the JACIE Standards and must familiarise themselves with the changes that occur in each edition. Each edition will present fresh challenges in achieving the standards, especially given present-day competing pressures on resources and finance. It is noteworthy that none of the surveyed nurses mentioned this aspect as a concern for their practice. As nurses working within JACIE-accredited centres, it is important to provide evidence of our continued monitoring of practice and processes through the QMS and not to regard the JACIE accreditation process as a tick-box exercise.

Discussion Points

Since the introduction of JACIE accreditation, nurses have submitted oral and poster presentations at the annual EBMT (NG) conference on the topic "Preparing for JACIE". The small response to our EBMT (NG) survey, and a literature search that could not identify published articles on the topic of "JACIE and implications for nurses", could suggest that the JACIE accreditation process has not greatly impacted nurses. One of the five Deming principles that help health-care process improvement states: "Quality improvement is a science of process management. If you cannot measure it you cannot prove it, therefore quality improvement must be data driven" (Health Catalyst 2014). As specialised nurses working in the field of HSCT, we should be asking ourselves why we are not publishing our data or audit findings. Take the development of apheresis collection services across Europe as an example: many teams are nurse-led. When the collection of HSCs became an established practice, the number of nursing teams increased, training became more formalised and apheresis nurse forums were established to reinforce policies and procedures.
A QMS was introduced in the form of JACIE accreditation, and risk management and audit became integral to the apheresis nurse role. Deming also states: "If nurses are going to manage care, they require the right data delivered in the right format at the right time and in the right place". Therefore, nurses within the HSCT programme should take ownership, perform audits, assess the results, make changes to patient care and reassess. These experiences and findings should be shared and published. If the reluctance to publish stems from a lack of ownership of quality management, or because nurses perceive quality management as the responsibility of the quality manager, then they must be reminded that JACIE has a significant impact upon each and every role and that they must be aware of, and fully participate in, the process. Audit, review of policies and procedures, competencies and risk assessment will become a key part of the nursing routine if the QMS is to be maintained and to evolve.

Citations classed as positive:
• Patient safety is highlighted.
• All nurses are working in a more quality-assured way, by only using adequate and current documents and working procedures.
• The internal audits, which we have performed for several years whilst working with JACIE, have led to improvements in quality assurance.
• Before JACIE accreditation, we actually did not have strict medical SOPs for the treatment of our paediatric transplant patients.
• Since first accreditation as a separate paediatric centre, we have broadened our cooperation with the adult clinic, apheresis and the stem cell lab. Since then, SOPs are more in common.
• "Nurses are now involved and appreciate being involved in the review meeting for patient outcome."
Citations classed as neither negative nor positive:
• More SOPs to write
• Increased audits
• Working with documents and internal audits
• Updating SOPs, ensuring staff, including the multidisciplinary team, understand the importance of following the SOP

Nurse coordinator's and nurse consultant's citations
Citations classed as positive:
• Separate donor and recipient management.
• JACIE is a good working tool, especially for new colleagues.
Citations classed as neither negative nor positive:
• More attention in the control of the working activities.
• More attention in the registration of processes.
• More attention in nurse training and the evaluation of competency.
• My mission is to work for the HSCT programme's quality improvement process, as required by the accreditation body JACIE.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Should we increase the focus on diet when considering associations between lifestyle habits and deliberate self-harm?

Background: Despite increasing awareness of high rates of physical illness and poor lifestyle behaviours among patients with a history of repeated deliberate self-harm (DSH), there is little research on the specific lifestyle factors that are potentially problematic for this group. This paper aims to explore the relationship between lifetime repeated DSH and certain lifestyle factors, including balanced meals, eating breakfast, consumption of 'junk' food, weight, exercise, substance/alcohol use, smoking and social support, in a cohort of patients who presented to the Emergency Department (ED) with suicidal ideation or DSH. Methods: From 2007 to 2016, data from lifestyle and mental health measures were collected from 448 attenders at an outpatient clinic for DSH or suicidal ideation following ED presentation. Lifestyle behaviours (Fantastic Lifestyle Checklist), mental health (Depression Anxiety Stress Scale), clinical diagnosis and number of previous DSH episodes were measured on arrival. The associations between the lifestyle variables and the number of lifetime DSH episodes were examined. Results: Sex, age, depression symptoms, poor diet and smoking were all associated with a higher average number of deliberate self-harm episodes across the lifespan. There were non-significant positive trends for the other poor lifestyle behaviours. There was no association between DSH episodes and a diagnosis of depression or anxiety disorder. In a multiple linear regression model, the only factors that remained significant were age, smoking and eating balanced meals; however, the relationship between smoking and lifetime DSH was moderated by more immediate DSH behaviours. Conclusion: In this sample of patients referred to a service following presentation to the ED with acute mental health concerns, balanced meals and smoking were the lifestyle behaviours with the strongest independent association with repeated DSH across the lifespan.

Background

While people presenting to Emergency Departments (EDs) with deliberate self-harm (DSH) are a heterogeneous group, studies have found that these patients are more likely to have poorer long-term physical as well as mental health outcomes. Hawton et al.'s [1] longitudinal follow-up study of 11,583 patients presenting to a hospital ED found that DSH is associated not only with an increased risk of death by suicide but also with death from most natural causes, including respiratory, circulatory, neurological, endocrine, digestive, skin, musculoskeletal and connective tissue disorders. Another, similar study [2] of 30,950 presenters with DSH reported that deaths due to natural causes were 2-7.5 times more frequent than in the general population. Diseases of the circulatory and digestive systems were major contributors to the high Years of Life Lost (YLL) from natural causes. The authors concluded that, in the management of self-harm, clinicians should consider patients' physical needs. The finding also suggests that there may be detrimental lifestyle factors associated with DSH that predispose these patients to premature death from all causes. Both studies suggested that lifestyle factors deserve greater attention in the assessment and treatment of people presenting with DSH. Previous research on lifestyle factors associated with DSH has focused on substance abuse and related disorders, including alcohol abuse and smoking.
There is extensive literature reporting that substance abuse (including alcohol and tobacco) is strongly associated with DSH [3-8]. In addition to substance abuse, there is now also emerging interest in other lifestyle factors, including diet, exercise and obesity, and their role in mental health. Jacka and Berk [9] proposed that diet, exercise and smoking are independent risk factors for depression. A review [10] of lifestyle interventions related to suicide prevention noted that alcohol use, smoking and sedentary lifestyles are risk factors at all ages. It also noted that other risk factors vary by age group, with psychiatric symptoms, substance and alcohol use, weight and occupational difficulties being pertinent in adults, and organic disease and poor social support being risk factors in the elderly. Similarly, a recent systematic review [11] revealed a complex relationship between obesity and suicidal behaviours, with some papers suggesting that obesity is a protective factor against suicide fatality yet may increase the likelihood of non-fatal suicidal thoughts and attempts. Regarding diet, the literature focuses on depression and suicidal behaviour rather than DSH. Additionally, diet is a more difficult variable to define (and control) than exercise, hence the methodological variation between studies, with some comparing a diet high in 'wholefoods' with one high in 'processed foods' [12] and others exploring the effects of the Mediterranean diet [13]. Jacka and colleagues [14] conducted the world's first randomised controlled trial comparing the effects of social support (known to benefit people with depression) and a dietary intervention in the treatment of clinical depression. They demonstrated that one third of patients receiving support from a clinical dietician over 3 months met criteria for remission of major depression, compared with 8% of patients who received social support only. These results were directly proportional to the extent of dietary change and were not explained by weight loss or physical exercise alone. An observational study demonstrated similar results when exploring the dietary differences between suicide attempters and non-attempters. This retrospective population-based study of almost 7000 adults [15] found that fruit, vegetables and meat, particularly fish/seafood, were significantly under-consumed by adults who were suicide attempters compared with non-attempters. This relationship remained significant after adjustment for various factors, including socioeconomic status, smoking, total caloric intake, and medical and psychiatric illness. While most studies have focused on the nutritional components of an individual's diet, a few studies have examined specific dietary habits, such as eating breakfast. This is of interest given the evidence that eating breakfast significantly reduces one's risk of cardiovascular disease [16]. Eating breakfast is also correlated with decreased stress, depression and emotional distress [17, 18]. As lifestyle interventions are relatively new to psychiatry, there are no clear guidelines regarding the formal assessment of problematic behaviours, such as poor diet. We have previously reported the Fantastic Lifestyle Checklist (FLC) [19] to be a valid tool for the measurement of lifestyle behaviours in a sample of patients presenting to the Green Card Clinic (GCC), referred with suicidal behaviour or ideation, or non-suicidal self-injury [20].
We have also reported a significant difference between the overall FLC scores of people presenting to the GCC at St Vincent's Hospital Sydney with one versus repeated DSH episodes [21]. The purpose of our study is to further explore the relationship between the FLC diet and exercise items (nutrition, dietary habits, exercise and weight) and lifetime history of DSH in this patient group. Our hypothesis was that poorer nutrition and dietary habits, infrequent exercise and obesity would be significantly associated with a higher number of lifetime episodes of deliberate self-harm. For the purposes of this study, deliberate self-harm encompasses both suicidal behaviours and non-suicidal self-injury.

Procedure

This study used data collected from patients who attended the GCC at St Vincent's Hospital. The details of the GCC are discussed more fully in previous papers [20-22]. In summary, following presentation to the ED with suicidal behaviour or ideation, or non-suicidal self-injury, all patients receive routine medical and psychiatric assessments (by an emergency doctor and a psychiatry trainee). If the patient is deemed appropriate for discharge from hospital, the psychiatry trainee will consider referral to the GCC if: (1) the patient is not already under the care of their own psychiatrist/psychologist or a community mental health team, (2) they speak English, and (3) they do not have a cognitive impairment or a major mental illness (schizophrenia, schizoaffective disorder, delusional or bipolar disorder). On arrival at their first clinic appointment, patients are asked by the clinic receptionist to complete several assessment measures, including those outlined below. Patients included in this study were also given information and the opportunity to consent to their deidentified data being used for the purposes of research. The St Vincent's Human Research Ethics Committee approved the use of these data for this purpose.

Participants

From 2007 to 2016, of the 665 patients attending their first GCC appointment, 448 provided complete data, including current presentation, age, sex, marital status, past history of DSH, psychiatric diagnosis and the self-report measures noted below. There were no significant demographic differences between those with complete data and those without. Over this period, there were an additional 252 patients who either cancelled (n = 37) or did not attend (n = 215) their GCC appointment. There were no significant differences between people who attended their appointment (i.e. our sample in this paper) and those who did not. However, the minority of patients who cancelled their appointment were predominantly (27/37) female and tended to be younger than those who did attend the GCC (mean age 27.0 ± 6.4 vs. 31.6 ± 10.9, F(2,927) = 4.38, p = .013). The patients' principal psychiatric diagnosis was recorded following clinical assessment by the GCC clinician (psychiatrist, psychiatry trainee, clinical psychologist or mental health clinical nurse consultant) and consensus by the team (following discussion of each case in weekly case review meetings). The clinical interviews were unstructured. Measures self-completed by patients (such as the Depression Anxiety Stress Scale; DASS) informed the clinicians' view of the patient but were not used for diagnostic purposes. While many patients had comorbidities, the diagnosis recorded was the one that was the focus of treatment in the GCC.
Measures

The FLC is a 25-item measure that assesses 11 lifestyle domains using the acronym FANTASTIC (family, friends, activity, nutrition, toxins, alcohol, stress, sleep, personality type, insight and career). The current paper explored nine items within these domains, related to diet and health behaviours: receiving emotional support, exercise, eating balanced meals, eating breakfast, consuming excess sugar, salt, animal fats or junk food, weight, smoking, drug abuse and alcohol. Items related to substance use, smoking and social support were included as these factors may be potential confounders of the relationship between diet and DSH. Each item is scored on a 3-point Likert scale from 0 (hardly ever) to 1 (some of the time) to 2 (almost always). There is some variation in wording depending on the item, and some items are reverse scored. The checklist was provided to patients, who self-reported their lifestyle behaviours.

The DASS 21-item version [23] measures three negative emotional states (depression, anxiety and stress). The two scales assessing depression and anxiety (7 items each) were used in this study. The depression scale assesses dysphoria, hopelessness, devaluation of life, self-deprecation, lack of interest, anhedonia and inertia. The anxiety scale assesses autonomic arousal, muscle tension, situational anxiety and the subjective experience of anxiety. Respondents rate the extent to which they have experienced each state over the past week on a 4-point Likert scale ranging from 0 (never) and 1 (sometimes) to 2 (often) and 3 (almost always). Total scores for each subscale range from 0 to 21, with a score above 11 on the depression subscale or above 8 on the anxiety subscale considered "severe". This self-report scale was completed by patients independently of the diagnosis they received from the clinician who assessed them.

Outcome measure

The outcome measure for this study was the total number of self-reported lifetime DSH episodes, including the current presentation. Suicidal intent was not measured; DSH episodes in this study therefore include both suicide attempts and non-suicidal self-injury.

Statistical analyses

The data were analysed using the Statistical Package for the Social Sciences (SPSS, v22, IBM Corporation, 2013). Descriptive statistics were used to quantify baseline outcome measures and other variables (including demographics, psychiatric diagnosis and lifestyle factors). For the purpose of analysis, several dichotomous variables were created. The two groups in the marital status variable were single/separated/divorced/widowed and married/de facto. The diagnosis variable was collapsed into four categories: depressive disorder, anxiety disorder, substance use disorder, and all other diagnoses. In forming dichotomous variables from the lifestyle factors, the decision was made to isolate the most extreme negative response. For example, the balanced meals variable was divided into two groups: (1) 'hardly ever' and (2) 'some of the time' or 'almost always'. The exceptions to this rule were the smoking and drug abuse variables. The smoking variable was divided into two groups: (1) smokers ('occasional use' or 'daily use') and (2) non-smokers. Likewise, the drug abuse variable was divided into two groups: (1) those who abuse drugs 'some of the time' or 'frequently' and (2) those who 'never or seldom' do. The rationale for this was that the middle response options for the smoking and drug abuse variables are particularly difficult to quantify and, given that no amount of smoking or drug abuse is considered 'safe' by national guidelines (unlike alcohol), this seemed the most logical way to divide these variables into two groups. Additionally, research shows that the risk of death from suicide tends to be associated primarily with the amount of alcohol consumed per drinking day, rather than drinking frequency or overall alcohol consumption, which supports guidelines limiting consumption to 2 drinks or less per drinking day [24]. Basic descriptive statistics were used to describe participant characteristics (counts and percentages for categorical variables, means and standard deviations for continuous variables). One-way ANOVA and correlation analyses were used to explore the relationship between the number of lifetime DSH episodes and a range of variables, including sex, marital status, depression, anxiety, substance use disorder (SUD) and the list of dichotomous lifestyle variables described above. Where demographic and diagnostic variables were found to have a significant univariate relationship with the number of lifetime DSH episodes (p < .05) in the ANOVA or correlation analyses, they were included in a multiple linear regression model to assess the predictive ability of the various lifestyle and demographic variables. This was run as a hierarchical model with two steps: the first step included only those demographic and diagnostic variables that showed a significant univariate relationship with the outcome; the second step included all of these variables, as well as a variable representing the reason for the patient's presentation to the ED. This was due to a significant relationship between the patient's reason for presentation and the outcome, which is described further in the Results section.
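To make the scoring and modelling steps concrete, the sketch below re-expresses them in Python (pandas and statsmodels) rather than SPSS, which was used in the actual analysis. It is illustrative only, not the authors' code: the file name and all column names and item codings (e.g. flc_balanced_meals, presented_with_dsh) are hypothetical placeholders that would need to be matched to the real dataset.

```python
# Illustrative sketch of the scoring and hierarchical regression described above.
# NOT the published analysis (which used SPSS v22); all file and column names
# below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gcc_cohort.csv")  # hypothetical data file

# DASS-21 subscales: sum 7 items each rated 0-3, giving totals of 0-21;
# scores above 11 (depression) or above 8 (anxiety) are considered "severe".
dep_items = [f"dass_dep_{i}" for i in range(1, 8)]
anx_items = [f"dass_anx_{i}" for i in range(1, 8)]
df["dass_depression"] = df[dep_items].sum(axis=1)
df["dass_anxiety"] = df[anx_items].sum(axis=1)
df["dep_severe"] = (df["dass_depression"] > 11).astype(int)
df["anx_severe"] = (df["dass_anxiety"] > 8).astype(int)

# FLC dichotomisation: isolate the most extreme negative response
# (0 = "hardly ever"), except smoking and drug abuse, where any use counts.
df["poor_meals"] = (df["flc_balanced_meals"] == 0).astype(int)
df["smoker"] = (df["flc_smoking"] > 0).astype(int)    # occasional or daily use
df["drug_abuse"] = (df["flc_drugs"] > 0).astype(int)  # 'some of the time' or more

# Hierarchical linear regression on lifetime DSH episodes.
# Step 1: variables with a significant univariate relationship to the outcome.
step1 = smf.ols(
    "lifetime_dsh ~ age + female + dass_depression + poor_meals + smoker",
    data=df,
).fit()

# Step 2: the same variables plus reason for ED presentation (DSH vs SI).
step2 = smf.ols(
    "lifetime_dsh ~ age + female + dass_depression + poor_meals + smoker"
    " + presented_with_dsh",
    data=df,
).fit()

print(step1.summary())
print(step2.summary())
```

Comparing a coefficient across the two fitted models mirrors the key pattern reported below: a predictor (such as smoking) that is significant in step 1 but not in step 2 suggests a relationship carried by the current crisis rather than by lifelong behaviour.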
Results

Of the total sample, 107 (23.9%) had never engaged in DSH, 162 (36.2%) reported one previous episode of DSH and 179 (40.0%) reported two or more previous episodes of DSH. The reasons for their ED presentation were overdose (40.8%), suicidal ideation (41.7%), cutting (10.3%), hanging (1.6%), jumping (2.0%), carbon monoxide poisoning (0.2%) and other (3.3%). The majority of patients (58.1%) presented to the ED with DSH, with a smaller proportion (41.9%) presenting with SI. Characteristics of the sample are described in Table 1. Demographic, clinical and lifestyle data were explored by the number of lifetime DSH episodes. There was a significant sex difference, with females having a higher average number of DSH episodes than males (2.0 ± 2.8 vs. 1.4 ± 1.9, F(1,447) = 6.7, p = .01). Similarly, there was a significant effect for age, with younger age being associated with a higher number of DSH episodes (r = -0.17, p < .01). There was no significant association between the number of DSH episodes and marital status or a diagnosis of depression, anxiety or substance use disorder. However, a greater frequency of DSH episodes was associated with higher DASS depression scores, with a weak effect size (see Table 1). Tobacco smoking and diet were the only lifestyle variables that demonstrated a positive relationship with the number of DSH episodes: current tobacco smokers and those who reported that they "hardly ever" ate balanced meals both reported more lifetime DSH episodes.
All the other lifestyle variables, including excessive alcohol use, minimal emotional support, unhealthy body weight, high junk food consumption, not exercising and not eating breakfast, showed non-significant effects. As substance use is likely to affect other lifestyle factors, we compared those with and without a diagnosis of substance use disorder. There was evidence of co-occurrence between several key behaviours: of the 86 participants with a diagnosed substance use disorder, 65 (75.6%) also currently smoked tobacco. Similarly, of the 90 people who "hardly ever" ate balanced meals, 63 (70.0%) were current smokers.

Discussion

The aim of this study was to add to existing data on the lifestyle behaviours of patients presenting to the ED with SI or DSH, using data from a further cohort of GCC attenders. We focused on diet as an area of growing interest that has not been investigated in DSH groups, and because the FLC has a number of specific diet and diet-related items, along with a range of variables, including demographics, psychiatric diagnoses and DASS scores, gathered at the first visit. Results showed a significant relationship with sex and age, with younger people and females reporting more lifetime episodes of DSH. Higher DASS depression scores, rather than a clinical diagnosis of depressive or anxiety disorder per se, were associated with more frequent DSH episodes. This is similar to the findings from the previous cohort [22], where DASS depression scores were higher in those with repeated DSH episodes than in those presenting with a single DSH attempt. While a clinical diagnosis of SUD was not associated with higher rates of DSH, it was strongly related to other health behaviours, including smoking, which in turn was related to the outcome, suggesting an indirect relationship between substance misuse and DSH. Among the lifestyle factors explored, there was a significant relationship between the diet and smoking variables and repeated DSH, and a non-significant trend on all other lifestyle variables, including emotional support, breakfast, drug use, junk food, weight and alcohol. However, using a multiple linear regression model, we demonstrated that balanced meals, current smoking and younger age were the only variables significantly associated with lifetime episodes of DSH; variables such as eating junk food and obesity were not. This is an interesting finding, as 'eating balanced meals' and smoking were more significant items for this patient group than items such as substance and alcohol use, which are more likely to be assessed. We know from our findings that there is considerable overlap between those who were smokers and those who did not eat balanced meals [25] and speculate that these two questions may be a proxy for poor lifestyle behaviours in general, in keeping with the previously reported 'poor health behaviours' component in a factor analysis of the Fantastic Lifestyle Checklist [20]. While current smoking was significantly related to the number of lifetime DSH episodes, this effect was no longer significant once the reason for the current presentation to the ED was adjusted for. This suggests that smoking may be more strongly related to current emotional state than to lifelong DSH behaviours, although clinically we had the impression that our attenders were long-term smokers and speculate that some restarted smoking in response to stress rather than taking up smoking for the first time. However, we did not explore this.
Conversely, DASS depression was not related to the number of DSH episodes in the first step of the regression, but became significant when reason for ED presentation was included. Our findings imply that once current emotional distress is accounted for, there is a chronic aspect of depressive symptoms that is more strongly related to lifelong DSH.

The importance of diet and nutrition in the assessment and management of patients with DSH requires further attention. Given the emerging evidence of the importance of certain elements of a balanced diet, including fruit, vegetables and fish, these dietary elements need to be explored in relation to history of repeated DSH. Future research utilising a more detailed dietary assessment may be beneficial. Lifestyle interventions hold promise because of their ease of implementation and cost-effectiveness, as well as their known association with depression. They also have the potential to prevent the excessive morbidity and mortality resulting from high rates of physical illness among patients with a history of suicidal behaviour. Jacka has used the term 'nutritional psychiatry' to highlight the burgeoning clinical and research interest [14]. Having previously argued for the importance of asking about smoking [25] in assessing people presenting with suicidality, we now suggest that asking about 'balanced meals' can segue into discussing nutrition in general (the lifestyle checklist also asks about consumption of junk food and breakfast-eating habits). The two questions may provide a simple 'window of opportunity' to discuss lifestyle issues with people presenting with DSH before they develop further mental and physical health problems.

The review of lifestyle interventions related to suicide prevention [10] concluded that "lifestyle behaviours including cigarette smoking, alcohol use, and sedentary lifestyle are associated with suicide risk in all age groups" and that, for adults, "psychiatric symptoms, substance and alcohol abuse, weight, and occupational difficulties seem to have a significant role in suicide risk". However, most of the research cited in adults focuses on people with a history of chronic mental illness. Our group of attenders were almost exclusively in the adult group but did not have chronic mental illness (such as schizophrenia or bipolar disorder) with concomitant symptoms of poor diet, metabolic syndrome and other chronic health issues. Furthermore, less than half had a primary clinical diagnosis of depression, anxiety or adjustment disorders: the others were more representative of people arriving at an ED in crisis, following acute stress or substance misuse.

While there is a growing body of research into lifestyle risk factors associated with increased suicide risk, there is relatively little on the differences in lifestyle behaviours of people who repeatedly self-harm compared to non-repeaters. This is important given the reports of increased mortality and morbidity from a range of natural causes [1,2] in people presenting to EDs with DSH, and the links between repeated DSH and poorer lifestyle behaviours documented in our earlier study [21]. In that study of an earlier cohort from the same clinic in 1998 to 2005, we reported that nearly half the group (42.7%) reported repeated DSH episodes and had poorer scores on the FANTASTIC lifestyle checklist than either the single DSH episode or SI groups.
We suggested that repeat self-harmers may do better with an approach aimed at 'lifestyle change' rather than one based on current psychological stressors. In this later cohort, we sought to pursue the issue of lifestyle further, with a particular focus on diet, as this is an area of growing interest that has not been investigated in DSH groups.

A limitation of this study design is that data were collected when the patients were distressed and/or depressed and were not repeated later (these patients are relatively mobile and difficult to follow up). Additionally, this study included only people who presented to hospital and the GCC, consented to provide their data, and completed all measures, thus excluding those who did not. This gives a potential bias but, interestingly, the non-attenders were more likely to be slightly younger females. In addition, we measured only the patients' primary psychiatric diagnosis, and did not assess any secondary diagnoses or comorbidities. Finally, we measured lifetime DSH, but did not assess the level of suicidal intent in previous episodes, so we are not able to differentiate between suicide attempts and non-suicidal self-injury. However, this can be difficult to determine retrospectively, and we consider that our approach reflects clinical reality. We believe the issues raised reflect those seen in services for people presenting to ED with DSH and are of clinical interest from an individual and public health perspective.

This paper also has considerable strengths. To our knowledge, it is the first to quantify the specific lifestyle behaviours associated with the number of lifetime DSH episodes. We were able to identify poor diet and cigarette smoking as the strongest lifestyle correlates of DSH, which may inform holistic approaches to reduce or prevent DSH behaviours among those experiencing psychological crises.

Conclusions

These findings can contribute to a first step in evaluating the impact of lifestyle interventions such as improving diet, exercise and social inclusion, and ceasing smoking and other substances, for individuals presenting with suicidality. They also provide an opportunity for more education of patients about the importance of diet, which seems particularly important given the high rates of metabolic and cardiovascular disease among patients with a history of DSH [1,2]. There is enormous potential to be gained in understanding the impact of lifestyle behaviours on mental health and in educating and empowering people to make lifestyle changes, not only to prevent them developing non-communicable diseases, such as diabetes, but also for their emotional wellbeing and sense of agency.
Effect of clariflocculator and pulsator based sedimentation technology and poly-aluminium chloride coagulant type on the efficiency of the water treatment plant

Polyaluminium chloride (PAC) with different basicity is used as a coagulant in most drinking water treatment plants (WTP). The aluminium concentration in PAC and its hydrolysis mechanism vary with the basicity of PAC. Incremental addition of PAC changes various physicochemical properties and turbidity removal mechanisms in water. Water treatment plants often use PAC concentrations beyond the optimum dose without considering other aspects, including residual aluminium concentration. In the present work, the effect of high and medium basicity PAC on different physicochemical properties of water, such as pH, zeta potential, and residual aluminium concentration, was investigated. The pH of treated water decreases with the incremental addition of PAC, while an increase in zeta potential and residual aluminium concentration in treated water was evidenced. The change in pH after PAC addition is responsible for deciding the coagulation mechanism and the efficiency of the coagulation process. The pH reduction is comparatively greater for high basicity PAC than for medium basicity. The PAC hydrolysis mechanism is controlled by the zeta potential of the water, which can be used as an alternative method to decide the optimum coagulant dose. The performance of clariflocculator- and pulsator-based WTPs was also evaluated for raw water from the same source. To reduce the turbidity below the acceptable level, the coagulant requirement for the clariflocculator-based WTP is comparatively less than for the pulsator-based WTP. The floc blanket in the pulsator gets disturbed with a slight change in the coagulant chemistry and quantity.

INTRODUCTION

In India, surface water is the first choice to cater to the drinking water demand of small and large urban settlements. Surface water-bodies get recharged during the rainy season to supply the requisite amount of water to adjacent cities for drinking purposes and agricultural use. The use of surface water for drinking purposes creates new challenges for water works engineers and technologists due to various factors, not limited to water quality, natural organic matter (NOM), emerging soluble contaminants, and colour, mainly due to turbidity and algal growth. Coagulation is a proven and widely used process worldwide to remove dissolved organic carbon and turbidity. In the rainy season, floodwater carries the turbidity; a significant portion of it settles out in due time. The small-sized particles remain suspended for a longer time and contribute towards the non-settleable turbidity. These particles remain continuously in motion and repel each other due to a slight negative charge on them. In water treatment plants (WTP), the coagulant plays a significant role in neutralising these particles' electrostatic repulsive forces. The coagulant's metal ion attracts these negatively charged particles and coagulates with them to form small-sized flocs. Mainly aluminium-based coagulants, e.g., alum and poly-aluminium chloride (PAC), are widely used worldwide. Of all available coagulants, PAC is the most economical and convenient in terms of lower dose concentration, less sludge generation, high durability and rapid flocculation (Van Benschoten & Edzwald 1990; Gregory & Dupont 2001; Aguilar et al. 2002; Yu et al. 2007; Yang et al. 2011; Kumar & Balasundaram 2017).
The PAC coagulation mechanism depends on various raw water quality parameters, including water chemistry, the pH of the water (Vepsalainen et al. 2012; Wang et al. 2014) and the coagulant dose (Yu et al. 2010). Charge neutralization (Zhao et al. 2011), electrostatic patch (Popa et al. 2010), double-layer compression (Edzwald & Haarhoff 2011), sweep coagulation (Packham 1965; Zhao et al. 2011) and bridge aggregation (Wang et al. 2002; Wu et al. 2007; Ye et al. 2007) are the commonly studied mechanisms (Lin et al. 2008; Wei et al. 2015). The activity of PAC in water is controlled by various factors, including the environmental temperature, the pH of the raw water, and the types of other ions and their concentrations. At high environmental temperatures, larger amounts of less stable phases are formed, leading to enhanced residual Al in treated water. Similarly, solution pH also affects the types of phases formed in the water after addition. Residual Al in treated water beyond acceptable limits leads to various health problems. In addition, it creates various operational problems during water distribution, including loss of hydraulic capacity, increased turbidity and reduced disinfection efficiency. Therefore, there is an urgent need to control the residual Al concentration in the treated water.

In conventional WTPs, various processes are involved, including aeration, pre-chlorination, coagulation, flocculation, sedimentation or clarification, filtration by sand media and post-chlorination. Coagulation, flocculation, and clarification play a vital role in a WTP's efficiency and efficacy (Camp 1946; Swamee & Tyagi 1996; Cripps & Bergheim 2000). They affect both the quality and quantity of the treated water. They also account for the largest share of a WTP's capital cost during the plant's execution, typically around 1/4th to 1/3rd of the total cost of the WTP (Heikal et al. 2017). Various types of technologies are in use for clarification and flocculation, including the well-known and widely installed clariflocculator- and pulsator-based technologies. Flocculation followed by sedimentation and floc removal from the treatment unit is a function of the type of technology used. In view of the above, it is essential to optimize the settling tank's efficiency to reduce the overall cost of water treatment.

Different sedimentation technologies have different flocculation mechanisms. In a clariflocculator, flocculation and clarification coincide in a single tank. The raw water, along with the added coagulant, enters the inner flocculation tank. Continuously moving paddles help to grow the flocs. The heavy flocs settle to the bottom, and clear water flows upwards into the outer clarifier zone. In the clarifier, the floc-free water enters the peripheral launder through the peripheral weir and is sent to the sand filtration media. In pulsator-based clarifiers, the developed floc is separated from the water by a thick blanket of floc. After adding coagulant, water enters directly into the clarifier's bottom and is distributed uniformly through perforated pipes. Water along with flocs moves upward through the sludge blanket, where the flocs are separated from the water. The clear water then enters the launders, channels situated at the top of the pulsator to collect the clear water, and is then sent to the sand filtration media for further filtration.
The sludge blanket increases in size due to the continuous accumulation of floc from the water and, after reaching a certain height, goes into the concentrator. Both technologies have their own merits and demerits. Clarifier-based technology is easy to operate, requires very minimal maintenance, and is an economically viable process. Continuous change in water chemistry does not create any hurdle in operation. The limitation of clarifier-based technology is that it requires more space. Pulsator-based technology can treat a large quantity of water due to its compact design, as coagulation and flocculation occur in one unit. However, it is susceptible to changes in water chemistry, including turbidity load. The sludge blanket gets disturbed by changes in water chemistry and environmental temperature (Kawakami et al. 2016).

The present work investigated the effect of different basicities of PAC on various physicochemical properties of water, such as pH, zeta potential, and residual aluminium concentration. In addition, the change in zeta potential after incremental addition of PAC coagulant is explored as a tool to decide the optimum dose of coagulant. The effect of gradual addition of coagulant on the residual Al of treated water is also discussed in detail. The pH of treated water decreases with the incremental addition of PAC, while an increase in zeta potential and residual aluminium concentration in treated water was evidenced. The change in pH after PAC addition is responsible for deciding the coagulation mechanism and the efficiency of the coagulation process. Although both the conventional clariflocculator and the advanced pulsator-based clariflocculator are used worldwide for water treatment, no theoretical or experimental study has been reported regarding their real-time comparison. The stated comparative study is unique in nature, as getting different types of WTPs (clariflocculator and pulsator) installed on the same raw water source is rare. During our study, both WTPs were in operation at their total capacities of 125 MLD and 600 MLD and were catering to the drinking water needs of selected cities in various districts of Rajasthan, including the capital of the state. The results of the various experiments presented in the manuscript are real-time data collected during their 24×7 operation at full-scale treatment capacity.

MATERIALS AND METHODS

Poly-aluminium chloride of High Basicity (HB) and Medium Basicity (MB) was collected from the Surajpura and Kekri WTPs, Rajasthan, India, respectively. The coagulants were supplied by industry, were of commercial-grade quality, and were used as received during all laboratory experiments. Raw water used for all laboratory experiments was collected from the Bisalpur dam.

Coagulation-flocculation experiments (jar test) of water (ASTM D2035)

The jar test experiments were performed as per the standard procedure depicted in ASTM D2035 (ASTM 2003). A Phipps and Bird jar tester was used throughout the experiments. Multiple stirrers with continuous speed variation from 20 to 200 RPM were used. 1,000 mL of turbid water was taken in each of six beakers of one-litre capacity, and predetermined doses of PAC were added and mixed at 120 rpm for 1 minute for flash mixing. The stirring speed was then reduced to 50 rpm for 20 minutes, and the developed flocs were allowed to settle for 15 minutes. Without disturbing the bottom layer, samples were collected with the help of a pipette from 3 cm below the surface for further analysis.
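The jar test above follows a fixed mixing profile. A minimal sketch of encoding that schedule for reproducible bench runs is shown below; the phase names and this helper are illustrative conveniences, not part of ASTM D2035.

```python
from dataclasses import dataclass

@dataclass
class JarTestPhase:
    name: str
    rpm: int          # paddle speed; 0 means paddles off (settling)
    minutes: float

# Mixing profile used in the experiments described above.
JAR_TEST_SCHEDULE = [
    JarTestPhase("flash mix", 120, 1),
    JarTestPhase("slow mix (flocculation)", 50, 20),
    JarTestPhase("settling", 0, 15),
]

def describe_run(doses_mg_per_l):
    """Print the per-beaker protocol for a six-jar run."""
    for jar, dose in enumerate(doses_mg_per_l, start=1):
        print(f"Jar {jar}: 1,000 mL raw water + {dose} mg/L PAC")
    for phase in JAR_TEST_SCHEDULE:
        print(f"  {phase.name}: {phase.rpm} rpm for {phase.minutes} min")
    print("  sample with pipette 3 cm below surface, avoid bottom layer")

describe_run([0, 5, 10, 15, 25, 50])
```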
Analysis of treated water for various physicochemical parameters

Various physicochemical parameters of the water samples were analysed by standard methods (APHA 2005). Metal ion concentrations in the water samples were analysed using an Inductively Coupled Plasma-Mass Spectrometer (ICP-MS). All samples were acidified to pH 2-3 using ultrapure nitric acid immediately after the experiments and passed through a membrane filter prior to analysis on the ICP-MS. The chemical analysis of both as-received PACs was done by ICP-MS, as shown in Table 1. All experiments were performed in triplicate, and the mean of the results is reported.

Details of the study location

The Bisalpur dam is a gravity-based dam on the Banas River near Deoli in Tonk district, Rajasthan state, India. The dam was constructed in 1999 for irrigation and drinking water supply, with a total storage capacity of 1095 MCM. The reservoir is the source of drinking water for the Ajmer, Jaipur and Tonk districts of Rajasthan [Wikipedia]. Two WTPs were constructed on the dam: one at Kekri with a total treatment capacity of 274 MLD to cater to the drinking water demand of Ajmer district, and another at Surajpura with a treatment capacity of 600 MLD to supply drinking water to the Jaipur and Tonk districts. The raw water for both WTPs is collected from the same point, pumped into different pipelines, and transported to the Kekri and Surajpura WTPs. The Kekri WTP is around 40 km from the Bisalpur dam and has two treatment units (old and new), for which the raw water is carried from the Bisalpur dam in two separate pipelines. The flocculator at the Kekri WTP is based on clariflocculator technology. The Kekri WTP has two different setups with four clariflocculators in each setup, followed by filtration of the treated water by the rapid sand method. The WTP also employs pre- and post-treatment disinfection by chlorination. The Surajpura WTP is around 10 km from the Bisalpur dam. The flocculators at the Surajpura WTP are based on pulsator technology. The plant has 12 working pairs of pulsators with one standby pair and 26 sand filtration beds. Aeration and pre- and post-chlorination are also part of treatment at the WTP. The tentative locations of the reservoir and the WTPs are shown in Figure 1.

RESULTS AND DISCUSSION

The turbidity removal efficiency of high basicity PAC (PAC-HB) and medium basicity PAC (PAC-MB) on the raw water collected from the Bisalpur dam was studied in detail through various laboratory experiments (Section 3.1). This helps in understanding the mechanism of turbidity removal by the PAC used and in comparing the performance of both WTPs.

Effect of PAC dosage and basicity on various properties of water

The optimum dose of PAC was determined from the residual turbidity and zeta potential of the raw water after incremental coagulant addition. The raw water turbidity is between 2 and 3 NTU. It reduces with increasing PAC dosage, as shown in Figure 2. It reaches a plateau and does not vary much with coagulant addition beyond an optimum dose of 10-15 mg/L. Enhancing PAC dosing even beyond 50 mg/L does not reduce the turbidity below 0.5 NTU, which may be due to suppressed PAC hydrolysis after the optimum dose. The slight turbidity reduction at higher doses is mainly due to the trapping of particles in PAC precipitates. The various factors responsible for this behaviour are described in detail below.
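Before detailing those factors, a minimal sketch of reading the optimum dose off a jar-test series is given below, using hypothetical readings shaped like the Figure 2 data. The 1 NTU target and the +5 mV strong-coagulation threshold (discussed next) follow the text; the numbers themselves are illustrative.

```python
# Hypothetical jar-test series: dose (mg/L), residual turbidity (NTU),
# zeta potential (mV). Shaped like Figure 2, not the measured data.
doses = [0, 5, 10, 15, 25, 50]
turbidity = [2.8, 1.6, 0.9, 0.7, 0.65, 0.6]
zeta = [-22.1, -12.0, -2.5, 4.8, 9.5, 15.0]

TURBIDITY_LIMIT = 1.0   # NTU, acceptable limit used in the paper
ZETA_TARGET = 5.0       # mV, strong coagulation-flocculation threshold

def optimum_dose(doses, turbidity, zeta):
    """First dose meeting the turbidity limit; first dose crossing +5 mV."""
    dose_turb = next((d for d, t in zip(doses, turbidity) if t < TURBIDITY_LIMIT), None)
    dose_zeta = next((d for d, z in zip(doses, zeta) if z >= ZETA_TARGET), None)
    return dose_turb, dose_zeta

d_turb, d_zeta = optimum_dose(doses, turbidity, zeta)
print(f"Turbidity < {TURBIDITY_LIMIT} NTU first reached at {d_turb} mg/L")
print(f"Zeta >= +{ZETA_TARGET} mV first reached at {d_zeta} mg/L")
# Dosing beyond the larger of the two mainly risks restabilisation and residual Al.
```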
The coagulant precipitate decreases with increasing PAC dosage. The optimum PAC coagulant precipitation occurs in the pH range of 7-9, and it is suppressed below and above the optimum pH. The incremental addition of coagulant alters various factors in the water chemistry, including zeta potential and pH. The zeta potential of the raw water is −22.1 mV. Due to the adsorption of polar matrix ions on the surface of colloidal solids, the zeta potential of surface water is generally negative (Salopek et al. 1992). The zeta potential of the water after adding the coagulant increases continuously and nearly reaches zero at the optimum dose of 10-15 mg/L. The increase in zeta potential with dose concentration occurs because the hydrolysis product of PAC is positively charged. Indirectly, the change in zeta potential also gives insight into the coagulant's optimum dose based on the charge neutralisation mechanism. As per ASTM, the zeta potential of the solution should be +5 mV for strong coagulation-flocculation (Letterman 1999). Over and above the optimum coagulant dose, charge neutralisation occurs and the turbidity does not alter much, as the precipitated coagulant flocs combine with each other through the mosaic effect, similar to polymeric bridging (Dentel 1991). The gradual increase in zeta potential after the optimum dosage is mainly because of charge reversal of the developed flocs due to the restabilisation mechanism (Yao 1987). Comparatively, HB-PAC and MB-PAC do not show much difference in turbidity removal behaviour. In both cases, the zeta potential crosses its limit of +5 mV at PAC doses of 10-15 mg/L, as shown in Figures 2(a) and 2(b). This indicates that the optimum dose concentration for both PACs is 10-15 mg/L; above that, the mosaic effect dominates and the precipitates start combining with each other. At the same time, charge neutralization occurs slightly earlier for HB-PAC than for MB-PAC. Ye et al. (2007) also found that the zeta potential for high basicity PAC increases more quickly than for less basic PAC. This may be linked to the change in solution pH after PAC addition at the same dosage, as shown in Figure 3. The mechanism of turbidity removal by PAC is influenced by various factors, including the pH of the solution, the PAC dose concentration, the temperature of the solution, and other water quality parameters. The pH of the solution decreases with the addition of PAC, mainly due to PAC hydrolysis (Figures 2(c) and 2(d)). The high basicity PAC (HB-PAC) lowers the pH of the water to a greater extent than the medium basicity PAC (MB-PAC). This could be one of the significant factors for the quicker increase in zeta potential in the case of HB-PAC compared to MB-PAC. The solution's pH plays a vital role in the coagulation mechanism and ultimately in the coagulant's performance. The optimum PAC precipitation occurs at pH 7-9, and it is affected below and above the optimum pH. The residual Al concentration in the treated water increases over and above the optimized PAC dose concentration. Comparatively, more residual Al is observed in the case of HB-PAC than MB-PAC (Figure 2). The enhanced residual Al is well explained by the pH reduction of the solution with PAC dosing and the change in coagulation mechanism at lower pH. This is well defined by the pC-pH diagram of Al, as shown in Figure 4. Below and above circumneutral pH, the solubilisation of Al in treated water increases due to the formation of different Al phases.
Al forms various monomeric (Al(OH)^2+, Al(OH)2^+, Al(OH)4^-), polymeric (Al2(OH)2^4+, Al2(OH)5^+) and amorphous species. All the species formed have different solubility constants at a specified solution pH. At circumneutral solution pH, the polymeric form Al(OH)3 is the most stable and dominant species, whereas with decreasing pH, Al(OH)2^+, Al(OH)^2+ and Al^3+ begin to dominate the speciation. The Al(OH)3 formed is the most stable species, has a low solubility constant, and is responsible for neutralizing the negative charge and destabilizing non-settleable turbidity particles by the coagulation-flocculation mechanism (Holt et al. 2005; Bensdoka et al. 2008). The mechanism at optimum pH is charge neutralisation. In the pH range of 6-8, the main coagulation zones occur, i.e., stabilisation, charge neutralisation, destabilisation, restabilisation and sweep, as shown in Figure 3. At pH 10, the restabilisation step is missed, whereas below pH 6, the typical sweep zone is missed (Wei et al. 2005). Of all the mechanisms, PAC's best coagulation efficiency occurs mainly by the charge neutralisation or sweep coagulation mechanism. The type and concentration of the specified species are a function of the final solution pH. At a low pH of around 6.0, charge neutralisation by PAC generally starts at an Al concentration of around 2.2 mg/L, whereas with increasing pH in the range of 8-10 it requires an Al concentration as low as 0.8 mg/L. The Al concentration in treated water depends on the kind of species formed, as shown in Figure 3.

Effect of different coagulation technologies in the WTPs

The effect of the different sedimentation technologies used in the Kekri and Surajpura WTPs on their turbidity removal efficiency was studied in detail. In the present study, the raw water source and water collection point for both WTPs are the same. The only difference is that the water is transported 42 km to the Kekri WTP and 10 km to the Surajpura WTP from the dam through separate pipelines. The turbidity and pH variations of the Bisalpur dam water over the year are given in Table 2. The turbidity and pH of the water do not show much variation with seasonal change. The raw water collected from the Kekri and Surajpura WTPs showed similar behaviour towards the added PAC coagulants in terms of turbidity removal during the laboratory experiments. A PAC coagulant dose of 10-15 mg/L is required to reduce the turbidity of the treated water below the acceptable limit of 1 NTU. During actual operation, the Kekri WTP was found to be operating at an MB-PAC dose of 5-10 mg/L to remove turbidity below the acceptable limit, whereas at the Surajpura WTP an HB-PAC dose of 21 mg/L was in use to remove the turbidity below the acceptable limit of 1 NTU (Table 3). The PAC dose in use at the Surajpura WTP was around 3-4 times more than that at the Kekri WTP. For the same raw water, the two WTPs were using different PAC dosing. This difference in the PAC dose requirement may be attributable to the different types of sedimentation technology used at the two WTPs. The other difference was the type of PAC: at the Kekri WTP the medium basicity PAC (MB-PAC) was in use, whereas at Surajpura it was high basicity (HB-PAC). To compare both WTPs, it was decided to change the PAC type from HB-PAC to MB-PAC at the Surajpura WTP. Shifting an operating WTP of around 600 MLD from HB-PAC to MB-PAC was a big and challenging task.
Considering the sensitivity of the situation and the intention of not disturbing the WTP much, initially, on the safer side, a dose of 25 mg/L MB-PAC was adopted while keeping all other WTP parameters constant. Pulsator-based WTPs make such a shift more complicated due to the highly sensitive floc blanket in the clarifier, which can get disturbed by small changes in water chemistry. During this transition, changes occurring in the entire WTP and in the treated water were closely monitored at intervals of 1 h. Even after 12 hours of changing the PAC type, no changes in the WTP, including the pulsator outlet and treated water turbidity, were seen, as depicted in Figure 5. The floc size and its movements in the pulsator and at the pulsator outlet were not altered by changing the PAC type. Assuming the WTP was stable at 25 mg/L, the dose of MB-PAC was reduced from 25 mg/L to 21 mg/L, and the WTP was monitored for any changes in the floc size, quantity, appearance and settling behaviour in the pulsator for 30 h post-dose change. After reducing the MB-PAC dose from 25 mg/L to 21 mg/L, slight variation and fluctuation in the pulsator outlet's turbidity were observed. Small-sized flocs were seen moving on the pulsator's outer surface and in the launders. These flocs were continuously in motion, coming towards the water's surface and, in small quantities, entering the launders along with the water in almost all chambers. This shows that the pulsator's sludge blanket gets disturbed by changes in PAC type and dose concentration, and indicates that the pulsator-based clarifier is highly sensitive to small changes in operating conditions and water chemistry. These small-sized flocs from the launders were further arrested in the sand filtration media, and ultimately the treated water turbidity was unaltered (Figure 4). The overall study reveals that the Surajpura WTP was functioning well at the HB-PAC dose of 21 mg/L, and that a small change in the PAC dose and chemistry leads to disturbances in the sludge blanket in the pulsator. It is concluded that the pulsator-based clarifier is more sensitive than the conventional clariflocculator. Higher coagulant dose usage leads to higher operation and maintenance costs of water treatment. Shifting from HB-PAC to MB-PAC at the same dose of 21 mg/L leads to a 31.4% coagulant cost reduction, whereas a further decrease in the MB-PAC dose to 10 mg/L leads to a 67.3% coagulant cost reduction, as shown in Table 4.
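The cost figures above follow directly from dose × unit price for a fixed treated-water volume. A minimal sketch of that arithmetic is shown below; the unit-price ratio (MB-PAC costing about 68.6% of HB-PAC per unit mass) is back-calculated from the paper's 31.4% figure and is an assumption, not a quoted price.

```python
# Back-calculated assumption: switching HB -> MB at the same dose cut cost
# by 31.4%, so the MB/HB unit-price ratio is taken as 1 - 0.314 = 0.686.
PRICE_RATIO_MB_TO_HB = 0.686

def cost_reduction(dose_hb_mg_l, dose_mb_mg_l, price_ratio=PRICE_RATIO_MB_TO_HB):
    """Percent coagulant cost reduction vs. the HB-PAC baseline.

    Coagulant cost scales as dose * unit price for a fixed treated volume.
    """
    relative_cost = (dose_mb_mg_l * price_ratio) / dose_hb_mg_l
    return (1 - relative_cost) * 100

# Same dose, cheaper coagulant: ~31.4% reduction.
print(f"{cost_reduction(21, 21):.1f}%")
# Cheaper coagulant at the jar-test optimum dose of 10 mg/L: ~67.3% reduction.
print(f"{cost_reduction(21, 10):.1f}%")
```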
CONCLUSION

The effect of different sedimentation technologies on the efficiency of conventional water treatment plants (WTPs) has been investigated in detail. The well-known, widely accepted and installed clariflocculator- and pulsator-based sedimentation technologies were chosen for the comparative study. The effect of different basicities of polyaluminium chloride (PAC) coagulant on removing turbidity from surface water was evaluated by laboratory experiments and in two working WTPs of different capacities. The overall conclusion is that the different basicities of PAC do not impart much effect on the turbidity removal efficiency, especially at low turbidity of around 3-5 NTU. Most of the turbidity is removed at a dose concentration of 10-15 mg/L. The concentration of soluble Al and the zeta potential increase with increasing PAC dose in the treated water. Sedimentation is the most crucial part in deciding the economic feasibility of conventional WTPs. The different sedimentation technologies behave differently with an added coagulant and require dissimilar amounts of coagulant to bring the turbidity below the acceptable limit. The clariflocculator-based WTP requires a PAC dose of only 5 mg/L to remove turbidity below the acceptable limit of 1 NTU. In contrast, the pulsator-based WTP requires a PAC dose as high as 21-25 mg/L to achieve the same purpose. The floc blanket in the pulsator-based WTP gets disturbed by slight variations in the coagulant dose and type. This shows that the clariflocculator-based WTP is quite robust and economically viable, whereas pulsator-based WTPs are more sensitive to slight changes in working parameters and water chemistry.

DATA AVAILABILITY STATEMENT

All relevant data are included in the paper or its Supplementary Information.
Chemical Composition and Biological Activities of Hedychium coccineum Buch.-Ham. ex Sm. Essential Oils from Kumaun Hills of Uttarakhand

Hedychium coccineum Buch.-Ham. ex Sm. is a perennial rhizomatous herb belonging to the family Zingiberaceae. The aim of the present study was to compare the chemical composition and biological activities of H. coccineum rhizome essential oil (HCCRO) and H. coccineum aerial part essential oil (HCCAO). The plant material was subjected to hydro-distillation using Clevenger's apparatus in order to obtain the volatile oil, which was analyzed for its chemical constituents using GC-MS. The comparative study of the rhizome and aerial part essential oils of H. coccineum showed that (E)-nerolidol (15.9%), bornyl acetate (13.95%), davanone B (10.9%), spathulenol (8.9%), and 1,8-cineol (8.5%) were the major contributors to the HCCRO, while 7-hydroxyfarnesen (15.5%), α-farnesene (11.1%), α-pinene (10.9%), spathulenol (7.7%), and β-pinene (6.8%) were the major constituents of the HCCAO. Both essential oils were studied for their biological activities: nematicidal, insecticidal, herbicidal, antifungal, and antibacterial. The essential oils exhibited significant nematicidal activity against Meloidogyne incognita, insecticidal activity against Spodoptera litura, moderate herbicidal activity against R. raphanistrum subsp. sativus, and good antifungal activity against Fusarium oxysporum and Curvularia lunata. The essential oils were also tested for antibacterial activity against Staphylococcus aureus and Salmonella enterica serotype Typhi; both oils showed good to moderate activity against the tested pathogens. The significant nematicidal, insecticidal, herbicidal, antifungal, and antibacterial activities of both essential oils might be helpful for the development of environmentally friendly pesticides that could be an alternative to synthetic pesticides in the future.

Principal Component Analysis

Principal Component Analysis (PCA) is one of the best multivariate statistical methods to describe the most significant aspects of a dataset. PCA pattern recognition of the two essential oils was used to evaluate the phytochemical variability due to the type of plant portion from which the essential oils were obtained. The collective contribution rate of variance of the first two principal components (PC1 and PC2) obtained from the PCA method was 100% for the chemical compositional differences, which describes most of the variance information. Therefore, these two PCs defined the total compositional variability in the essential oils. PC1 contributed 62.79% of the total variance and was positively correlated with α-farnesene, α-pinene, β-pinene, spathulenol, and 7-hydroxyfarnesen, whereas the contribution of PC2 to the variance was 37.21%, positively correlated with β-eudesmol, γ-eudesmol, 1,8-cineol, davanone B, bornyl acetate, and (E)-nerolidol. The Principal Component Analysis (PCA) of HCCAO and HCCRO is shown in Figure 3.
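A minimal sketch of the PCA step described above, run on a toy composition table, is given below; the compound percentages are placeholders standing in for the full GC-MS composition data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy composition matrix: rows = oil samples, columns = compounds (%).
compounds = ["a-farnesene", "a-pinene", "(E)-nerolidol", "bornyl acetate", "davanone B"]
X = np.array([
    [11.1, 10.9, 0.5, 0.3, 0.2],    # HCCAO replicate 1 (placeholder values)
    [10.8, 11.2, 0.7, 0.4, 0.3],    # HCCAO replicate 2 (placeholder values)
    [0.4, 0.6, 15.9, 13.95, 10.9],  # HCCRO replicate 1 (placeholder values)
    [0.5, 0.7, 15.5, 14.1, 10.6],   # HCCRO replicate 2 (placeholder values)
])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print("Explained variance ratio:", pca.explained_variance_ratio_)
# Loadings show which compounds drive each PC, as in the text's PC1/PC2 lists.
for pc, loadings in enumerate(pca.components_, start=1):
    top = sorted(zip(compounds, loadings), key=lambda cl: -abs(cl[1]))[:3]
    print(f"PC{pc} top loadings:", top)
```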
Effect on Mortality of Second-Stage Larvae of M. incognita

The nematicidal activity of HCCAO and HCCRO was tested on second-stage juveniles (J2) of M. incognita for durations of 24, 48, 72, and 96 h. Percent mortality for both samples was found to increase with an increase in concentration as well as in incubation time with the essential oils. After 96 h, HCCAO was found to be most effective at the 1 µL/mL dose level, with 41.33% inhibition of larval mobility, followed by 0.5 µL/mL with 30.66% inhibition. HCCRO was also found to be most effective at the 1 µL/mL dose level, with 61.66% inhibition of larval mobility, followed by 0.5 µL/mL with 52.66% inhibition. Silva-Aguayo et al. [19] reported significant nematicidal activity of an essential oil (from Peumus boldus) against Haemonchus contortus at similar levels of concentration (0.25, 0.5, and 1.0 µL/mL). The overall activity of HCCRO over the durations of 24, 48, 72, and 96 h was observed to be higher than that of HCCAO. HCCAO and HCCRO exhibited significant variation in immobility effects against M. incognita larvae. The LC50 values of HCCAO at 24, 48, 72, and 96 h after treatment were 0.26, 0.13, 0.06, and 0.003%, and the LC50 values of HCCRO were 2.34, 6.92, 2.33, and 0.23%, respectively. The detailed experimental observations of percentage mortality and the LC50 values of HCCAO and HCCRO for nematicidal activity against second-stage juveniles of M. incognita are presented in Tables 2 and 3, respectively.

Effect on Egg Hatchability of M. incognita

HCCAO and HCCRO showed a strong inhibitory effect on hatching from eggs in a concentration-dependent manner. The rate of egg hatching was found to be directly proportional to the exposure time period and inversely proportional to the oil sample concentration. In comparison with HCCAO, HCCRO had a stronger inhibitory effect on M. incognita in terms of egg hatching. After 96 h, the maximum rate of egg hatching in HCCAO (55.00%) and HCCRO (22.66%) was observed at a dose level of 0.25 µL/mL, while the minimum rate of egg hatching in HCCAO (17.66%) and HCCRO (11.33%) was observed at 1 µL/mL. Therefore, the maximum egg hatching inhibition was observed with HCCRO at both the lowest and highest concentration levels. It was discovered that increasing the concentration of HCCAO and HCCRO delayed the start of egg hatching. The IC50 values for egg hatching are presented in Tables 4 and 5.

It has been reported that β-dihydroagarofuran, kessane, elemol, (E)-nerolidol, davanone B, spathulenol, 7-hydroxyfarnesen, rosifoliol, T-muurolol, linalool, and E-isovalencenol were among the most oxygenated sesquiterpenoids observed as main components in plant essential oils, and showed egg-hatching and nematicidal activity in terms of mortality against the root knot nematode, M. incognita [20]. Oxygenated sesquiterpenoids ((E)-nerolidol, davanone B, spathulenol, 7-hydroxyfarnesen, globulol, and τ-muurolol) have been reported to efficiently inhibit nematode egg hatching and cause mortality, which indicates that essential oils with a high content of these compounds could be useful as natural nematicides for the control of M. incognita. The presence of a single major compound or the synergetic effects of the major and minor constituents of the essential oils might be responsible for the nematicidal activity of HCCAO and HCCRO towards the egg hatching and immobility of second-stage larvae of M. incognita [21,22].

Insecticidal Activity

The insecticidal activity of the essential oils from the rhizome and aerial part of H. coccineum was estimated against Spodoptera litura (cotton cutworm) using the leaf-dip method. Fourth-instar larvae of S. litura were exposed to different concentrations of the essential oils to test the activity. The experiment was conducted in triplicate, and the total number of test insects per treatment was five. A Tween-20 (1.0%) water solution was taken as the control. The results showed that HCCRO was more effective than HCCAO and showed good mortality in a concentration-dependent manner (Table 6). During the experiment, no further mortality was observed after 72 h.
The mortality percentage of S. litura insects treated with the essential oils of the rhizome and aerial part of H. coccineum is presented in Table 6. The LC50 values of HCCAO were 0.007, 0.006, and 0.005%, and the values of HCCRO were 0.007, 0.006, and 0.005% at 24, 48, and 72 h, respectively. The LC30, LC50, and LC90 values of the essential oils from the rhizome and aerial part of H. coccineum are presented in Table 7. Significant insecticidal activity has been reported for an essential oil (Mentha pulegium) at concentrations similar to the present investigation (10-100 µL) in fumigation conditions against Bruchus rufimanus [23]. The insecticidal efficacy of H. coccineum rhizome essential oil has also been reported against three insects, Stephanitis pyrioides, Aedes aegypti, and Solenopsis invicta [9]. The toxicity of the essential oils against the test insect might be due to the presence of various terpenoids found in the essential oils, or even to the interaction of the major and minor components present in the botanicals.

Inhibition of Seed Germination

The mean percent seed germination inhibition of the essential oils from the aerial part and rhizome of H. coccineum at different concentrations (50-200 µL/mL) is depicted in Table 8. The essential oils possess moderate herbicidal activity in a dose-dependent manner. The herbicidal activity of the rhizome and aerial part essential oils of H. coccineum at the highest concentration (200 µL/mL) was found in the order HCCRO (96%) > HCCAO (92.00%). Essential oils from Limnophila indica have also been reported to have significant herbicidal activity at similar treatment concentrations (50-200 µL/mL) [24]. IC50 was calculated at the time when 100% germination was achieved in the control and is used to compare the relative herbicidal activities of all the samples, as the lower the herbicidal activity, the higher the IC50 value. The order in which the activity was observed in terms of IC50 was as follows: HCCRO (62.78 ± 5.86 µL/mL) > HCCAO (88.09 ± 3.42 µL/mL) (Table 9).

[Table note: Pendimethalin (standard herbicide) showed 100 ± 0.00% inhibition at all tested concentrations. HCCAO - Hedychium coccineum aerial part essential oil; HCCRO - Hedychium coccineum rhizome part essential oil; values are means of three replicates ± SD. Within a column, mean values followed by the same letter are not significantly different according to Tukey's test (p < 0.05).]

It was observed that HCCRO exhibited more herbicidal activity than HCCAO. Herbicidal activity of the Hedychium spicatum rhizome essential oil has also been reported against radish (Raphanus raphanistrum) seeds in a previous study [25]. It was inferred that the herbicidal activity was due to the presence of various bioactive components such as camphor, 1,8-cineole, isoborneol, and linalool in the essential oil, or might be a possible synergistic effect of the minor as well as major compounds present in the H. coccineum rhizome and aerial part essential oils.
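The LC50 and IC50 values quoted throughout were obtained from dose-response data. A minimal sketch of one common approach (a probit-style GLM on log-dose vs. proportion responding) is shown below, using made-up germination-inhibition data rather than the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

# Made-up dose-response data: concentration (µL/mL) and proportion inhibited.
conc = np.array([50.0, 100.0, 150.0, 200.0])
inhibited = np.array([0.30, 0.55, 0.75, 0.95])
n = 30  # seeds per dose (3 replicates x 10 seeds)

# Probit regression of response on log10(concentration).
X = sm.add_constant(np.log10(conc))
successes = np.round(inhibited * n)
model = sm.GLM(
    np.column_stack([successes, n - successes]),
    X,
    family=sm.families.Binomial(link=sm.families.links.Probit()),
).fit()

# IC50: the dose at which the probit linear predictor equals zero.
b0, b1 = model.params
ic50 = 10 ** (-b0 / b1)
print(f"Estimated IC50 = {ic50:.1f} µL/mL")
```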
Inhibition of Root Length

Inhibition of root length was assessed as a measure of herbicidal activity. The percent root length inhibition of germinated seeds was calculated when 100% germination was achieved, over the concentration range of 50, 100, 150, and 200 µL/mL. In the case of HCCRO, the percent inhibition of root length was recorded as 34.44%, 53.33%, 67.77%, and 84.07% from the lowest to the highest concentration, while in the case of HCCAO, the percent inhibition was measured as 27.03%, 56.29%, 73.33%, and 90.37%, respectively, from lower to higher concentrations, as represented in Table 10. IC50 was calculated when 100% germination was achieved in the control, and was used to compare the relative herbicidal activities of all the samples in terms of inhibition of root growth, as the lower the herbicidal activity, the higher the IC50 value. The order in which the activity was observed was as follows: HCCRO (94.68 ± 2.74 µL/mL) > HCCAO (96.85 ± 0.38 µL/mL) (Table 11).

[Table note: Pendimethalin (standard herbicide) showed 100 ± 0.00% inhibition at all tested concentrations. HCCAO - Hedychium coccineum aerial part essential oil; HCCRO - Hedychium coccineum rhizome part essential oil; values are means of three replicates ± SD. Within a column, mean values followed by the same letter are not significantly different according to Tukey's test (p < 0.05).]

Inhibition of Shoot Length

Inhibition of shoot length was also assessed as a measure of herbicidal activity. The percent shoot length inhibition was calculated when 100% germination was achieved, at concentrations of 50, 100, 150, and 200 µL/mL. In the case of HCCRO, the percent inhibition of shoot length was recorded as 40%, 47.77%, 74.44%, and 99.62% from the lowest to the highest concentration, while in the case of HCCAO, the percent inhibition was measured as 34.44%, 52.22%, 66.66%, and 81.11%, respectively, from lower to higher concentrations, as represented in Table 12. IC50 was calculated when 100% germination was achieved in the control, and was used to compare the relative herbicidal activities of all the samples in terms of inhibition of shoot growth, as the lower the herbicidal activity, the higher the IC50 value. The order in which the activity was observed in terms of IC50 values was as follows: HCCRO (87.44 ± 2.98 µL/mL) > HCCAO (133.06 ± 17.22 µL/mL) (Table 13).

[Table note: Pendimethalin (standard herbicide) showed 100 ± 0.00% inhibition at all tested concentrations. HCCAO - Hedychium coccineum aerial part essential oil; HCCRO - Hedychium coccineum rhizome part essential oil; values are means of three replicates ± SD. Within the dataset, mean values with the same letter in superscript are not significantly different, based on Tukey's test (p < 0.05).]

Antifungal Activity

The antifungal activity of HCCAO and HCCRO was evaluated against two phytopathogenic fungi (Fusarium oxysporum and Curvularia lunata) at varied doses (50-750 µL/mL). The antifungal activity of the essential oils is shown in Table 14. The essential oils exhibited good antifungal activity by inhibiting the mycelial growth of the pathogenic fungi. HCCRO (88.1%) had the maximum antifungal activity against F. oxysporum, followed by HCCAO (83.3%), while HCCAO (84.1%), followed by HCCRO (74.8%), had the strongest antifungal activity against C. lunata at the higher concentration (750 µL/mL). The antifungal activity of HCCAO and HCCRO was significantly lower compared to the standard fungicide carbendazim (100%), even at the higher concentration (750 µL/mL), against both tested fungi. Antifungal activity was also demonstrated for an essential oil at 50-500 µL/mL in a previous study [26].
Several biologically active compounds, such as (E)-nerolidol, davanone B, spathulenol, limonene, (E)-caryophyllene, bicyclogermacrene, and 7-hydroxyfarnesen, have been reported to account for the antifungal properties of essential oils tested against Colletotrichum acutatum, C. fragariae, and C. gloeosporioides [9]. Studies have confirmed that Hedychium essential oil, which is rich in (E)-nerolidol, α-farnesene, α-pinene, and β-pinene, shows potential antifungal activity against Candida albicans and Fusarium oxysporum [27]. The presence of individual major compounds or the synergetic effect of the major/minor constituents of the essential oils might be responsible for the antifungal activity of HCCAO and HCCRO towards F. oxysporum and C. lunata.

Antibacterial Activity

The emerging antibiotic resistance in bacteria and the high cost of developing novel antimicrobial drugs have encouraged researchers to search for novel, effective and economically viable broad-spectrum natural products with different modes of action. Essential oils and their chemical constituents in pure form have been reported to have effective action against resistant microbial strains [28-30]. Therefore, in this study, we explored the antibacterial activity of HCCRO and HCCAO using a zone of inhibition assay against the Gram-positive bacterium Staphylococcus aureus and the Gram-negative bacterium Salmonella enterica serovar Typhi. The spot diffusion method confirmed that both HCCAO and HCCRO showed antibacterial activity against both bacterial pathogens. However, HCCRO showed a larger zone of inhibition against both the Gram-positive and Gram-negative pathogens. Of these strains, Gram-positive Staphylococcus aureus was more susceptible to HCCRO than Gram-negative Salmonella enterica serovar Typhi, with average zones of inhibition of 25 mm and 6 mm, respectively. Staphylococcus aureus is a Gram-positive opportunistic pathogenic bacterium which causes nosocomial and community infections such as bloodstream infections, pneumonia, skin and soft tissue infections, and bone and joint infections [31]. Salmonella enterica serovar Typhi is a common and clinically significant Gram-negative pathogenic bacterium that causes gastroenteritis and typhoid fever in humans, affecting over 20 million people worldwide and killing 220,000 people each year [32,33]. The results showed that HCCRO had potential antibacterial activity against both bacterial pathogens. The colony forming units (CFU/mL) of Staphylococcus aureus and Salmonella enterica serovar Typhi after treatment with the essential oils from the aerial and rhizome parts of H. coccineum are presented in Table 15.

Determination of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC)

The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) values were determined using the broth dilution method to evaluate the effectiveness of the oils in controlling the bacterial pathogens. The results revealed that in the presence of HCCRO (2.5 µL/100 µL) and HCCAO (2.5 µL/100 µL), reductions of 6.5 and 6 log CFU/mL, respectively, in the growth of Staphylococcus aureus were observed, while the growth was completely inhibited at the higher concentration (5 µL/100 µL). The MIC and MBC values of HCCRO against Staphylococcus aureus were 2.5 µL/100 µL and 5 µL/100 µL, respectively. Meanwhile, in the case of Salmonella enterica serovar Typhi, 3 and 2.3 log reductions in the CFU were observed in the presence of HCCRO and HCCAO, respectively.
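A minimal sketch of the log-reduction arithmetic behind figures like "6.5 log CFU/mL" is given below; the CFU counts are invented solely to show the calculation, not measured values.

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Log10 reduction in viable count relative to the untreated control."""
    return math.log10(cfu_control) - math.log10(cfu_treated)

# Invented counts: a ~6.5-log reduction means the treated count is roughly
# three-million-fold lower than the control.
cfu_control = 1.0e9   # CFU/mL without essential oil
cfu_treated = 3.2e2   # CFU/mL with oil at 2.5 µL/100 µL

print(f"Reduction = {log_reduction(cfu_control, cfu_treated):.1f} log CFU/mL")
# The MIC is then read off as the lowest concentration in the two-fold
# dilution series with no observable growth; the MBC as the lowest with
# no colonies on subculture.
```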
Changes in bacterial cell suppression by the essential oils could be attributed to their chemical components and the volatile nature of these components, or to differences in the composition of Gram-positive and Gram-negative bacterial membranes [34,35].

In Silico PASS Prediction of HCCAO and HCCRO

In silico PASS predictions for the antibacterial, antifungal, and nematicidal activity of selected phytochemical compounds from HCCAO and HCCRO are reported in Table 16. Among the identified compounds, davanone B, α-farnesene, α-curcumene, germacrene D, and (E)-caryophyllene were observed to exhibit acceptable Pa/Pi values, whereas the other compounds were observed to exhibit negligible nematicidal activity as per the PASS prediction. These data support the in vitro nematicidal activity of HCCRO and HCCAO found in the present investigation. From the PASS prediction data, it can be inferred that the nematicidal activity of these essential oils is governed by one of the above-mentioned compounds having acceptable Pa/Pi values, or is the result of the synergistic effect of more than one component present in the essential oil. The volatile compounds exhibited a good Pa/Pi range (0.45 > 0.02). Among the identified compounds, 7-hydroxyfarnesen, bicyclogermacrene, germacrene D, α-farnesene, (E)-caryophyllene, and (E)-nerolidol were found to exhibit acceptable antibacterial effects (in terms of Pa/Pi values). However, some other major compounds, such as β-pinene, 1,8-cineol, borneol, γ-eudesmol, α-curcumene, and β-dihydroagarofuran, were predicted to have comparatively low antibacterial activities. Overall, the PASS prediction supported the antibacterial activity of the HCCAO and HCCRO compounds. The Pa/Pi values of major compounds such as (E)-nerolidol, linalool, α-farnesene, davanone B, limonene, (E)-caryophyllene, bicyclogermacrene, 7-hydroxyfarnesen, and spathulenol for antifungal potential were higher than those of the same compounds for antibacterial activity. The other predicted compounds also exhibited superior antifungal activity. Hence, the PASS prediction supports the high antifungal activities of HCCAO and HCCRO observed here. Therefore, it is supposed that these biological activities of HCCAO and HCCRO are governed by the compounds showing a higher Pa/Pi ratio, or may be a combined effect of more than one compound. PASS - prediction of activity spectra for substances; Pa - probable activity; Pi - probable inactivity.

Essential Oil Isolation

The essential oils from the aerial part and rhizome of H. coccineum were extracted using the hydro-distillation method by subjecting the fresh plant materials (1.2 kg of aerial part and 0.9 kg of rhizome) to a Clevenger-type apparatus for about 3 h [38-40]. The obtained essential oils were dried over anhydrous sodium sulphate before being filtered and stored in dark glass vials at 4 °C for further use.

GC-MS Analysis

The phytochemical composition of both essential oils was analyzed using gas chromatography-mass spectrometry (GC-MS); compounds were identified by matching their mass spectra against the instrument's spectral library (.LIB), as well as by comparing the spectra with literature data [15].
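The PASS screening logic discussed earlier (a compound is flagged as likely active when Pa > Pi) is simple to reproduce. Below is a small sketch on invented Pa/Pi values, since Table 16 is not reproduced here.

```python
import pandas as pd

# Invented Pa/Pi scores in the spirit of Table 16 (not the published values).
pass_scores = pd.DataFrame({
    "compound": ["davanone B", "alpha-farnesene", "germacrene D", "borneol"],
    "Pa": [0.45, 0.41, 0.38, 0.12],   # probable activity
    "Pi": [0.02, 0.03, 0.05, 0.10],   # probable inactivity
})

# PASS convention: a compound is considered likely active when Pa > Pi.
pass_scores["likely_active"] = pass_scores["Pa"] > pass_scores["Pi"]
pass_scores["Pa_minus_Pi"] = pass_scores["Pa"] - pass_scores["Pi"]

print(pass_scores.sort_values("Pa_minus_Pi", ascending=False).to_string(index=False))
```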
Nematode Population Collection

Meloidogyne incognita eggs were collected from nematode-infected tomato (Solanum lycopersicum) roots obtained from the Crop Research Centre, G.B.P.U.A.T., Pantnagar, in a glasshouse maintained at 25 ± 2 °C. The samples were collected on the basis of the visual symptoms of root knots or galls formed on the plant. Hand-picked mature egg masses from infected tomato roots were cultured in distilled water in a growth chamber at 25 °C. For future use, emerged juveniles were collected and preserved at 5 °C [41,42].

In Vitro Mortality Assay on Second-Stage Larvae of M. incognita

For the in vitro mortality assay, second-stage juveniles (100 in number) collected from eggs hatched within 48 h were placed on gridded Petri dishes with stock solution and 1.0 mL of distilled water. There were three different doses, i.e., 0.25, 0.5, and 1 µL/mL of essential oil in a 1.0% Tween-20 water solution. The treatments were performed in triplicate and arranged in randomized order. Juveniles immersed in a Tween-20 (1.0%) water solution were used as the control group. The number of dead juveniles was counted using a stereo-binocular microscope over time periods of 24, 48, 72, and 96 h. Totally motionless (dead) nematodes were picked out of the Petri dish and placed in distilled water. Percent mortality was calculated using Abbott's formula [43]:

Mortality (%) = ((Nt − Nc) / (100 − Nc)) × 100

where Nt = mortality in treatment and Nc = mortality in control.

Effect of Essential Oils on the Egg Hatchability of M. incognita

Two egg masses of M. incognita were suspended in 0.25, 0.5, and 1 µL/mL concentrations of HCCAO and HCCRO in gridded Petri dishes. Egg masses suspended in a Tween-20 (1.0%) water solution were used as the control. All of the treatments were set up in triplicate and in a completely random order in a BOD incubator at a constant temperature of 27 ± 1 °C. Observations on percent egg hatching were made at time intervals of 24, 48, 72, and 96 h. The counting of the number of eggs hatched was performed under a microscope at a magnification of 4×. Percent egg hatching was computed using Abbott's formula [44]:

Egg hatching (%) = ((Nt − Nc) / (100 − Nc)) × 100

where Nt = egg hatching in treatment and Nc = egg hatching in control.

Test Insect

The insecticidal activity of HCCAO and HCCRO was tested against the cotton cutworm (Spodoptera litura, family Noctuidae, order Lepidoptera), which is a serious polyphagous pest in Asia, Oceania, and the Indian subcontinent. Although it is a harmful pest of tobacco, it also attacks cole crops, castor, cotton, chilli peppers, tomato, etc.

Collection of Larvae and Maintenance

An initial culture of S. litura as egg masses was collected from a wild castor (Ricinus communis) plant at the CRC (Crop Research Centre), G.B.P.U.A.T., Pantnagar, Uttarakhand, India. The test insects were reared in a clean plastic container covered with muslin cloth under ideal laboratory conditions, with the temperature kept at 27 °C and the humidity at 75-80%. The test insects were given fresh castor leaf every day until they reached the fourth-instar larval stage. Finally, the fourth-instar larvae were starved for 12 to 24 h before being used in the insecticidal assay.

Bioassay of Insecticidal Activity

The leaf-dip method was used to assess the insecticidal activity of the rhizome and aerial part essential oils of H. coccineum [45]. For evaluating the insecticidal activity, different concentrations of essential oils (10, 25, 50 and 100 µL/mL) were prepared in a Tween-20 (1.0%) solution in distilled water. The castor leaves were cleaned and washed in distilled water before being air-dried for an hour. Each castor leaf was sliced into a 25 sq. cm section and immersed in the various concentrations of essential oils. The leaf discs were slanted on blotting paper for 2-3 min before being placed in a tray to drain excess solution for 2 h at room temperature. Five fourth-instar larvae, starved for 12-24 h, were released into individual Petri dishes. Blotting paper was placed at the bottom of each plate.
These Petri plates were monitored for insecticidal activity for 72 h, under ideal laboratory conditions with a temperature of 27 °C and a relative humidity of 75-80%. The mortality (%) was calculated after 24, 48, and 72 h of treatment using Abbott's formula [43], and LC50 values were analyzed using probit analysis [46]:

Mortality (%) = ((T − C) / (100 − C)) × 100

where T = mortality in treatment and C = mortality in control.

3.6. Herbicidal Activity

3.6.1. Evaluation of Herbicidal Activity

The herbicidal action of the essential oils was assessed based on various parameters, such as inhibition of seed germination, inhibition of shoot length, and inhibition of root length, against R. raphanistrum subsp. sativus (radish) seeds.

Herbicidal Bioassay

The herbicidal activity of the essential oils was evaluated using the method reported in [47-50]. Raphanus raphanistrum subsp. sativus (L.) (radish) seeds were obtained from the VRC (Vegetable Research Centre), G.B.P.U.A.T., Pantnagar. To evaluate seed germination inhibition, various concentrations of essential oils were prepared in a Tween-20 (1.0%) aqueous solution. Prior to use, R. raphanistrum subsp. sativus seeds were surface-sterilized for 15 min in a 5% sodium hypochlorite solution. Ten sterilized seeds of R. raphanistrum subsp. sativus were placed on Petri plates coated with regular filter papers. Then, 2 mL of the various concentrations of the tested sample was put onto the plates, which were left to germinate at 25 ± 1 °C for 12 h in an incubator. Pendimethalin was used as the standard herbicide. A Tween-20 (1.0%) solution in sterilized distilled water was taken as the control for the essential oils. Percent inhibition of seed germination and inhibition of root and shoot length were measured after 5 days of incubation. The formula used for determination of inhibition of root length (and, analogously, of seed germination and shoot length) is as follows:

Inhibition of root length (%) = ((Rc − Rt) / Rc) × 100

where Rt = root length in treatment and Rc = root length in control.

Antifungal Activity

Fusarium oxysporum and Curvularia lunata, two phytopathogenic fungi, were provided by the Department of Plant Pathology, College of Agriculture, G.B.P.U.A.T., Pantnagar, India. HCCRO and HCCAO were tested against the test fungi using the poisoned food technique [51]. The phytopathogenic fungi were revived and grown by placing the fungal colonies aseptically on Petri plates containing Potato Dextrose Agar (PDA) medium. The Petri plates were incubated for one week at 26 ± 2 °C. Assay discs (diameter = 5 mm) of a 7-day-old culture of the test fungus were inoculated aseptically onto the prepared plates containing varied concentrations of essential oils (50-750 µL/mL) prepared in a Tween-20 (1.0%) water solution. A control devoid of essential oils was prepared under the same conditions. The control plate was cultured for 7 days, until the growth reached the plate's edge. The percent inhibition of radial growth of each fungal strain was calculated in comparison with the control. Antifungal activity was detected by clear zones of mycelial growth inhibition on the Petri plate, which were measured in millimetres. Carbendazim (50% WP) was employed as the standard fungicide, and percent inhibition was calculated using McKinney's formula [46]:

Inhibition (%) = ((X − Y) / X) × 100

where X = radial growth in control and Y = radial growth in treatment.
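The correction formulas above are one-liners; a small sketch collecting them as reusable helpers follows (the example numbers are illustrative only):

```python
def abbott_corrected(treatment_pct: float, control_pct: float) -> float:
    """Abbott's correction: response adjusted for the control response."""
    return (treatment_pct - control_pct) / (100.0 - control_pct) * 100.0

def mckinney_inhibition(control_growth: float, treatment_growth: float) -> float:
    """McKinney's percent inhibition of radial mycelial growth."""
    return (control_growth - treatment_growth) / control_growth * 100.0

def length_inhibition(control_len: float, treatment_len: float) -> float:
    """Percent inhibition of root/shoot length relative to the control."""
    return (control_len - treatment_len) / control_len * 100.0

# Illustrative values only:
print(f"{abbott_corrected(65.0, 5.0):.1f}% corrected mortality")
print(f"{mckinney_inhibition(85.0, 10.0):.1f}% mycelial growth inhibition")
print(f"{length_inhibition(9.0, 4.5):.1f}% root length inhibition")
```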
Briefly, the overnight grown bacterial cultures (Staphylococcus aureus and Salmonella enterica serovar Typhi) were sub-cultured in Luria Bertani (LB) broth and grown until the OD600 reached 0.2. Next, 100 µL of each bacterial culture was spread-plated on an LB agar plate. Then, 10 µL of rhizome and aerial essential oils was spotted onto the LB agar plates separately and incubated at 37 °C for 24 h. After incubation, the diameter of the inhibition zone on the inoculated plate was measured.

Determination of Minimum Inhibitory Concentration

The susceptibility of both Gram-positive (Staphylococcus aureus) and Gram-negative (Salmonella enterica serovar Typhi) bacterial cells to the essential oils was estimated by the micro broth dilution method as per Clinical and Laboratory Standards Institute (CLSI) guidelines, in brain heart infusion (BHI) and MH broth, respectively [52-54]. Briefly, the overnight grown bacterial cells were sub-cultured in the respective broths and grown until the mid-log phase (OD reached 0.4). After that, each bacterial cell suspension was diluted 1000-fold to attain an inoculum of 10⁵ colony forming units (10⁵ CFU/100 µL) and mixed with an equal volume (100 µL:100 µL) of 2-fold-diluted essential oils. The growth of bacterial cells was assessed by enumerating CFU on the agar plate after incubating the bacterial cells for 12 h under static conditions in a humidity-controlled incubator at 37 °C. The MIC of a plant extract is the lowest concentration that inhibits observable microorganism growth. The experiments were repeated three times, with two replicates in each dish.

In Silico PASS Prediction of Biological Activities

The biological activities of 20 major compounds present in the HCCAO and HCCRO essential oils were predicted using PASS (prediction of activity spectra for substances) software [55,56]. PASS is a free online cheminformatic tool that assesses the biological activities of chemical compounds based on structural similarities to a large library of active compounds. Pa and Pi values were used to calculate the bioactivity score: if the Pa value (probability to be active) was greater than the Pi value (probability to be inactive), the compound was predicted to be active. HCCAO and HCCRO were predicted to exhibit diverse bioactivities (Pa > Pi).

Statistical Analysis

All experiments were carried out in three replicates, with the results represented as mean ± standard deviation (SD). A two-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test was performed to test the differences in treatment means using RStudio 2021.09.2. OriginPro 2021 version 9.8.0.200 was used to perform Principal Component Analysis (PCA) on the chemical composition of the essential oils under investigation to identify the most significant features in the dataset.
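The study performed the PCA in OriginPro; as an illustrative sketch of the same analysis in code, the snippet below standardizes a hypothetical composition matrix and projects it onto two principal components, using scikit-learn rather than the software actually employed:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical composition matrix: rows = oil samples, columns =
# relative % of major compounds; not data from the study.
X = np.array([
    [35.1, 12.4, 8.9, 5.2],
    [34.8, 12.9, 9.1, 5.0],
    [10.2, 28.7, 15.3, 9.8],
    [10.9, 28.1, 15.0, 10.1],
])

# Standardize so each compound contributes comparably, then project
# onto the first two principal components.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("PC scores:\n", scores)
```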
Conclusions

According to the present study, GC-MS analysis of the aerial part and rhizome essential oils (HCCAO and HCCRO) of H. coccineum showed the presence of 50 and 32 compounds, respectively. The tested essential oils possessed significant antibacterial (against S. aureus and S. Typhi) and antifungal (against F. oxysporum and C. lunata) activities, and moderate nematicidal (against M. incognita), insecticidal (against S. litura), and herbicidal (against R. raphanistrum subsp. sativus) activities at the tested concentrations, and could therefore be used to develop effective botanical pesticides. The antimicrobial action of H. coccineum essential oil on bacterial and fungal strains demonstrates the plant's potential as a source of natural antimicrobial agents. The nematicidal activity suggests that the essential oils might be a good source of more selective, biodegradable, and environmentally friendly natural nematicides, acting as a substitute for synthetic nematicides, as well as a good source of herbal nutraceuticals and phytochemicals. The herbicidal activity results were also supported by IC50 values: the higher the IC50 value, the lower the herbicidal activity. In terms of percent seed germination inhibition, the order of herbicidal potential was HCCRO (62.78 ± 5.86 µL/mL) > HCCAO (88.09 ± 3.42 µL/mL). In terms of root length inhibition, the order was HCCRO (94.68 ± 2.74 µL/mL) > HCCAO (96.85 ± 0.38 µL/mL), while in terms of shoot length inhibition it was HCCRO (87.44 ± 2.98 µL/mL) > HCCAO (133.06 ± 17.22 µL/mL).
Penicillin resistance in bovine Staphylococcus aureus: Genomic evaluation of the discrepancy between phenotypic and molecular test methods

Staphylococcus aureus is a major pathogen in humans and animals. In cattle, it is one of the most important agents of mastitis, causing serious costs in the dairy industry. Early diagnosis and adequate therapy are therefore 2 key factors to deal with the problems caused by this bacterium, and benzylpenicillin (penicillin) is usually the first choice to treat these infections. Unfortunately, penicillin resistance testing in bovine S. aureus strains shows discrepant results depending on the test used; consequently, the best method for assessing penicillin resistance is still unknown. The aim of this study was therefore to find a method that assesses penicillin resistance in S. aureus and to elucidate the mechanisms leading to the observed discrepancies. A total of 146 methicillin-sensitive S. aureus strains isolated from bovine mastitis were tested for penicillin resistance using a broth microdilution [minimum inhibitory concentration (MIC)] assay and 2 different disk diffusion protocols. Furthermore, the strains were analyzed for the presence of the bla operon genes (blaI, blaR1, blaZ) by PCR, and a subset of 45 strains was also subjected to whole genome sequencing (WGS). Discrepant results were obtained when penicillin resistance of bovine S. aureus was evaluated by disk diffusion, MIC, and PCR methods. The discrepancies, however, could be fully explained by WGS analysis. In fact, it turned out that penicillin resistance is highly dependent on the completeness of the bla operon promoter: when the bla operon was complete based on WGS analysis, all strains showed MIC ≥1 µg/mL, whereas when the bla operon was mutated (31-nucleotide deletion), the strains were penicillin sensitive except where an additional, bla operon-independent resistance mechanism was observed. Further, WGS analyses showed that penicillin resistance is truly assessed by the MIC assay. In contrast, caution is required when interpreting disk diffusion and PCR results.

INTRODUCTION

The problem of antibiotic resistance in bacteria has increased at an alarming rate over recent decades, causing difficult-to-treat or even untreatable infections associated with high mortality rates (World Health Organization, 2014; CDC, 2019). To prevent the development of new resistances, the inappropriate use of antibiotics (AB) should be avoided. To achieve this, accurate diagnostics are crucial. Antibiotics are used worldwide in both humans and animals for treating and preventing infectious diseases. Use and misuse of AB in either can cause the development of resistant bacterial strains. Resistance to AB can be transmitted between strains and even species of bacteria, which in turn can be transferred to humans, animals, and the environment and circulate between them (World Health Organization, 2014; EFSA, 2015). Infections with resistant bacteria are a serious problem in health care settings, causing life-threatening infections such as bacteremia, pneumonia, and wound infections. In the United States alone, antimicrobial resistance (AMR) causes more than 23,000 deaths every year (CDC, 2019). In veterinary medicine, mastitis is the leading cause of economic loss in dairy herds, due to reduction in yield and quality of milk, treatment costs, and culling of animals because of treatment failure (Halasa et al., 2009; Peton and Le, 2014; Ruegg, 2017).
In Switzerland, the disease results in total costs of about US$131 million per year (Heiniger et al., 2014). Staphylococcus aureus, together with Escherichia coli and Streptococcus uberis, is 1 of the 3 most important mastitis pathogens (Peton and Le, 2014). Staphylococcus aureus normally causes subclinical chronic mastitis in cows (Sears and McCarthy, 2003). In some cases, only a few cows are infected; in other cases, the majority of the herd is affected (Leuenberger et al., 2019). Genotyping of S. aureus by ribosomal spacer PCR (RS-PCR), as developed by Fournier et al. (2008), demonstrated that the rate of infected cows in a herd is highly dependent on the genotype. When S. aureus genotype B was isolated, up to 87% of cows in a herd were infected (Fournier et al., 2008; Graber et al., 2009; Cremonesi et al., 2015; van den Borne et al., 2017). In contrast, infections by genotype C, genotype S, or other genotypes were restricted to at most a few cows in a herd (Fournier et al., 2008; Graber et al., 2009; Cremonesi et al., 2015). The S. aureus genotype B was almost exclusively associated with clonal complex (CC) 8 when the strains were subtyped by multilocus sequence typing (MLST), whereas spa typing typically revealed t2953. Staphylococcus aureus genotype C, however, was always t529, and in most cases it was CC705. For all the other genotypes, the link between RS-PCR, MLST, and spa typing was less obvious, largely because the typing methods rely on different genetic information. Ribosomal spacer PCR is particularly suited for clinical application, as it is a cheap, high-throughput method with an analytical resolution for bovine strains at least as good as that of spa typing (Boss et al., 2016). For subtyping at the biological level, however, MLST is more appropriate because it represents an S. aureus clone (Feil et al., 2003) and, as a consequence, its evolutionary identity (e.g., Kläui et al., 2019). In Switzerland (Swiss Administration, 2018), Finland (Pyorala, 2009), and many other countries (EFSA, 2019), penicillin G (penicillin), also known as benzylpenicillin, is the most commonly used AB for treating IMI of cows caused by S. aureus and other gram-positive mastitis pathogens. In S. aureus, AMR against penicillin and all other β-lactamase-sensitive penicillins is encoded by the bla operon, which can be located as a transposon on plasmids or on the chromosome (Lowy, 2003; Llarrull et al., 2011). The bla operon contains 3 genes (Clarke and Dyke, 2001; Llarrull et al., 2011): blaI encodes the repressor of the bla promoter, inhibiting transcription of blaR1 and blaZ in the absence of β-lactam AB; blaR1 encodes the sensor for penicillin and other β-lactam AB and inactivates BlaI after AB binding; and blaZ encodes the β-lactamase that cleaves β-lactam AB by breaking down the β-lactam ring. Previous studies showed marked discrepancies between the results of phenotypic resistance testing for penicillin and PCR results for the presence of the blaZ gene in S. aureus isolates obtained from bovine mastitis: indeed, 40% of isolates carrying the blaZ gene were phenotypically susceptible to penicillin using the nitrocefin test (Haveri et al., 2005); furthermore, in the study by Russi et al. (2015), concordance between blaZ PCR and phenotypic tests ranged between 87.5% (acidimetric test) and 93.0% [disk diffusion (DD)]. Although the concordance rates in the second study are better than in the first one, the discrepancy is still considerable.
Even worse, the outcome is highly dependent on the method used. Taken together, penicillin resistance testing in bovine S. aureus strains appears to be highly unreliable, and the best method for assessing penicillin resistance is still unknown. As a consequence, this may lead to ineffective AB therapies and an increased risk of AMR. The aim of this study was, therefore, to find the method that best assesses penicillin resistance in S. aureus and to elucidate the mechanisms that lead to the observed discrepancies. To this end, phenotypic and molecular methods, including whole genome sequencing (WGS), were applied to a large number of S. aureus strains isolated from dairy cattle.

MATERIALS AND METHODS

Strains

Initially, 108 bovine strains of S. aureus were selected from our European strain collection, which includes 456 strains. The selection was made in such a way that the distribution of CC types among strains was the same as observed for IMI in our European survey study. Accordingly, 37 strains of CC8, 33 of CC705, 18 of CC97, and 20 strains of other CC were selected. Within each CC, they were randomly chosen. Subsequently, an additional 41 CC8 strains were randomly selected for more detailed investigation, as the preliminary results frequently showed discrepancies among the different methods used to assess penicillin sensitivity. These 149 strains all originated from different herds located throughout Europe: Austria, Belgium, France, Germany, Ireland, Italy, Norway, Sweden, and Switzerland. They had all been isolated from aseptically collected milk samples of cows with IMI that had been sent in for diagnostic purposes. The strains were stored in skim milk at −20°C. Previous characterization included PCR for the nuc gene (highly specific for S. aureus; Brakstad et al., 1992; Graber et al., 2007), RS-PCR for genotype and genotypic cluster (CL) attribution (Fournier et al., 2008; Cosandey et al., 2016), spa type, and MLST (CC attribution), as described previously. Genotypes of S. aureus and their variants were combined into genotypic CL. For genotype B and its variants, the resulting cluster was named CLB. Accordingly, other clusters such as CLC or CLR were obtained.

Bacterial Lysate Preparation

Strains were spread on sheep blood agar (Biomérieux Suisse SA) and aerobically incubated at 37°C for 18 h. DNA was obtained by the boiling method. In brief, a single colony was resuspended in 100 µL of 10 mM Tris-HCl (Merck) and 10 mM EDTA (pH = 8.5; Merck), incubated at 95°C for 10 min, and immediately placed on ice. Lysates were stored at −20°C. To be used as a PCR template, lysates were thawed and diluted 1:100 in H2O.

Testing of Methicillin Susceptibility by PCR

All strains were tested for methicillin sensitivity by PCR using the commercial SureFast MRSA 4plex kit (Congen Biotechnologie GmbH). The assay is a real-time PCR for the direct, qualitative detection of S. aureus, of the mecA and mecC genes, and of the junction between the orfX gene and the staphylococcal cassette chromosome (SCCmec) carrying mecA/mecC. These genes both code for the penicillin binding protein (PBP) 2a, providing phenotypic resistance to β-lactam AB (Fergestad et al., 2020). The assay was performed according to the instructions of the manufacturer, using for each strain 5 µL of diluted lysate plus 20 µL of master mix (Congen).
Real-time PCR was performed in a Mic qPCR Cycler (Bio Molecular Systems) using an initial denaturation at 95°C for 60 s, followed by 45 cycles of 95°C for 15 s and 60°C for 30 s. A strain giving positive quantitative PCR (qPCR) signals for S. aureus, the orfX/SCCmec junction, and the mecA/mecC gene was considered a methicillin-resistant S. aureus.

PCR Analysis of the bla Operon Genes

All strains were analyzed for the presence of blaI, blaR1, and blaZ of the bla operon using singleplex melting curve PCR (mPCR). Primers (Table 1) were designed based on the target sequence of the S. aureus pLUH02 plasmid (FR714929) using the OLIGO 6.53 software (Molecular Biology Insights Inc.). Primer synthesis was performed by Microsynth (Microsynth AG). The reactions were carried out in a total volume of 20 µL, containing KAPA SYBR Fast 2x (Merck), 300 nM of both primers, and 2.5 µL of diluted lysate as a template. The PCR amplifications were performed in a Rotor-Gene 6000 real-time thermal cycler (Corbett Life Science). For blaI as well as for blaR1, the following cycling protocol was used: an initial step of 95°C for 3 min; 35 cycles of 95°C for 3 s, 55°C for 30 s, and 72°C for 2 s; and a final elongation at 55°C for 5 min. Afterward, melting of the amplicons was performed from 55°C to 94°C in rising steps of 1°C with a 5-s waiting time at each step. For blaZ, amplification included an initial step of 95°C for 3 min; 35 cycles of 95°C for 3 s and 60°C for 30 s; and a final elongation at 60°C for 5 min. Melting of the amplicons was performed from 60°C to 94°C in rising steps of 1°C with a 5-s waiting time at each step. Amplicons with a single melting peak identical to that of the positive control were considered specific amplifications: under our conditions, the peaks were at 76.2°C for blaI, 76.9°C for blaR1, and 77.4°C for blaZ. The DNA of S. aureus strains known to be positive or negative for all 3 bla genes was used as positive and negative controls, respectively.

MIC Assay

For the MIC assay, the broth microdilution method was used according to the protocols of the Clinical and Laboratory Standards Institute (CLSI; CLSI, 2018b, 2020). Specifically, in 96-well microtiter plates, crystalline penicillin (Merck) was diluted in a geometric 1:2 dilution series (from 8 µg/mL penicillin down to 0.063 µg/mL) using freshly prepared, commercially available Ca++- and Mg++-adjusted Mueller-Hinton broth (Thermo Fisher Diagnostics AG). Preparation of the inoculum, inoculation, incubation (35°C, 18 ± 2 h), and visual determination of the MIC were performed according to CLSI protocols (CLSI, 2018b, 2020). As a reference, the strain S. aureus ATCC 29213 was used. The MIC method was chosen a priori as the reference for all phenotypic and mPCR methods because it is the standard method used in most reference laboratories in many countries (Tenover, 2019). A MIC value greater than or equal to 0.25 µg/mL is considered resistant to penicillin, whereas strains with values less than or equal to 0.125 µg/mL (CLSI, 2020) require a negative induced β-lactamase test (see below) to confirm penicillin susceptibility (CLSI, 2020).
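As an illustration of the dilution series and the breakpoint logic described above, the following sketch encodes the 1:2 series and the CLSI interpretation; the growth pattern is hypothetical, not a measured plate from the study:

```python
# Two-fold penicillin dilution series used in the broth microdilution
# assay (8 down to ~0.063 ug/mL). The growth pattern below is
# hypothetical, not a measured plate from the study.
concentrations = [8 / 2**i for i in range(8)]  # 8.0, 4.0, ..., 0.0625

# growth[c] = True if visible growth was observed at concentration c.
growth = {c: (c < 0.5) for c in concentrations}  # hypothetical readout

# MIC = lowest concentration showing no visible growth.
mic = min(c for c, grew in growth.items() if not grew)

# CLSI interpretation as described above: MIC >= 0.25 ug/mL is
# resistant; MIC <= 0.125 ug/mL requires a negative induced
# beta-lactamase (nitrocefin) test to confirm susceptibility.
if mic >= 0.25:
    call = "resistant"
else:
    call = "susceptible, pending a negative induced beta-lactamase test"
print(f"MIC = {mic} ug/mL -> {call}")
```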
DD Assay

The DD assay was performed according to CLSI (DDC; CLSI, 2018a, 2020) and the European Committee on Antimicrobial Susceptibility Testing (DDE; EUCAST, 2022a,b), applying commercially available Mueller-Hinton agar plates (Thermo Fisher). The conditions for inoculum preparation and incubation (35 ± 1°C, 18 ± 2 h, aerobic) were the same for both methods. However, there was a difference in the penicillin content of the disks: the DDC protocol requires disks containing 10 IU of penicillin (Thermo Fisher), whereas 1 IU of penicillin is required for the DDE protocol. As reference strains, S. aureus ATCC 25923 was used for the DDC protocol and S. aureus ATCC 29213 for the DDE protocol. For both protocols, the penicillin evaluations can be transferred to all β-lactamase-labile penicillins. According to the EUCAST guidelines (EUCAST, 2022b), plate reading includes evaluation of the zone diameter and of the zone edge: a fuzzy zone edge and a zone diameter greater than or equal to 26 mm is considered susceptible, whereas a sharp zone edge with a zone diameter greater than or equal to 26 mm, or a diameter less than 26 mm, is considered resistant. For the DDC, a zone diameter less than or equal to 28 mm is considered resistant to penicillin; for diameters greater than or equal to 29 mm, a negative induced β-lactamase test (see below) is required to confirm penicillin susceptibility (CLSI, 2018a, 2020).

Nitrocefin Test (Induced β-Lactamase)

The CLSI protocols (CLSI, 2018a, 2020) require a negative induced β-lactamase result to confirm sensitive MIC and DDC results. Therefore, a nitrocefin test was performed. Nitrocefin is a β-lactam molecule that changes color when hydrolyzed by a β-lactamase. Although only sensitive MIC and DDC results need to be confirmed, all strains were subjected to this test. For this purpose, the inoculum was spread on Mueller-Hinton agar, a cefoxitin disk (30 µg; Thermo Fisher) was added, and the plate was incubated aerobically for 17 ± 1 h at 35 ± 1°C. Using colonies from the zone margin surrounding the disk, a commercial nitrocefin test (Beta-Lactamase sticks, Thermo Fisher) was performed following the manufacturer's protocols. The strain S. aureus ATCC 29213 served as a positive control, and strain ATCC 25923 served as a negative control.

Whole Genome Sequencing

Strain Selection. Forty-five strains were selected for WGS. Twenty-nine of these strains were CC8 and showed a positive result using blaZ mPCR. To rule out regional effects, they were selected from different herds spread all over Europe. The high number of CC8 strains was selected to better understand the considerable discrepancy between the molecular and phenotypic results (see below). In addition, all non-CC8 strains showing penicillin resistance by the MIC assay were included (n = 10). The CC of these strains were CC9 and CC29 (each n = 1), CC97 (n = 6), and CC133 (n = 2). Additionally, 6 strains (1 CC20, 1 CC133, and 4 CC705) were selected as negative controls; these were all penicillin sensitive and negative for blaZ by mPCR.

DNA Extraction. Three to 4 colonies picked from blood agar were resuspended in 4.5 mL of TSB (Tryptic Soy Broth; Thermo Fisher) and incubated aerobically at 37 ± 1°C for 18 ± 2 h at 140 rpm. One mL of this culture was subsequently added to 500 mL of TSB in a centrifuge bottle and incubated under the same conditions. Then, the culture was centrifuged (4,600 × g, 4°C, 25 min), the supernatant discarded, and the pellet resuspended in 15 mL of 10 mM Tris/HCl (pH 7.8) buffer and transferred to a 50-mL tube. After recentrifugation (18,000 × g, 4°C, 5 min), the supernatant was discarded and the pellet resuspended in 2 mL of RES buffer from the NucleoBond Xtra Maxi Kit (Macherey-Nagel AG).
The suspension (1.5 mL) was transferred to a 2-mL Eppendorf tube containing 350 mg of glass beads (bead size 212 to 300 µm; Merck). The cells were subsequently lysed using a Bead Ruptor Elite (Omni International) at intensity level 6 for 45 s. After centrifugation (13,500 × g, 4°C, 5 min), the supernatant was transferred to a 100-mL glass bottle containing 22 mL of RES and 24 mL of LYS buffer (Macherey-Nagel). Then, DNA extraction was performed using the NucleoBond Xtra Maxi Kit (Macherey-Nagel) according to the manufacturer's protocol. The resulting pellet was dissolved in 200 µL of H2O, and the DNA was repurified for maximum purity using the High Pure PCR Template Preparation Kit (Roche). Elution was performed using 200 µL of elution buffer. The quality and total amount of extracted DNA were evaluated by spectroscopy using the optical density (OD) ratio OD260/OD280 (QuickDrop; Molecular Devices) and by Qubit assay (Thermo Fisher), respectively. This extraction procedure resulted in enrichment of plasmid DNA whereby, as desired, substantial amounts of chromosomal DNA were copurified. The molecular ratio between plasmid DNA and chromosomal DNA was approximately 1.5 for S. aureus genotype B when chromosomal DNA was quantified by lukEB qPCR and plasmid DNA by qPCR for sed (Boss et al., 2011).

WGS and Assembly of the Sequence Reads. The DNA samples were sent to Eurofins Genomics GmbH for WGS using the HiSeq sequencing platform (Illumina), guaranteeing at least 1.5 gigabytes of reads for each sequenced strain. A high number of reads was wanted to verify the nucleotide at each position of the genome as consistently as possible (high site coverage) and to avoid assembly interruptions because of missing reads. To enrich bla operon-specific reads of each strain in silico, the total reads were first assembled against the chromosome of S. aureus NCTC 8325 (NC_007795; devoid of bla operon-containing transposons) using the SeqMan NGen 16 software (default settings) included in the DNASTAR Lasergene 16 software package (DNASTAR Inc.). The remaining unassembled reads of the query strain (containing plasmids and transposons) were then de novo assembled with the SeqMan NGen 16 software (DNASTAR), deactivating the "repeat handling" option in the software settings, selecting a minimum match of 93% for overlapping read segments, and selecting contigs with lengths greater than 1,000 nucleotides. The contigs were then screened for the presence of the bla operon using a blast-like algorithm of the Clone Manager 9.51 software (CM9; Sci Ed Software) and the bla operon on plasmid SAP047A as a reference. In most cases, the bla operon was found on only 1 contig. In 5 CC8 strains, however, hits for the bla operon were found on 2 different contigs. For 1 of these contigs, alignment against the reference bla operon (SAP047A) always resulted in a similarity greater than 98%, whereas the similarity for the other contig was always less than or equal to 93%. Blasting one of these new, lower-matching contigs against the NCBI nr database (https://blast.ncbi.nlm.nih.gov; Altschul et al., 1990) resulted in a 95% match with a transposon (Tn13616) and its bla operon present on the chromosome of the S. aureus strain NCTC 13616 (NZ_LR134193). The Tn13616 transposon was subsequently used as an assembly reference to obtain the complete transposon for these 5 CC8 strains, using the SeqMan NGen 16 software (DNASTAR) with the minimum match for overlapping read segments set to 99%.
Using this software setting, only those reads that were highly specific for the new transposon were selected.

In Silico Analysis of Plasmids

All contigs were tested for representing a plasmid using PlasmidFinder 2.1 (https://cge.cbs.dtu.dk/services/PlasmidFinder; Carattoli et al., 2014). For naming, the new plasmids were blasted (https://blast.ncbi.nlm.nih.gov) against the NCBI nr database (https://www.ncbi.nlm.nih.gov) to find the bla operon that best fits the unknown operon in terms of highest query coverage and similarity. If there were several plasmids with identical coverage and similarity values, the best-annotated one was selected (the reference plasmid), and its name was kept for all new plasmids whose coverage and similarity values were closest to those of the reference plasmid. The newly detected plasmids were further annotated using the Rapid Annotation using Subsystem Technology server (https://rast.nmpdr.org; Overbeek et al., 2014). Previous studies had shown that the plasmid-based enterotoxin genes sed, sej, and ser (Benkerroum, 2018) can be present in bovine S. aureus strains (Fournier et al., 2008; Graber et al., 2009; Hummerjohann et al., 2014; Cosandey et al., 2016). As Rapid Annotation using Subsystem Technology is inconclusive for enterotoxin genes, the plasmids were further compared with our reference library for the staphylococcal enterotoxin genes sea-sex and tst using the Needleman-Wunsch algorithm (CM9, Sci Ed). The library was created based on data published by Merda et al. (2020).

In Silico Analysis of the bla Operon

The bla operons of plasmids and transposons were subsequently precisely located by nucleotide sequence alignment using the Needleman-Wunsch algorithm (CM9, Sci Ed) and the bla operon of the SAP047A plasmid as a reference. Afterward, within each operon, the 3 genes blaI, blaR1, and blaZ were translated in silico into their corresponding proteins using the standard genetic code and the CM9 software (Sci Ed). The translated AA sequences were checked for full length by aligning them to the Uniprot reference sequences for BlaI (P0A042), BlaR1 (P18357), and BlaZ (P00807), respectively. The BlaI protein was then checked for the presence of the AA Asn at position 101 (Asn101) and Phe102, which form the proteolytic cleavage site for BlaR1 (Zhang et al., 2001). BlaR1 was tested for the presence of Ser389 and Lys392, which are critical for sensing β-lactam AB (Zhang et al., 2001), as well as for the presence of His201 and Glu202, which are key in activating the proteolytic domain of the protein (Zhang et al., 2001). BlaZ was checked for Ser70, which forms the active site for cleaving β-lactam AB (Chen et al., 1996). Furthermore, based on preceding analyses using the Uniprot reference sequences (P0A042, P18357, and P00807), a newly detected BlaI had to match best with the functional family (FunFam) 266241 of the CATH/Gene3D v4.3 database (http://www.cathdb.info). In the case of BlaR1 and BlaZ, the fit had to be best with FunFam 21251 and 12260, respectively (http://www.cathdb.info). Only if a protein fulfilled all the mentioned criteria (full length, presence of all key AA, FunFam) was it considered to be functional. According to Dawson et al. (2017), proteins within the same FunFam, that is, with the same 3-dimensional protein domain structure together with the specific AA sequence forming this domain, share the same biochemical function with high probability.
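The study performed these translation and key-residue checks with the CM9 software; purely as an illustration of the logic, a minimal sketch with Biopython (the input sequence is a hypothetical placeholder) could look as follows:

```python
from Bio.Seq import Seq

# Key residues checked in the study (1-based positions in the Uniprot
# reference sequences): BlaI Asn101/Phe102, BlaR1 His201/Glu202 and
# Ser389/Lys392, BlaZ Ser70.
KEY_RESIDUES = {
    "blaI":  [(101, "N"), (102, "F")],
    "blaR1": [(201, "H"), (202, "E"), (389, "S"), (392, "K")],
    "blaZ":  [(70, "S")],
}

def key_residues_present(gene: str, coding_dna: str) -> bool:
    """Translate a coding sequence (standard genetic code) and verify
    the key residues; a premature stop shortens the protein and makes
    the check fail, mirroring the 'nonfunctional' calls in the text."""
    protein = str(Seq(coding_dna).translate(to_stop=True))
    return all(
        len(protein) >= pos and protein[pos - 1] == expected
        for pos, expected in KEY_RESIDUES[gene]
    )

# Usage sketch (blaz_cds would be the CDS string from an assembled contig):
# print(key_residues_present("blaZ", blaz_cds))
```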
All in silico analyses were performed a priori, that is, without knowledge of the phenotypic and mPCR test results.

Statistics

Statistical analyses were performed using the Systat 13.1 software (Systat Software). Data were expressed as frequencies, percentages, or median, minimum, and maximum. To investigate the agreement between MIC (reference) and DDE, DDC, and mPCR, respectively, the following parameters were calculated: sensitivity (the fraction of penicillin-resistant or blaZ-positive strains), specificity (the fraction of penicillin-susceptible or blaZ-negative strains), and Cohen's kappa. The 95% confidence interval (CI95) for proportions was estimated by the Wilson score interval (Wilson, 1927) using R 4.0.3 (R Core Team, 2020) together with the "binom" library. Differences in penicillin resistance among CC or CL were calculated using Fisher's exact test for 2 × 2 contingency tables or a generalized version of the test for k × m tables, as implemented in Systat 13.1 (Systat). For comparison of 2 methods (paired samples), McNemar's test for symmetry was applied (Systat).

RESULTS

Analysis of Methicillin-Sensitive S. aureus Strains

Out of the initial 149 selected S. aureus strains, 146 turned out to be methicillin-sensitive S. aureus (MSSA); 1 CC97 and 2 CC398 strains were methicillin resistant (methicillin-resistant S. aureus) and were excluded from the present study.

Disk Diffusion Assays and PCR Analysis

Disk diffusion analysis using the DDC protocol revealed 40 out of 146 (27%) strains that were penicillin resistant (Table 2 and Supplemental Table S1). For the DDE protocol, 82 strains (56%) were penicillin resistant, whereas by PCR, 87 (60%) S. aureus strains were positive for blaZ (Table 2 and Supplemental Table S1). Importantly, all strains were either positive for all 3 bla operon genes (blaI, blaR1, and blaZ) or negative for all of them (Table 2 and Supplemental Table S1). The DDC results clearly differed from those obtained by DDE or PCR (each P < 0.001). In addition, a minor difference was observed between the DDE and the PCR assays (P = 0.025). For all 3 methods (DDC, DDE, PCR for blaZ), the results were highly CC and CL dependent (for each method, P < 0.001). In fact, the CC705/CLC strains were always penicillin susceptible and always negative by PCR for blaZ. For the other CC and CL, the results were variable (Table 2 and Supplemental Table S1). Considering all strains, negative mPCR results (n = 59) for all bla operon genes were linked to a MIC value less than or equal to 0.125 µg/mL, indicating penicillin susceptibility, with 53 isolates having a MIC less than 0.063 µg/mL, 1 isolate 0.063 µg/mL, and 5 isolates 0.125 µg/mL. If the CC8 strains were excluded, all of the remaining 19 PCR-positive strains except 1 showed MIC values from 0.5 to 8 µg/mL, indicating penicillin resistance (Table 2 and Supplemental Table S1). Using WGS, a total of 28 plasmids were detected. Of these, 27 originated from S. aureus CC8 and 1 from CC97 (Table 5). All plasmids isolated from CC8 and CC97 matched best with the pSK67 plasmid (NC_019010), with a query coverage ranging between 99% and 100% and similarities between 99.85% and 100%. The plasmid length ranged between 27,266 bp and 28,827 bp, with a median of 27,735 bp, close to the length of pSK67 (27,439 bp). All the pSK67 plasmids carried the repA gene and 5 genes coding for plasmid replication proteins, together with the acuI, cadC, and cadD genes, as well as the enterotoxin genes sed, sej, and ser.
Rapid Annotation using Subsystem Technology analysis of the plasmids further revealed 3 genes with assigned FIG numbers (FIG01109056, FIG01109057, FIG01109060) and 19 genes coding for hypothetical proteins.

Analysis of the bla Operon

A total of 45 strains were studied by WGS for bla operon analysis. In 23 strains, the operon was located on a plasmid; in 11 strains, on the chromosome; and in 5 strains, there was an operon on both the chromosome and a plasmid. For the 6 randomly selected penicillin-sensitive strains, all negative for blaI, blaR1, and blaZ by mPCR, no bla operon could be detected. The operon structure was always the same (5′-3′): blaI (antisense) - blaR1 (antisense) - promoter - blaZ (sense); see Figure 1. All 45 BlaI proteins appeared complete and functional (Table 5). The same was true for 42 BlaR1 and 43 BlaZ proteins. Putative nonfunctionality was caused by frame shifts due to a deleted adenine nucleotide (A) within a poly(A) repeat, leading to a premature translational stop (Table 5). The overall AA similarity of BlaI compared with the corresponding reference protein ranged between 99.2% and 100% (Table 5). For functional BlaR1 (n = 42), the similarity was between 98.1% and 100%; for functional BlaZ (n = 43), it was between 98.9% and 100%. For all analyzed proteins with a similarity less than 100%, the observed mutations never affected the key AA required for correct protein functioning. The plasmid-located operons always belonged to 1 of 2 forms: the wild-type form, as present on the original pSK67 plasmid, with a length of 3,080 bp, and a shorter version with a length of 3,049 bp (Table 5). All short operons lacked a 31-nucleotide fragment located entirely in the promoter region (Figure 1). The deletion, which started at bp 2,175 and ended at bp 2,205 of the pSK67 bla operon wild-type sequence, resulted in a mutated bla promoter. Indeed, it lacked almost the complete blaR1 dyad, the complete Pribnow box for blaZ, the transcription starting point for the blaZ mRNA, and the beginning of the blaZ dyad (Figure 1). The mutated form of the bla promoter was exclusively observed in S. aureus CC8 strains. In all other CC, the operon, if present, showed the wild-type form (Table 5). The chromosomal bla operons were all located on a transposon identical to that of S. aureus O217 (CP038461; TnO217) (Table 5). Comparing pSK67 and TnO217, the nucleotide similarity for the bla operon was 93.3%; the AA similarities for BlaI, BlaR1, and BlaZ were 96.1%, 88.1%, and 91.1%, respectively. If at least 1 operon of a strain was complete, the corresponding MIC values for penicillin were always greater than or equal to 1.0 µg/mL, independent of whether the operon was located on the pSK67 plasmid or on the chromosomal TnO217 transposon (Table 5). If only a mutated bla promoter was present, the MIC values were always less than or equal to 0.25 µg/mL, independent of the presence of functional Bla proteins or the genomic location of the bla operon (Table 5).

Nitrocefin Test

To confirm the DDC results, strains were subjected to a nitrocefin test (Table 2 and Supplemental Table S1). All 106 strains that tested sensitive for penicillin by DDC showed a negative nitrocefin test, indicating that no β-lactamase was induced. All except 1 of the resistant strains showed a positive nitrocefin test.
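Given the two operon forms just described (wild type, 3,080 bp, vs. the short form lacking bp 2,175-2,205), a minimal sketch of how such a classification could be automated is shown below; the function and variable names are hypothetical, and the sequences would come from the assembled contigs:

```python
# pSK67 wild-type bla operon: 3,080 bp; short form: 3,049 bp, lacking
# the 31-nt promoter fragment at bp 2,175-2,205 (1-based, inclusive).
def classify_bla_operon(operon_seq: str, wildtype_seq: str) -> str:
    fragment = wildtype_seq[2174:2205]  # the 31-nt promoter fragment
    if len(operon_seq) == 3080 and fragment in operon_seq:
        return "wild-type promoter (expected MIC >= 1 ug/mL)"
    if len(operon_seq) == 3049 and fragment not in operon_seq:
        return "mutated promoter (expected MIC <= 0.25 ug/mL)"
    return "unrecognized operon form"

# Usage sketch:
# print(classify_bla_operon(query_operon, psk67_wildtype_operon))
```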
DISCUSSION

The present study shows that discrepancies among different phenotypic and PCR methods for testing penicillin resistance in bovine MSSA can be fully explained by the combination of phenotypic, PCR, and WGS analyses. In particular, these examinations demonstrated that the genetic structure of the bla operon was highly associated with the level of the MIC values and with penicillin resistance. Compared with the MIC assay, however, DDC, DDE, and mPCR for the 3 bla genes revealed either inappropriate sensitivities or specificities.

Technical Aspects

The success of phenotypic penicillin resistance prediction by genomic methods was based on the following steps: (1) proven in vitro knowledge about the genes and their regulation in penicillin AMR; (2) reliable detection of the plasmid carrying penicillin resistance; (3) a high number of WGS reads; and (4) bioinformatic protein and promoter analysis. As for (1), this knowledge is given by previous studies (Rowland and Dyke, 1989; Lewis et al., 1999; Clarke and Dyke, 2001; Lowy, 2003) and is key to linking in silico data to a phenotypic outcome. As for (2), reliable detection of plasmids by WGS is essential to rule out false-negative results when compared with phenotypic methods. To do so, we used a kit dedicated to plasmid DNA extraction (Macherey-Nagel). As chromosomal DNA is only partially removed by these methods, it is copurified with the plasmid DNA, resulting in a molecular chromosome-to-plasmid ratio of approximately 1:2. As for (3), the present WGS approach using 1 flow cell per strain (Illumina) was selected because it generates a high number of reads, so that reads of low abundance are also sufficiently represented. This procedure therefore avoids assembly interruptions in target regions that would be covered by few or no reads if reads were not evenly distributed among the targets. A further advantage is that a high site coverage was obtained, enabling unambiguous statements about mutations and the genetic structure of the bla operon. And as for (4), phenotypic penicillin resistance in an S. aureus strain is only present if, in addition to the presence of all the bla operon genes, the proteins BlaI, BlaR1, and BlaZ are expressed and functional. To address these questions, the 3-dimensional structures of these proteins after in silico translation were analyzed using CATH/Gene3D v4.3 (http://www.cathdb.info), and their functionally relevant sites were checked manually. Furthermore, the bla promoter was investigated by bioinformatic methods.

Importance of the bla Operon and bla Promoter on Phenotypic Penicillin Resistance

Bioinformatic promoter analysis showed a mutated promoter to be present in most CC8 strains (Table 5): it lacked almost the entire blaR1 dyad, the entire Pribnow box for blaZ, the transcription starting point for the blaZ mRNA, and the beginning of the blaZ dyad (Figure 1). The mutated operon was always located on the pSK67 plasmid and never on the TnO217 transposon. The present study demonstrates that the deletion within the bla promoter plays a crucial role in phenotypic penicillin resistance in S. aureus, as expected from current knowledge (Clarke and Dyke, 2001; Lowy, 2003; Llarrull et al., 2011): if the promoter was present and complete, the MIC values were always high (≥1.0 µg/mL; Table 5), whereas if the mutated promoter alone was present, the values were always low (≤0.25 µg/mL; Table 5).
High MIC values were independent of the promoter's localization (chromosome or plasmid) and were not affected by the minor operon changes observed between complete pSK67 and TnO217.

Figure 1. The sequence of the bla operon between the start sites for translation of BlaZ and BlaR1 is shown for a wild-type (WT) and a short, mutated bla promoter. The annotation of the WT form was performed according to Clarke and Dyke (2001). The transcription starting sites for blaR1 and blaZ begin at position 1 and are labeled "mRNA." The start of each coding sequence is presented as an angled arrow. The Pribnow boxes at positions −10 and the ribosomal binding sites (RBS) for both genes are indicated. The R1 dyad and Z dyad with their inverted repeats are marked by 2 opposed arrows. The dyads serve as the binding sites for BlaI, which blocks the transcription of blaR1 and blaZ (Clarke and Dyke, 2001).

That the bla operon with its promoter is key to phenotypic penicillin resistance in S. aureus is also shown by the fact that, if the strains were negative for the bla genes, the MIC values were always less than or equal to 0.125 µg/mL. The present study does not make it possible to locate the precise elements within the promoter deletion that account for the low penicillin MIC values. Perhaps they are associated with the missing Pribnow box for blaZ, which is known to be vital for efficient transcription (Pribnow, 1975). To answer this question, a separate promoter study is needed that constructs transcriptional fusions between various blaZ and blaR1 promoter regions and a reporter gene (Sadykov et al., 2019). Such analyses would then allow a dynamic inspection of the fusion activities and discovery of the relevant promoter elements.

Bla Operon Diversity and Its Consequences

According to the present study, only 2 different genetic elements carrying the bla operon were observed in bovine S. aureus strains: the pSK67 plasmid and the transposon TnO217. Furthermore, with an overall similarity of 93.3% at the nucleotide level, the diversity of the bla operons was small. These results are in clear contrast to those of Olsen et al. (2006), which suggested a much higher diversity. The reason for this discrepancy may be that in the previous study, the Sanger method and overlapping amplicons were used to sequence part of blaZ and blaR1, whereas in the present study, the complete bla operon was sequenced by WGS at high site coverage. This allowed erroneous sequencing results to be reduced to a minimum. Other reasons, such as restriction to only a few dairy herds or countries, can be ruled out, as all our isolates analyzed by WGS were obtained from different herds located in various European countries. The small number of bla-carrying genetic elements is unexpected. Apparently, bovine strains of S. aureus are very selective in their acquisition of these mobile elements, although a less restricted handling might have been beneficial for their survival. Even more striking is the fact that in most strains harboring the pSK67 plasmid, the bla promoter was mutated, resulting in penicillin susceptibility or resistance at a low penicillin concentration (0.25 µg/mL). These findings indicate that mechanisms more important than penicillin resistance are involved in enabling the survival of S. aureus in the udder. One mechanism might be the internalization of S. aureus into bovine mammary epithelial cells (Caldeira et al., 2019; Frutis-Murillo et al., 2019; Geng et al., 2020), which not only facilitates their replication (Wang et al., 2019) but may also protect them from attack by penicillin.
Intracellular survival is enabled by the staphylococcal ability to block the fusion of the phagosome with the lysosomes (Neumann et al., 2016; Geng et al., 2020), creating a safe environment for the pathogen.

Diagnostic Importance

For a patient infected by S. aureus or other bacteria, it is vital that an effective AB is used to curb the infection. It is, therefore, essential that the isolate is tested for AMR to select an appropriate drug. To do so, various methods are currently used, including the MIC assay, DD, PCR, and, in rare cases, WGS. The present study shows, however, that for S. aureus and penicillin, not all of these tests produce equally reliable results. The study also shows that these discrepancies can be fully explained by the combination of phenotypic, PCR, and genomic methods. As WGS demonstrated (Table 5), all strains with at least 1 functional bla operon were resistant to penicillin, with MIC values always greater than or equal to 1 µg/mL, whereas all strains except 4 showing a mutated bla operon were penicillin sensitive. The 4 exceptional CC8 strains all exhibited MIC values of 0.25 µg/mL. On the other hand, all strains negative for the bla genes by mPCR showed MIC values less than or equal to 0.125 µg/mL (Table 2 and Supplemental Table S1). Taken together, these observations indicate that MIC values equal to 0.25 µg/mL are most likely the result of an additional, bla operon- and PBP2a-independent, low-level mechanism of penicillin resistance. A possible candidate responsible for this kind of resistance is PBP4. In fact, this protein has recently been shown to provide resistance to the entire class of β-lactam AB (da Costa et al., 2018), particularly if its expression is increased (Basuino et al., 2018). Our own genomic analyses using all 45 WGS strains (data not shown) and the PBP4 gene of S. aureus N315 (NC_002745) demonstrated that the gene was present in all strains. With a median of 99.7% (minimum = 98.6%, maximum = 100%), the similarities at the nucleotide level were all very high. For CC8/CLB, all strains except 1 (with 5 mutations) showed 4 mutations, always at the same sites and not affecting the active and β-lactam binding sites of PBP4. Based on these observations, PBP4 hardly accounts for the low-level mechanism of penicillin resistance. Further in vitro and WGS studies, however, are required to elucidate the role of PBP4 and possibly other proteins. Based on our analyses, the true percentage of penicillin-resistant strains is equal to the value obtained by the MIC assay, which was 40%. Compared with the MIC assay, mPCR for the bla genes detected penicillin-resistant strains with 100% sensitivity but a low specificity of 68% (Table 4). The considerable number of false positives (32%) observed by mPCR can be explained by WGS analysis. In fact, all strains in which a bla operon was detected by WGS also showed a positive mPCR result. The mPCR method, however, only detects the presence of the specific gene and does not consider its regulation and functionality. For DDC and DDE, the results were also substantially divergent. Compared with the MIC assay, the sensitivity and specificity for DDC were 68% and 100%, respectively; for DDE, they were 100% and 74%. With kappa values of 0.715 and 0.692, respectively, the agreement was only moderate.
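The agreement measures used throughout this comparison (sensitivity, specificity, Cohen's kappa, and the Wilson score interval described in the Statistics section) are straightforward to compute; the sketch below uses hypothetical 2 × 2 counts, not the study's actual table:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (Wilson, 1927)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def agreement(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and Cohen's kappa of a test against
    the MIC reference, from a 2x2 contingency table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                        # observed agreement
    pe = ((tp + fp) * (tp + fn)               # chance agreement from
          + (fn + tn) * (fp + tn)) / n**2     # the table marginals
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Hypothetical counts (test vs. MIC reference), not the study's data:
print(agreement(tp=58, fp=24, fn=0, tn=64))
print(wilson_ci(58, 58 + 24))  # CI for the fraction of true positives
```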
Normally, DDC gave a positive result if the corresponding MIC values were greater than or equal to 1.0 µg/mL (Table 2 and Supplemental Table S1). In contrast, DDE, like the mPCR method, generated too many false-positive results. This shows that the DDE and mPCR methods generated very comparable results, suggesting that S. aureus strains with positive PCR results for blaZ and negative results for mecA and mecC had possibly been selected to create the phenotypic DDE protocol. According to EUCAST (EUCAST, 2022a,b), a fuzzy zone edge together with a zone diameter greater than or equal to 26 mm is to be reported as penicillin susceptible, whereas a sharp zone edge with a zone diameter greater than or equal to 26 mm is reported as penicillin resistant (EUCAST, 2022b). Compared with the MIC assay results, however, the DDE protocol generated too many false-positive results, indicating that the criterion of the zone edge form should be reconsidered, at least for bovine strains of S. aureus. On the other hand, the DDC protocol should be reevaluated too, as its sensitivity is inappropriate. In fact, with a disk content of 10 IU (CLSI, 2018a, 2020), it uses a content 10 times higher than that of the DDE protocol (EUCAST, 2022a,b), obviously inflicting a high penicillin pressure on strains with MIC values between 0.125 and 1.0 µg/mL. Interestingly, among the DD methods given by CLSI and EUCAST, those for penicillin are among the most divergent, indicating that working out DD protocols for penicillin resistance in S. aureus is a difficult task. Applying additional WGS analyses, as performed in the present study, will, however, significantly contribute to establishing new DD protocols that can overcome the limitations of the current methods.

CONCLUSIONS

Penicillin resistance in S. aureus of bovine origin is highly dependent on the functionality of the bla operon promoter. When the promoter was functional, all strains showed MIC values ≥1 µg/mL and were penicillin resistant, whereas when it was mutated, they were penicillin susceptible except in those rare cases with a putative, low-level, bla operon-independent mechanism of penicillin resistance. Our analyses also demonstrated that penicillin resistance in bovine S. aureus is truly assessed by the MIC assay. In contrast, the concordance between the MIC assay, DD, and PCR for the bla genes was only moderate, demonstrating that DD and PCR analyses for clinical use need to be interpreted with great caution. In the present study of bovine S. aureus, a mutated promoter was exclusively found in S. aureus of CC8 and was always plasmid based. A transfer of this discovery to S. aureus of human origin is possible and needs further investigation.
Kinematic Modeling at the Ant Scale: Propagation of Model Parameter Uncertainties

Quadrupeds and hexapods are known for their ability to adapt their locomotive patterns to their functions in the environment. Computational modeling of animal movement can help to better understand the emergence of locomotive patterns and their body dynamics. Although considerable progress has been made in this subject in recent years, the strengths and limitations of kinematic simulations at the scale of small moving animals are not well understood. In response to this, this work evaluated the effects of modeling uncertainties on kinematic simulations at small scale. To do so, a multibody model of a Messor barbarus ant was developed. The model was built from 3D scans coming from X-ray micro-computed tomography. Joint geometrical parameters were estimated from the articular surfaces of the exoskeleton. Kinematic data of a freely walking ant were acquired using high-speed synchronized video cameras. Spatial coordinates of 49 virtual markers were used to run inverse kinematics simulations using the OpenSim software. The sensitivity of the model's predictions to joint geometrical parameter and marker position uncertainties was evaluated by means of two Monte Carlo simulations. The developed model was four times more sensitive to perturbations of marker positions than to perturbations of the joint geometrical parameters. These results are of interest for locomotion studies of small quadrupeds, octopods, and other multi-legged animals.

INTRODUCTION

Legged locomotion is the most common form of terrestrial animal movement (Christensen et al., 2021). Even if quadrupedal and hexapodal forms of locomotion have evolved independently (Blickhan and Full, 1987), they present similarities. Both quadrupeds and hexapods can adapt their locomotive patterns according to their objective (Hoyt and Taylor, 1981; Nirody, 2021). Like quadrupeds, hexapods exhibit a wide variety of locomotor strategies (Nirody, 2021), e.g., walking, running, and jumping (Musthak Ali et al., 1992) or even swimming (Schultheiss and Guénard, 2021) and gliding (Yanoviak et al., 2005). As some quadrupeds do, insects smoothly change their inter-leg coordination patterns based on their locomotion speed (Ambe et al., 2018). In the metachronal gait (or direct wave gait), hexapods propagate swinging movements from the hind legs to the forelegs, similarly to what quadrupeds do in the walking gait (Ambe et al., 2018). In the tripod gait, hexapods move their diagonal legs in phase, as quadrupeds do in the trotting gait (Ambe et al., 2018). These equivalences in locomotion mechanics generate similar ground reaction force patterns in quadrupeds and hexapods, as demonstrated experimentally by Full et al. (1991). In that study, the authors showed that at constant average speed, cockroaches function as a spring-mass system in which three legs add up to function as one leg of a biped or two legs of a quadruped. As opposed to bipedal and quadrupedal locomotion, hexapodal locomotion is characterized by its plasticity. For instance, hexapods can adopt quadrupedal or bipedal gaits to increase speed, as has been shown in cockroaches (Full et al., 1991). The bipedal posture adopted when the insect stands up allows for a longer stride length while maintaining the same stride frequency, thus raising the speed. In stick insects, the coordination of the middle legs and hind legs is similar to the typical regular gait of quadrupeds (Grabowska et al., 2012).
The emergence of quadrupedal gaits in hexapod robots has also been demonstrated when a sudden fault event occurs in one leg (Yang and Kim, 1998). However, these adaptations deserve further analysis to better understand the plasticity and dynamics of multi-legged gaits. The hexapodal gait was first described as an alternating tripod gait that ensures high static stability (Hughes, 1952) regardless of the support. Yet studies estimating ground reaction forces demonstrate different functions of the rear, median, and front legs (sustain, propel, push, or drag) (Cruse, 1976; Full et al., 1991; Grabowska et al., 2012; Reinhardt and Blickhan, 2014; Wöhrl et al., 2017). Other studies, dedicated to the effects of ground substrates or carried loads, demonstrated the plasticity of the tripod gait in response to mechanical constraints (Bernadou et al., 2011; Pfeffer et al., 2019; Merienne et al., 2020). These studies suggest that the hexapodal gait is more complex than a mere alternating tripod one. Furthermore, the small scale and the lack of a precise description of the architecture of the musculoskeletal system could explain why the hexapodal gait is less documented than the quadrupedal or bipedal gaits. Learning how insects adapt their locomotion strategies to their environment (motor and neural control), how each body segment moves for a given locomotion strategy (kinematics), and how forces are generated (muscle actuation) and transmitted (joint dynamics) could help answer biological questions and develop engineering applications. For instance, kinematic, dynamic, and motor control data regarding animal locomotion have proved indispensable for bio-inspired robotics development. Particular examples of applications include bio-inspired robot architectures (Lu et al., 2018), bio-inspired control strategies for legged robots (Dupeyroux et al., 2019; Ouyang et al., 2021), and bio-inspired actuation systems (Ahn et al., 2019), among others. Computational modeling of animal movement can help us better understand the emergence of locomotive patterns and their mechanics by means of musculoskeletal models. A musculoskeletal model is composed of a kinematic model coupled to a dynamic model. The kinematic model, which represents the skeletal system, is a set of body segments connected by joints (i.e., a multibody system). A dynamic model, which represents the muscular system, is a set of actuators attached to the skeletal system. The proper development of the kinematic model is essential for the later prediction of muscle and joint forces (Dunne et al., 2021). In kinematic modeling, constrained inverse kinematics, as opposed to unconstrained inverse kinematics, leads to a more realistic prediction of joint kinematics. Conversely, unconstrained inverse kinematics, which permits a fast exploitation of experimental data using stick models, can generate unrealistic behaviors, such as a model's body segment changing length (Dunne et al., 2021). This kind of behavior is unsuitable for musculoskeletal simulations. In constrained kinematic modeling, which is conducted using multibody models, the position and orientation of each segment of the kinematic chain are derived from the trajectories of experimental markers. This is done by optimization procedures that minimize the weighted least-squares distance between the experimental markers and the corresponding markers placed on the kinematic model (Lu and O'Connor, 1999).
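To make the optimization concrete, a toy sketch of this marker-driven inverse kinematics on a planar two-link chain is shown below; the segment lengths, marker positions, and weights are hypothetical (the actual study used OpenSim on the full ant model):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy planar 2-link chain: segment lengths are fixed, as in a
# constrained multibody model, so markers depend only on angles q.
L1, L2 = 1.0, 0.8   # segment lengths (arbitrary units)

def model_markers(q):
    """Marker positions on the model: one at the distal end of each
    segment, for joint angles q = (q1, q2)."""
    q1, q2 = q
    p1 = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
    p2 = p1 + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
    return np.vstack([p1, p2])

def residuals(q, markers_exp, weights):
    """Weighted marker-to-marker differences, stacked per coordinate,
    so that the sum of squares is the weighted least-squares distance."""
    diff = model_markers(q) - markers_exp
    return (np.sqrt(weights)[:, None] * diff).ravel()

# Hypothetical 'experimental' markers with some measurement noise.
markers_exp = np.array([[0.70, 0.72], [1.45, 0.95]])
weights = np.array([1.0, 2.0])   # e.g., trust the distal marker more

sol = least_squares(residuals, x0=np.zeros(2),
                    args=(markers_exp, weights))
print("Estimated joint angles (rad):", sol.x)
```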
The position and orientation of each segment of the kinematic chain, together with their first-order derivatives, can be used for further muscle and joint force estimation. In the case of vertebrates, the development and use of musculoskeletal models are mainly motivated by medical applications (REFS). In the case of insects, motivations are mostly related to biology, ecology, and evolution. Ramdya et al. (2017) developed a multibody model of Drosophila to study fast locomotor gaits. Guo et al. (2018) proposed a neuromusculoskeletal model for insects to study control strategies in gait patterns. David et al. (2016) and Blanke et al. (2017) developed musculoskeletal models of the dragonfly's mandible to study bite forces. A kinematic model of stick insects was developed by Theunissen and Dürr (2013). In the case of ants, locomotion studies mostly focus on experimental procedures. Examples are video-based kinematic analysis (Moll et al., 2010; Pfeffer et al., 2019), stepping pattern analysis (Zollikofer, 1994), center of mass tracking (Reinhardt and Blickhan, 2014; Merienne et al., 2020; Merienne et al., 2021), quantification of ground reaction forces (Reinhardt et al., 2009; Wöhrl et al., 2017), and mandible forces (Zhang et al., 2020), among others. Despite the aforementioned examples, the use of musculoskeletal models at the insect scale is not yet widespread, probably due to the technological barriers to acquiring experimental data (kinematic, dynamic, and morphometric data). When we compare the relative resolution of motion capture systems vs. the subject size, it can be argued that motion capture at the human scale is far more accurate than at the insect scale. In human motion analysis using reflective markers, the measuring uncertainty can reach 0.33 mm in a volume of 5.5 × 1.2 × 2.0 m³ (Eichelberger et al., 2016), i.e., 0.0275% of the smallest dimension. Motion analysis by means of physical markers is not easy in small insects. A pattern-matching procedure based on video films is a feasible solution for the moment. With the use of this technique at the small scale, our setup reached, on average, 3% resolution in each dimension of the calibrated volume (including tracking errors and pattern recognition errors). The difficulty at small scales lies in keeping the depth of field of the camera at a reasonable size when zooming in to get a clear whole-body image.

FIGURE 2 | Definition of the joint geometrical parameters and coordinate systems. For all coordinate systems, the x-axis is represented in red, the y-axis in green, and the z-axis in blue. For ball-and-socket joints, the center of a sphere fitted to the articular surface was considered as the center of rotation of the joint. For hinge joints, the rotation axis was defined as the line passing through the centers of two spheres fitted to the condyles of the joint. Fitted spheres are represented in red, and rotation axes are represented in yellow. (A) Representation of the sagittal plane. (B) Geometrical elements used to define the sagittal plane, the coordinate system of the thorax (x_t, y_t, z_t), and the coordinate system of the middle left coxa. The point P_SP was defined as the mid-point of the line segment passing through the centers of the two propodeal spiracles. The points P_h and P_a correspond to the centers of the spheres fitted to the thorax/head and thorax/abdomen joints, respectively. The point P_ml_co corresponds to the center of the sphere fitted to the articular surface of the middle left thorax/coxa joint.
(C-E) Geometrical elements used to define the rotation axes and the coordinate systems of the coxa/trochanter, trochanter/femur, femur/tibia, and tibia/metatarsus joints. In respective order, the points P_ml_to, P_ml_fe, P_fe_ti, and P_ti_mt were defined as the mid-points of the line segments representing the rotation axes of the joints. (F) Geometrical elements used to define the rotation center and the coordinate system of the metatarsus/tarsus joint. The point P_ml_ta corresponds to the center of the sphere fitted to the articular surface of the metatarsus/tarsus joint.

This problem is not encountered in larger subjects because the lens is far from the subject. Similar difficulties are faced in morphometric data acquisition in small insects, which is required for the definition of joint locations in musculoskeletal modeling. This implies that the effect of uncertainties in musculoskeletal modeling at the insect scale must be considered and evaluated to understand the limits of this tool in locomotion analysis. Estimation of uncertainties in kinematic modeling has been widely addressed at the human scale (see, for example, Groen et al., 2012; El Habachi et al., 2015; Martelli et al., 2015). At the insect scale, however, it is unclear how modeling assumptions affect the predicted results in kinematic modeling. The present work therefore evaluated the effects of modeling assumptions in kinematic analysis at the small insect scale, particularly on a Messor barbarus ant. To achieve this objective, (1) a whole-body kinematic model of the Messor barbarus ant was developed (Section 2.1), (2) an inverse kinematics simulation of the ant gait was reproduced using the developed model and experimental kinematic data (Section 2.6), and (3) the sensitivity of the predicted results regarding model parameter uncertainties was evaluated (Section 2.7).

METHODS

The global research methodology followed in this work is illustrated in Figure 1. Specimens 1 and 2 belong to the medium-sized caste of the Messor barbarus species (more details in Section 2.1). Specimen 1 was used to build a 3D model from micro-computed tomography (Section 2.2). 3D models of the body segments were used to extract joint geometrical parameters and to create a multibody model (Section 2.3 and Section 2.4). Specimen 2 was used to acquire experimental kinematic data and to extract marker trajectories (Section 2.5). Experimental kinematic data were used to scale the multibody model and to run an inverse kinematics simulation (Section 2.6). To evaluate the impact of the propagation of model parameter uncertainties on joint angles, two Monte Carlo (MC) simulations were conducted (Section 2.7). Model parameters subjected to uncertainty are represented by a Gaussian distribution icon in Figure 1.

Experimental Model

We used workers from a colony of Messor barbarus collected in April 2018 in Saint-Hippolyte (42.78° north; 2.97° east, Pyrénées-Orientales, France). Messor barbarus is a seed-collecting ant whose mature colonies can harbor tens of thousands of individuals (Hölldobler and Wilson, 1990). The body mass of the scanned subject was 8.92 mg. The main colony was kept in a box (L: 50 cm × W: 30 cm × H: 15 cm) with walls coated with Fluon® to prevent ants from escaping. The ants could shelter inside nests formed with test tubes (length: 20 cm; diameter: 2.5 cm) covered with opaque paper. They had access to water and a mixture of bird seeds. The experimental room was maintained at a constant temperature.
FIGURE 3 | Kinematic chain representing half of the ant locomotor system. Anatomy: th (thorax), pet (petiole), abd (abdomen), cox (coxa), tro (trochanter), fe (femur), ti (tibia), mt (metatarsus), ta (tarsus). Type of joint: hinge (example: mt/ta) or ball and socket (example: head/thorax).

Micro-Computed Tomography

Following the procedure used by Peeters et al. (2020), specimen 1 was stored in 90% ethanol, then stained in a 2 M iodine solution for a minimum of 24 h, and transferred into micro-tubes filled with 99% ethanol. It was then transferred to the Okinawa Institute of Science and Technology Graduate University (OIST, Japan) to be scanned using micro-computed tomography (µ-CT). This was performed using a Zeiss Xradia 510 Versa 3D X-ray microscope operated by the Zeiss Scout-and-Scan Control System software (version 11.1). Vertical stitching enabled three successive scans along the head-trunk-gaster axis, each with a resolution of 933 × 1,013 × 988 pixels (providing a voxel size of 5.7 µm). These scans were combined to give a whole-body resolution of 3,159 × 1,013 × 988 pixels. The DICOM images of the µ-CT scan were used to build the 3D models of the body segments. A segmentation was done using ITK-SNAP (version 3.6.0) (Yushkevich et al., 2006) to differentiate the body segments as follows: head, thorax, abdomen, coxa, trochanter, femur, tibia, metatarsus, and tarsus. The four tarsal segments were all lumped into a single rigid segment, called the tarsus in this work.

Extraction of Joint Geometrical Parameters

The types of joints were defined from both the literature and morphometric data (Liu et al., 2019). From the 3D models of the body segments, joint geometrical parameters were estimated from the articular surfaces of the exoskeleton using CAD software (3DEXPERIENCE, Dassault Systèmes, France). For ball-and-socket joints, the center of a sphere fitted to the articular surface was considered as the center of rotation of the joint (see Figures 2B,F). For hinge joints, the rotation axis was defined as the line passing through the centers of two spheres fitted to the condyles of the joint (see Figures 2C-E). This procedure for determining joint geometrical parameters was also used in insect biomechanical modeling by Blanke et al. (2017). Because of low perceived motion and to facilitate the convergence of the inverse kinematics algorithm, the internal rotation of the metatarsus of each leg was not considered [it was treated as a blocked degree of freedom (DOF)].
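As an illustration of this extraction step, the sketch below fits a sphere to a synthetic patch of articular-surface points using a standard algebraic least-squares formulation. The surface points and dimensions are assumed for the example and are not taken from the ant scans.

import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: returns (center, radius).

    |p - c|^2 = r^2 is linearized as 2 p.c + (r^2 - |c|^2) = |p|^2.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic articular-surface patch: points on part of a sphere plus noise.
rng = np.random.default_rng(1)
true_c, true_r = np.array([0.1, -0.2, 0.05]), 0.15   # e.g., millimetres (assumed)
theta = rng.uniform(0.0, np.pi / 2, 200)             # partial coverage only
phi = rng.uniform(0.0, np.pi, 200)
pts = true_c + true_r * np.column_stack([np.sin(theta) * np.cos(phi),
                                         np.sin(theta) * np.sin(phi),
                                         np.cos(theta)])
pts += rng.normal(scale=0.002, size=pts.shape)

c, r = fit_sphere(pts)
print("fitted center:", c, " fitted radius:", r)
# A hinge axis then follows as the line through the centers of two such fitted spheres.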
Creation of the Multibody Model

A multibody model was created, representing the whole-body locomotor system of the Messor barbarus. According to the recommendations of the ISB (Wu et al., 2002, 2005), a coordinate system was defined for each body segment and for the ground. All coordinate systems were defined as right-handed and orthogonal, as follows (see Figure 2):
• Definition of the sagittal plane: the plane perpendicular to the line passing through the centers of the two spheres fitted to the propodeal spiracles and containing the point P_SP. The point P_SP was defined as the mid-point of the line segment defined by the two propodeal spiracles (see Figure 2B).
• Global coordinate system (x_g, y_g, z_g): the z_g-axis points upward, parallel to the field of gravity. The x_g-axis points in the direction opposite to the direction of travel. The y_g-axis was defined as the common axis perpendicular to the x_g- and z_g-axes.
• Thorax coordinate system (x_t, y_t, z_t): the origin of this coordinate system was defined as the mid-point of the line segment passing through the centers of the spheres fitted to the thorax/head and thorax/abdomen joints, points P_h and P_a in Figure 2B, respectively. The y_t-axis was defined parallel to the line segment P_h P_a and pointing anteriorly. The x_t-axis was defined as the common axis perpendicular to the normal vector of the sagittal plane and to y_t. The z_t-axis was defined as the common axis perpendicular to the x_t- and y_t-axes.
• For hinge joints, the origin of the coordinate system was chosen as the mid-point of the line segment representing the rotation axis (for example, points P_ml_to and P_ml_fe in Figure 2C, P_ml_ti in Figure 2D, and P_ml_mt in Figure 2E). The z-axis was defined parallel to the rotation axis and pointing medially. The y-axis was defined perpendicular to the z-axis and pointing to the origin of the coordinate system of the previous segment. The x-axis was defined as the common axis perpendicular to the y- and z-axes.
• For ball-and-socket joints, the origin of the coordinate system was chosen as the center of the sphere fitted to the articular surface (for example, points P_ml_co and P_ml_ta in Figures 2B,F, respectively). The y-axis was defined parallel to the line passing through the origin of the coordinate system and the origin of the coordinate system of the previous segment, pointing proximally. The x-axis was defined as the common axis perpendicular to the normal vector of the sagittal plane and to y. The z-axis was defined as the common axis perpendicular to the x- and y-axes.
According to the previous definitions of the coordinate systems, the following convention for rotations was adopted: abduction, positive rotation about the x-axis; adduction, negative rotation about the x-axis; internal rotation, positive rotation about the y-axis; external rotation, negative rotation about the y-axis; flexion, negative rotation about the z-axis; and extension, positive rotation about the z-axis. The model was composed of 39 segments and 65 DOFs. Segments were considered as rigid bodies, and joints were considered without clearance. Half of the kinematic chain of this model is presented in Figure 3. Forty-seven virtual markers were placed on the model according to the tracked anatomical landmarks (see Figure 4). The model was created using the software tool NMSBuilder (version 2.1) (Valente et al., 2017) and finally exported in OpenSim format. The range of motion of the joints was constrained to feasible values to aid the convergence of the inverse kinematics algorithm. These values were determined in OpenSim by articulating each DOF of the model until structures of the joint segments touched each other. The obtained values are presented in Tables 1 and 2.
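The following is a minimal sketch of how one such right-handed segment frame (here, the thorax frame) can be assembled from the fitted joint centers and the sagittal-plane normal. The point coordinates are illustrative assumptions, not values from the ant model.

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def thorax_frame(p_h, p_a, sagittal_normal):
    """Return the origin and a 3x3 matrix whose columns are (x_t, y_t, z_t)."""
    origin = 0.5 * (p_h + p_a)
    y_t = unit(p_h - p_a)                       # along P_a -> P_h, pointing anteriorly
    x_t = unit(np.cross(sagittal_normal, y_t))  # perpendicular to the plane normal and y_t
    z_t = np.cross(x_t, y_t)                    # completes the right-handed triad
    return origin, np.column_stack([x_t, y_t, z_t])

p_h = np.array([1.2, 0.1, 0.4])   # center of the thorax/head joint sphere (assumed)
p_a = np.array([0.2, 0.1, 0.3])   # center of the thorax/abdomen joint sphere (assumed)
n_sp = np.array([0.0, 0.0, 1.0])  # sagittal-plane normal (assumed)

origin, R = thorax_frame(p_h, p_a, n_sp)
print("origin:", origin)
print("R (columns x_t, y_t, z_t):\n", R)
print("orthonormal check:", np.allclose(R.T @ R, np.eye(3)))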
Kinematic Data Acquisition and Treatment

Kinematic data of a free walking ant (mean speed over the length of the calibrated walkway: 3.4 mm s⁻¹) were acquired using high-speed synchronized video cameras (JAI GO-5000M-PMCL). The experimental setup was composed of a wide walkway through which the ant walked, with five cameras (one on the top and two on each side of the walkway) and three infrared spots (see Figure 5). The shutter time was 1/3,333 s, and the acquisition time was set to 10 s with a sampling frequency of 300 Hz. The infrared spots were added to compensate for this short shutter time. The resolution of the camera sensor was 2,560 × 2,048 pixels. Using the Hiris software of R&D Vision (version 5.2.0), the active sensor window was adjusted to the ant size in a 2,000 × 418 pixel rectangular area. The average field of vision of the cameras was 15.8 × 4.9 × 7.8 mm, which gives a spatial resolution of 0.0096 mm/pixel. The obtained raw videos are available from the project repository. Following a similar protocol to Merienne et al. (2020), the filming procedure was as follows. (1) The ant was randomly collected from the colony and left in a box for 15 min in order to reduce the stress of the capture. (2) The ant was placed at the beginning of the walkway, and the recording started when it entered the calibrated volume. The temperature of the room was 26 ± 0.2 °C during the filming procedure. Only one gait cycle was studied to avoid the variability of motor control across different gait cycles (changes of walking speed, balance management, and changes of movement direction). Video recordings were processed afterwards with the Vicon Peak Motus (version 10) software tool. Segment extremities were tracked semi-automatically during a gait cycle using a pattern-matching technique. The gait cycle was defined as beginning when the left middle leg leaves the ground and lifts, and ending when that same leg leaves the ground again. Kinematic data were filtered with fourth-order Butterworth low-pass filters with a cutoff frequency of 5 Hz. They were then resampled from 300 to 100 Hz to decrease computation time. The spatial coordinates of the anatomical landmarks (those represented in Figure 4) were exported in a c3d format file. This file is available from the project repository.

Model Scaling and Inverse Kinematics Analysis

The spatial coordinates of the anatomical landmarks were used to scale the multibody model and to run inverse kinematics simulations. A scaling procedure was carried out to fit the model (originally created from the morphology of specimen 1) to the morphology of specimen 2. This was performed using the open-source software tool OpenSim (version 4.0) (Seth et al., 2018). Using the scaled model, inverse kinematics simulations were then performed in OpenSim. Joint angles as well as root mean square errors (RMSEs) were obtained from these simulations.

Propagation of Model Parameter Uncertainties

In order to evaluate the sensitivity of the calculated kinematic data to model parameter uncertainties, two MC simulations were conducted. A similar procedure was used by Martelli et al. (2015) and Myers et al. (2015). In the first MC simulation, the positions of the model markers were randomly perturbed according to their uncertainty. Random values were assumed to have a uniform distribution (i.e., all outcomes were considered as equally likely). Variations were assumed to be the same in all directions of the measurement volume. Therefore, the uncertainty zone for the model markers was assumed to be spherical. The radius of these spherical uncertainty zones was chosen as a common residual value of the camera calibration process for the experimental setup used: 0.4 mm.
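A minimal sketch of this first Monte Carlo loop is given below: each model marker is displaced by a random vector drawn uniformly inside a 0.4 mm sphere, inverse kinematics is re-run, and coverage intervals are taken as twice the standard deviation of the resulting joint angles. Here, run_inverse_kinematics is a hypothetical stand-in for the OpenSim-based pipeline, and the nominal marker array is a placeholder.

import numpy as np

rng = np.random.default_rng(42)
RADIUS = 0.4  # mm, spherical uncertainty zone for the model markers

def uniform_in_sphere(radius, n):
    """Draw n points uniformly inside a sphere (random direction, cube-root radius)."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = radius * rng.uniform(0.0, 1.0, size=(n, 1)) ** (1.0 / 3.0)
    return v * r

def run_inverse_kinematics(marker_positions):
    # Placeholder: the real pipeline would write the perturbed marker set and
    # call the OpenSim inverse kinematics solver, returning joint angles of
    # shape (n_frames, n_dofs).
    return np.zeros((100, 65))

nominal_markers = np.zeros((47, 3))  # nominal model marker positions (placeholder)
angles = []
for _ in range(1000):                # 1,000 iterations, as in the study
    perturbed = nominal_markers + uniform_in_sphere(RADIUS, len(nominal_markers))
    angles.append(run_inverse_kinematics(perturbed))

angles = np.stack(angles)            # (n_iter, n_frames, n_dofs)
mean_angle = angles.mean(axis=0)
coverage = 2.0 * angles.std(axis=0)  # coverage interval per frame and DOF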
In the second MC simulation, the joint geometrical parameters (location and orientation) were randomly perturbed. The uncertainty in the location and orientation of the joints is mainly related to operator-dependent variability in the treatment and identification of the articular surfaces. In order to define the perturbation magnitudes (translation and rotation) introduced to the joint geometrical parameters, several identification procedures of the articular surfaces were carried out. Cylindrical uncertainty zones were assumed for hinge joints, while spherical uncertainty zones were assumed for ball-and-socket joints. The radius of the cylindrical and spherical uncertainty zones was considered to be the same for all the joints and equal to 0.2 mm. These MC simulations were implemented and run by means of the OpenSim API. One thousand iterations were carried out for each MC simulation, which was enough to guarantee a stabilization of the average values. Average values of the joint angles at each time step were calculated from the obtained results. Coverage intervals were defined as twice the standard deviation. A graphical representation of these results is presented in Figure 6. The sensitivity of the kinematic results regarding model parameter uncertainties was defined as the signal-to-noise ratio (SNR) of the joint angles during the gait. The SNR was calculated as the maximum amplitude of the signal (also called the power of the signal, P_s) divided by the maximum coverage interval (also called the power of the noise, P_n) of the joint angle during the gait. Therefore, an SNR value was obtained per degree of freedom for the analyzed gait cycle.
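The SNR computation can be summarized by the short sketch below, which follows the definitions above (P_s as the peak-to-peak amplitude of the mean trajectory, P_n as the maximal coverage interval over the gait). The synthetic joint-angle data are illustrative only.

import numpy as np

def snr_per_dof(angles):
    """angles: (n_iter, n_frames, n_dofs) joint angles from a Monte Carlo run."""
    mean_traj = angles.mean(axis=0)                     # (n_frames, n_dofs)
    p_s = mean_traj.max(axis=0) - mean_traj.min(axis=0) # peak-to-peak amplitude
    coverage = 2.0 * angles.std(axis=0)                 # coverage interval per frame
    p_n = coverage.max(axis=0)                          # maximal coverage over the gait
    return p_s / p_n                                    # one SNR value per DOF

# Example with synthetic data: a sinusoidal 'flexion angle' plus iteration noise.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 100)
signal = 20.0 * np.sin(2 * np.pi * t)[:, None]          # degrees, one DOF
angles = signal[None, :, :] + rng.normal(scale=1.0, size=(1000, 100, 1))
print("SNR:", snr_per_dof(angles))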
RESULTS

In order to determine how modeling assumptions affect inverse kinematic results at the ant scale, a multibody model of the Messor barbarus was developed together with a simulation framework to evaluate its sensitivity. Both the model and the simulation framework are freely available on the SimTK repository: https://simtk.org/projects/barbarus. From the experimental kinematic data, an inverse kinematics simulation was conducted. The results of this simulation, representing a gait cycle of free locomotion of the Messor barbarus, are summarized in Tables 3 and 4. A video of the simulated kinematics is available from the project repository. These results correspond to the range of motion of the joint angles. The whole set of results is available from the project repository and can also be reproduced from the model and the experimental kinematic data. It can be noticed that the trochanter/femur (tro/fe) joint is the one with the widest range of motion, while the thorax/coxa joints exhibit the smallest one. The average RMSE of the inverse kinematics simulation was 0.21 mm, which corresponds to 3.2% of the specimen size.

FIGURE 7 | Kinematic results obtained from the simulation of the ant model and the experimental kinematic data for the flexion angle of the middle right leg at the (A) thorax/cox, (B) cox/tro, (C) tro/fe, (D) fe/ti, (E) ti/mt, and (F) mt/ta joints. The recorded and simulated gait cycles lasted 1.39 s. These results are a sample of the whole set of results available from the project repository. Solid lines indicate the mean values from the Monte Carlo simulations from the marker perturbation (green) and from the axis perturbation (blue). For marker and axis perturbations, respectively, the green- and blue-shaded regions represent the confidence interval (calculated as twice the standard deviation). The dashed vertical lines (37% and 87%) indicate when the legs of both tripods were on the ground. The SNR of the thorax/cox flexion angle obtained from the axis perturbation simulation is illustrated in (A). P_s (standing for power of the signal) corresponds to the peak-to-peak amplitude of the signal. P_n (standing for power of the noise) corresponds to the maximal coverage interval of the joint angle during the gait.

The sensitivity of the kinematic results regarding model parameter uncertainties was evaluated by means of the SNR. These results are summarized per set of joints, from marker perturbation as well as from axis perturbation, in Table 5. High SNR values indicate that the power of the signal (the computed joint angle) is representative with respect to the power of the noise (the confidence intervals). SNR values near or lower than 1 indicate that the dynamics of the signal of interest might be hidden by noise. It can be noticed that the computed kinematics is more sensitive to marker perturbation than to joint axis perturbation (Table 5). The perturbation applied to the markers generated an SNR of 2.2 on average for all the joints. This means that the dynamics of the studied signal (the computed joint angles) can be observed despite possible variations during the motion analysis process. The SNR from axis perturbation was almost four times higher than that from marker perturbation. No significant differences in sensitivity were found between the joints of the legs on the right side of the body with respect to those on the left side. No tendency could be inferred from the sensitivity of the joints with respect to their anterior-posterior position: front, middle, and rear. The joint that showed the highest SNR values (and consequently the lowest sensitivity) was the fe/ti joint, and this was the case for both marker and axis perturbations. An illustration of the SNR computation is given in Figure 7A. From these results, it can be noticed that the confidence intervals of the joint angles when perturbing the axis location and orientation were smaller than the confidence intervals obtained from the marker position perturbation.

DISCUSSION AND CONCLUSIONS

In this paper, the propagation of parameter uncertainties in kinematic modeling has been evaluated at the small scale. This work demonstrates the feasibility of using biomechanical models to study locomotion in relatively small animals. Because of their scale, motion analysis techniques for hexapods are less developed than those for quadrupeds and bipeds. In relatively big animals, the use of several reflective markers per segment allows good precision of the kinematic data. However, the use of physical markers is not easy in motion analysis of small insects. This implies that the capabilities of small-scale biomechanical modeling techniques must be well evaluated. To do so, a multibody model of a Messor barbarus ant was developed. It is available in open source from the project repository and can be used and enhanced by the scientific community. In addition, the model could allow biologists to study function/structure relationships of Messor barbarus. The whole set of experimental and simulated kinematic data is also available from the project repository. In spite of the differences in morphology of the studied species, the obtained joint angles were of the same order of magnitude as those reported in the literature on ant kinematics (see Table 6). The difference between the angle ranges of the left and right legs comes from the fact that the ant did not walk perfectly straight. The obtained kinematic data are valuable for roboticists implementing bio-inspired gaits in robots (see Ouyang et al. (2021) for example). A possible error source in the conducted kinematic simulation could be linked to the use of two different specimens for acquiring the experimental data (one for the geometrical 3D model and one for the experimental kinematic data).
When using two subjects to perform a constrained kinematics simulation, a scaling procedure is required, which is naturally an additional source of errors. This might be one of the main reasons for the obtained RMSE values. In comparison to human locomotion simulations, the normalized RMSE values obtained for the ant locomotion simulation were greater. In human simulations, it is recommended not to exceed a relative RMSE of 0.6% of body size (in contrast to the normalized RMSE of 3.2% obtained in this work). This difference can also be related to the fact that the ant body is composed of more segments than the human one. Thanks to the developed model, the impact of the propagation of model parameter uncertainties in inverse kinematics simulations at the insect scale was evaluated. The obtained SNR values indicate that the geometric and kinematic measurement techniques used are feasible for the development of multibody models at the ant scale. The fact that the model is more sensitive to marker perturbations indicates that efforts in kinematic modeling at the ant scale must be centered on the kinematic acquisition (marker definition, placement, tracking, etc.) rather than the geometric acquisition (µ-CT, segmentation, joint parameter definition, etc.). The lower sensitivity observed at the fe/ti joint can be explained by the large range of motion of this joint and also by the fact that it is composed of the two longest segments of the limb. Long segments are easier to track, and the perturbation of the measurement process has a lower impact than in the case of short segments. The absence of significant differences in sensitivity between the joints of the legs on the right side of the body compared to those on the left side can be attributed to the symmetry of the video acquisition system with respect to the walkway. This study presents several limitations, however. From an experimental point of view, the following aspects can be improved. Each body segment was tracked by only two markers. The number of tracked markers per segment could be increased to improve the quality of the simulation. Additionally, emerging automatic tracking techniques (e.g., deep-learning-powered motion tracking) should be explored as an alternative to reduce tracking time and to increase the number of tracked points per segment. Lastly, the four tarsal segments were all lumped into a single rigid segment. This was due to the configuration and the capacity of the experimental setup (camera resolution, number of cameras, camera position, etc.), which did not provide enough resolution to track the tarsal segments individually. On the other hand, from a modeling point of view, the segments of the ant were considered as rigid bodies because of the complexity of taking body deformation into consideration. This assumption merits a deeper analysis in order to determine the effects of segment compliance in insect locomotion, which seems to play an important role (Blickhan et al., 2021). Finally, future work is required to develop a dynamic model of the ant gait. This requires determining muscle parameters (geometrical and force-generating parameters), segment mass and inertia properties, and ground reaction forces. This study contributes to the construction of a musculoskeletal model of ants which can be useful in the study of evolution, neural control, and biomimetic applications.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories.
The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS

SA-T made significant contributions to the conception, design, execution, and interpretation of the findings being published and to drafting and revising the manuscript. JD made significant contributions to the biological studies, the extraction of data, the interpretation of the findings being published, and drafting and revising the manuscript. AK made significant contributions to the execution and interpretation of the findings being published. PM and J-ML made significant and substantial contributions to the conception, design, and interpretation of the findings being published, as well as to revising the manuscript.

FUNDING

This work was partially subsidized by the CNRS AO MITI Biomim funding. Work by JD was supported by a scholarship from the Collectivité Territoriale de Martinique.

ACKNOWLEDGMENTS

The authors would like to thank the staff of the scanning facility (… Unit, Okinawa Institute of Science and Technology, Japan) who performed the scans that enabled determination of the segment and joint geometries. They would also like to acknowledge Moran Le Gleau's contribution to the treatment of the µ-CT data under the supervision of Adam Khalife, and Tanguy Puluhen for his help during the development of the multibody model. Thanks also to the CNRS GDR 2088 Biomim, which encouraged this work.
2022-03-01T14:08:29.827Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "5b0eed302e2ecb49ac42b7a8efe948298f9a1c50", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2022.767914/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "5b0eed302e2ecb49ac42b7a8efe948298f9a1c50", "s2fieldsofstudy": [ "Biology", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
230640612
pes2o/s2orc
v3-fos-license
Rental equivalence, owner-occupied housing and inflation measurement

In this paper, we use unique supervisory property-level rental data to estimate a rental equivalence (RE) measure for owner-occupied housing (OOH) for the Irish housing market. Our data from the official, domestic rental regulator allow us to simultaneously address three significant issues which have arisen in the empirical application of rental equivalence measures. First, we are able to consider the differences in using data on both new and existing rent levels in the analysis; we can also control for other utility costs; and, finally, we are able to estimate an RE measure in the absence of rent controls. To better approximate the OOH structure of the Irish residential market, we also avail of regional data to estimate 32 separate hedonic rent models and use the results to reweight the RE index. We find that our subsequent estimate of RE results in a reduction in the Irish headline rate of consumer price inflation by 0.4 percentage points. Furthermore, we show that there are considerable differences in the inflation rate if new rather than existing rents are used in the rental equivalence measures, with measures based purely on existing rents biasing downwards both the rental equivalence measure and the overall consumer price index. This suggests that considerable care is required from policymakers in using rental equivalence methods in the presence of data gaps.

Introduction

Measuring the cost of owner-occupied housing (OOH) for inclusion in the consumer price index (CPI) has been an area of considerable academic and policy debate (Dougherty and Van Order, 1982). Given that home-ownership constitutes the majority tenure type in many Western economies, and that housing is one of the largest cost items facing households, the approach chosen to measure the cost of housing for homeowners in the CPI is likely to have a non-trivial impact on the official rate of inflation. Across many national statistical agencies, a number of different measures are used to measure the cost of housing; these include the payments, net acquisitions, rental equivalence and user cost approaches. While there is considerable debate as to the merits of these different methodologies (Hill, Steurer, and Waltl, 2019; Diewert, Nakamura, and Nakamura, 2009; Diewert and Shimizu, 2019), the rental equivalence approach has long been used by the statistical agencies of a significant number of countries (such as the US, Japan, Denmark, Norway, Switzerland, and South Africa). The approach is also advocated in ILO et al. (2004). By directly measuring the opportunity cost to the homeowner of their property (or what they could expect to pay for the consumption services of living in their home), the rental equivalence measure is theoretically suited to the consumption focus of consumer price index measurement (e.g. not based on asset values) and it is also directly comparable to the cost of housing for renters, which is incorporated into the CPI. Despite its attractiveness from a theoretical perspective, a number of issues have been cited in the literature concerning the rental equivalence approach, mostly to do with its empirical application. Three data-based critiques have been put forward. First, rental data on the stock of existing rents, not the flow or new rental price, has historically been used (Ambrose, Coulson, and Yoshida, 2018).
This is problematic as, if a homeowner were to put their property on the market, then it is the new or marginal rent they would receive, as opposed to the average or stock rent level. Using data on existing as opposed to new rents may then underestimate the cost of OOH in the CPI. Second, the services the household obtains from renting the property are not equivalent between OOH and renters. For example, if rental prices include certain utility costs, then rents will diverge from the shelter cost of housing which the rental equivalence approach seeks to capture (Verbrugge and Poole, 2010). Third, rental equivalence cannot be estimated correctly in residential markets where rent controls prevail, as market prices are clearly impacted by price controls (Hill, Steurer, and Waltl, 2019). A further methodological criticism has been put forward by Arévalo and Ruiz-Castillo (2006), who note that the selection bias between homeowners' housing stock and renters' housing stock can distort any estimated index and must be fully accounted for in index design. In this paper, we use a unique tenancy-level dataset in the absence of rent price controls to address some of these concerns. More specifically, we draw on supervisory tenancy-level data from the Irish rental regulator on all newly registered rental agreements in Ireland to estimate a rent index on the flow price of new rents, similar to Ambrose, Coulson, and Yoshida (2015). To develop an OOH-adjusted rent index and address the selection bias noted by Arévalo and Ruiz-Castillo (2006), we estimate 32 separate hedonic regressions to develop adjusted regional-housing type rent prices. We then use the OOH regional housing structure from the Irish Census of Population to re-weight the 32 indices to create a rental equivalence measure for new rents that mirrors the OOH structure in the Irish market. We then incorporate this re-weighted index into the Irish CPI to estimate the counterfactual impact on consumer price inflation. Our data also allow us to strip out the impact of other utility costs on rental price variation, which ensures the services value of rents is as close as possible to the shelter cost concept which equates to the OOH opportunity cost, a point noted by Verbrugge and Poole (2010). Our data also allow us to quantify the impact of using stock rental data (existing rental agreements) relative to flow rental data (new rents) in estimating rental equivalence measures. As our microdata provide both stock (existing rent) and flow (new rent) registrations, we can directly estimate OOH-adjusted rent indices based on new and existing rents in the Irish market. This provides a direct comparison on the long-debated issue of whether to use stock or flow rent data in estimating rental equivalence measures. The literature on the practical implementation of the rental equivalence approach is somewhat ambiguous on the issue of stocks versus flows. As noted by Bentley (2018), Lewis and Restieaux (2015) argue that the use of stocks data is best practice, citing IMF, ILO, OECD, Eurostat, UNECE, and World Bank (2004). However, IMF, ILO, OECD, Eurostat, UNECE, and World Bank (2004) do not appear to favour one approach over the other. They conclude that the rental equivalence approach is based on estimating how much owner-occupiers would have to pay to rent their dwelling.
Johnson (2015) contends that arguments could be made for using the marginal cost of renting, depending on the exact purpose of the rental equivalence approach. Ambrose, Coulson, and Yoshida (2015) argue that their repeat rent index, which uses only new contracts with new tenants, is better for studies of the housing market, while they acknowledge that the indexes compiled by the Bureau of Labor Statistics (BLS) may be more appropriate for measuring cost-of-living indexes because they represent the rents of the typical household. The BLS indexes tend to reflect rents that are up to a year old. Consequently, given this uncertainty, we think our contribution is important in quantifying the subsequent impact on general inflation rates of approaches based on either stock or flow data. A number of papers are close to our research. First, Ambrose, Coulson, and Yoshida (2018) use the repeat rent index (RRI) developed by Ambrose, Coulson, and Yoshida (2015) and estimate the impact on the US CPI of using new rental prices rather than existing rents. They then link the estimates to interest rate setting through the estimation of a counterfactual Taylor rule under different CPI calculations. However, in that study, the authors do not re-weight the RRI to take the structure of owner-occupied housing into account in order to deal with the selection bias from different housing stock characteristics. We therefore extend their work to develop an OOH new rent index and show that the impacts of this adjustment on the CPI are non-trivial. The benefits of microdata estimates of rental equivalence are highlighted by Garner and Verbrugge (2009). Two main findings emerge from our analysis. First, we demonstrate that adjusting for the structure of owner-occupied housing, controlling for other utility costs and using data on new rental tenancies leads to an inflation estimate for OOH that is approximately two percentage points lower than the equivalent for a sample of renters. This in turn leads to a clear impact on the overall measurement of consumer price inflation. Using our OOH-adjusted index relative to one based purely on a sample of renters results in the rate of consumer price inflation (the average monthly annualised inflation rate) being lower by nearly 0.4 percentage points. Second, we quantify the impact of using rent levels for new versus existing tenancies in estimating an OOH rental equivalence measure. The OOH index using existing rents is materially lower than the index based on new rents, with a resulting, considerable impact on the rate of consumer price inflation: the annualised change in inflation was 0.6 percentage points lower using existing rents relative to new rents. From a broader policy perspective, these results suggest that policymakers and statistical agencies who are deploying the rental equivalence approach should attempt to address some of the associated data gaps, as they have a considerable impact on the associated measurement of official inflation rates. The rest of the paper is structured as follows: Section 2 presents the methods and data. Section 3 estimates the main OOH rent index and the impact on inflation. Section 4 considers the differences between using stock (existing rent) versus flow (new rent) data.

Data and Background

Ireland has traditionally had a very high share of home ownership; in the latest census of population, nearly 68 per cent of households were reported as homeowners (either outright or with a mortgage).
This had fallen from a peak of nearly 80 per cent in 1991 due to a multitude of factors including affordability issues (Corrigan, Foley, McQuinn, O'Toole, and Slaymaker, 2019).¹ Given the concentration of households in OOH, how OOH pricing is treated in the measurement of the CPI is critically important in Ireland, as in other economies. Figure 2 presents the trend in the Irish CPI as well as the pricing of housing and utilities. The official measurement of housing cost presently adopted in the Irish market by the Central Statistics Office (CSO) for OOH is the payments approach. The payments approach uses a combination of data on house prices, interest rates and loan-to-value ratio assumptions. Consequently, the approach results in relatively large fluctuations in the measurement of the housing cost series, as house prices are typically much more volatile in the Irish market than private rents (as presented in panel (b) of Figure 2). The variation between renters' cost of housing and the cost of OOH, due to issues around mortgage interest and dwelling maintenance etc., can have significant implications for the measurement of the CPI. For example, using mortgage-related pricing for non-mortgaged homeowners does not give a very accurate costing for this group, as the type of accommodation and the systematic differences in costs (as interest rates and other mortgage costs may differ over time) faced by those with and without a mortgage may be substantial. Given this context, we draw on a unique, extensive micro dataset to estimate a rental equivalence measure of OOH. One of the particular novelties of this paper is the use of supervisory micro data at a tenancy level provided by the Residential Tenancies Board (RTB), the Irish private rental market regulator. In Ireland, every new and Part IV renewal tenancy must be registered by law with the RTB.² The obligatory legal submission by the landlord provides information on the level of the contracted rent (in €), the frequency of the rent payment, the duration of the contract and the extent to which the tenant pays other utility costs. The other utility information captured is whether the tenant pays electricity, oil, TV licence, waste, gas and other charges in addition to the rental payment. Information is also provided on the property, including the address, the floor area (in square metres), the dwelling type (e.g. house, apartment, bedsit, part of house, maisonette), the property type (semi-detached, detached, terraced), the number of bedrooms and the number of occupants. As the submission of these forms is mandated under law as part of the Residential Tenancies Act 2004, an extensive database is available for analysis. These data have been used to produce a regular index monitoring Irish rental price trends (Lawless, McQuinn, and Walsh, 2018). For the purposes of this paper, we use the registered tenancies covering the period August 2012 to December 2016 inclusive. Two legislative changes in Ireland dictate the choice of this period. In January 2017, the first rent control measures on private rental prices were introduced in the market. These Rent Pressure Zones capped rental price increases in the two largest cities at 4 per cent per annum.

¹ Irish census data can be found at www.cso.ie.
² Part IV renewals are tenancies that have been in existence for between 4 and 6 years; the landlord is required to re-register the tenancy with the RTB to indicate that it is still active, as well as to provide updated rental and property characteristics.
As our interest is in estimating rental equivalence measures without rent controls, we end the sample just before the rent regulations began. The starting period, 2012, was chosen to ensure that sufficient new and existing tenancy registrations were included, as the database does not contain all existing (renewal) tenancies before this point. The RTB dataset contains information on both the flow and the stock of rents. The dataset contains mainly flow data, since it consists primarily of new tenancies (i.e. defined as registered tenancies of those who begin a new lease in any given quarter). On the other hand, the stock of rents measures the pool of rents for ongoing tenancies by tenants who began their lease in the past. The RTB dataset also contains a small proportion of renewed tenancies, which correspond to tenants who hold the same lease continuously for four to six years, at which point the tenancy agreement must be legally re-registered. We begin by considering the data on new rental agreements. These data are best placed to proxy the opportunity cost to homeowners, by representing the rental price they would receive if they placed their property on the market at the present time. The summary figures for the sample used in this paper are presented in table 1. The average tenancy length was just under 13 months in duration, but with considerable variation. The standard deviation of tenancy length is approximately 6 months, with minimum and maximum tenancy lengths of 4 and 48 months. The average number of bedrooms per property was approximately 2.4 but ranged from 1 to 5. The number of bedspaces was somewhat larger at over 3.7, suggesting multi-room occupancy in many cases. In terms of the structure of dwellings, 10 per cent of properties were detached houses, 23 per cent were semi-detached houses, and a further 14 per cent were terraced houses. Apartments accounted for 46 per cent of the total. An important aspect considered in this paper is controlling for the cost of other utilities that could force a wedge between the appropriate opportunity cost to a homeowner and other renters, i.e. the rent could be higher (lower) than the opportunity cost if it included other costs. It can be seen that 80 per cent of renters also paid electricity, which suggests one-in-five did not and are likely to have this cost included in their rent. A further 25 per cent paid their oil bills, 74 per cent a TV licence, 50 per cent their waste charges, and 50 per cent their gas bills. The high share of households not paying any other charges is a clear indicator that landlords are pricing some of these costs into the rent, and therefore this must be controlled for when developing an owner-occupied housing cost. Finally, Figure 2 presents the structure of registered tenancies in terms of their geographic location. This is a critically important component, as it is likely that renters have a different housing location structure throughout the country than owner-occupiers. This, again, must be accounted for in any estimate of a rental equivalence measure. Nearly 40 per cent of rental tenancies are registered in Dublin, the capital city.

Hedonic Rental Estimation and Rental Trends

As a first step, we estimate a range of hedonic models for rental prices which assess the impact of various housing-type and regional indicators, and of variables capturing other utilities and costs, on rental pricing. The aim of these models is to demonstrate the impact of these variables on new rents.
Within these models, we also include a series of time dummies for the month-year of the data. The set of coefficients on these dummies represents the inflation trend which can be used in calculations for the CPI. Our baseline specification for the hedonic model is as follows:

ln(R_{i,t}) = α + X_{i,t}β + D_{i,t}γ + U_{i,t}δ + Σ_t τ_t T_t + ε_{i,t}    (1)

where ln(R_{i,t}) is the (log) monthly rent price of property i in period t. Please note that these data are repeated cross-sectional datasets, so the notation R_{i,t} contains a comma to distinguish these data from panel data, which would follow the same property over time. We include three vectors of control dummies to purge the rental data of variation not relating to the trend in the market value of rents. All control variables that are in continuous format are included in logs, unless otherwise noted. The vector of tenancy controls, X, includes variables on payment frequency, number of tenants and tenancy length; the vector D of dwelling characteristics includes the floor area of the property, the number of bedrooms, the number of bedspaces, and dummies for the dwelling type (detached house, semi-detached house, terraced house, other flats, apartments or sub-divided part of a house). The vector U includes dummies for whether the tenant pays other utilities. We include a separate dummy for the payment of electricity, oil, TV licence, waste, gas and others. Our empirical estimation strategy will therefore be to estimate a series of hedonic models which ensure that the variation in rental trends is not affected by variation in the included covariates. These trends are taken as the coefficients (τ_t) for each time period on the vector of time dummies T_t and are used as the rate of inflation for rents in our various scenarios. The coefficients are taken as an exponent to get the non-log trend in rental prices.
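As an illustration of this specification, the sketch below estimates a hedonic log-rent regression with month-year dummies on synthetic data and exponentiates the time-dummy coefficients to recover a level index. The variable names and the data-generating process are assumptions made for the example, as the RTB microdata are not public.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
months = [str(p) for p in pd.period_range("2012-08", "2016-12", freq="M")]
df = pd.DataFrame({
    "month": rng.choice(months, size=n),
    "floor_area": rng.uniform(30.0, 120.0, size=n),
    "tenancy_length": rng.integers(4, 49, size=n),
    "dwelling_type": rng.choice(["detached", "semi-detached", "terraced", "apartment"], size=n),
    "pays_electricity": rng.integers(0, 2, size=n),
    "pays_waste": rng.integers(0, 2, size=n),
})
# Build a synthetic rent with a known upward trend of roughly 0.4% per month.
time_code = pd.Categorical(df["month"], categories=months).codes
df["rent"] = np.exp(6.0 + 0.5 * np.log(df["floor_area"]) + 0.004 * time_code
                    + rng.normal(scale=0.2, size=n))

# Equation (1): log rent on controls plus month-year dummies.
model = smf.ols(
    "np.log(rent) ~ np.log(floor_area) + np.log(tenancy_length)"
    " + C(dwelling_type) + pays_electricity + pays_waste + C(month)",
    data=df,
).fit()

tau = model.params.filter(like="C(month)")  # time-dummy coefficients (tau_t)
rent_index = np.exp(tau)                    # level index relative to the base month

Because the controls absorb tenancy, dwelling and utility variation, the exponentiated time dummies trace out a quality-adjusted rent index.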
2.2.2 Adjusting for Owner-Occupied Housing

One contribution of this paper is to ensure that the rental equivalence measure for OOH is closely tailored to the structure (in terms of housing types and regions) of housing for owner-occupiers. The difference between the housing structure of homeowners and renters (due to differences in demand and supply factors in accessing homeownership and in valuing characteristics) is noted as a serious source of selection bias by Arévalo and Ruiz-Castillo (2006). For example, the composition and location of the housing stock is likely very different for renters, and thus any rental equivalence measure must be adjusted to look like the OOH structure. To approximate the structure of owner-occupier housing, our approach is as follows. We first obtain Irish census data on the structure of the owner-occupied housing stock for the year 2016. The Irish census provides data on a regional, housing-type basis which allows us to identify 32 different housing type-area indicators. The data are presented in table 2 above. They can be interpreted as the percentages of each type of housing present in each region (the overall sum of these shares is 1). For example, the largest concentration of semi-detached houses, terraced houses and apartments is in Dublin (11.6 per cent, 7.5 per cent and 2.2 per cent, respectively), while the largest concentration of detached houses is in the South-West region (9.2 per cent). To ensure that any rental equivalence measure adopted approximates the structure of OOH, our estimation strategy is as follows. First, we estimate 32 separate hedonic rental models, one for each region and housing type, with a similar structure to equation (1):

ln(R^{r,h}_{i,t}) = α^{r,h} + Z_{i,t}θ^{r,h} + Σ_t τ^{r,h}_t T_t + ε^{r,h}_{i,t}    (2)

where r and h denote the 32 housing type-region groupings as presented in table 2, and where Z includes all variables noted in equation (1) from the matrices X_{i,t}, D_{i,t} and U_{i,t}, with the exclusion of the variables for housing type. These are excluded as each model is estimated for a separate housing type, thus allowing variation across housing types to be picked up across all the variables in the regression. The final estimate of OOH inflation based on rents is taken as the weighted average of the coefficients on the time dummies from the regional housing-type regressions in equation (2), combined with the weights from table 2; that is, τ_t^OOH = Σ_{r,h} w^{r,h} τ_t^{r,h}, where w^{r,h} are the census shares in table 2. As above, the final level index is taken as the exponent of this measure of inflation, as the dependent rent variable is measured in log levels.

3 Estimating an OOH Rent Index and the Impact on Inflation

3.1 Hedonic Rent Indices: Exploring the Factors Impacting Rents

We first estimate a series of models which test the relationship between our property characteristics and other variables and rent prices. Table 3 contains three columns. The first column controls for standard tenancy and property characteristics, the second column includes regional dummies for the NUTS 3 regions in Ireland, and the final column contains the controls for the other utility costs that renters face, which may be priced differently for owner-occupied housing and would likely distort the overall rental price series as a measure of the opportunity cost of home ownership if included. In column (1), the findings suggest that rents are increasing in tenancy length, the number of bedrooms, and the floor area. Rents are also increasing in the number of tenants in the property. These findings are intuitive and associate larger, longer, and more densely occupied tenancies with higher rental prices. Considering the variables covering property type, we find that, relative to detached houses, the rents for semi-detached houses, terraced houses and apartments are higher. These factors also likely capture geographic location, which we control for in column (2). For example, most apartments are located in Dublin, the capital city, which is also the area with the highest rents in Ireland. In controlling for regions (with Dublin being the omitted category), it can be seen that there is considerable variation in rents (with rents substantially lower than in Dublin). Many of the coefficients on the other variables also drop in magnitude, which suggests the variation across regions in the different characteristics may matter considerably. In some regions (such as the Border or Midlands) rents are nearly 80 to 90 per cent lower than in Dublin when other factors are controlled for. Finally, in column (3) we introduce the series of dummy variables which control for the various other utility costs. The interpretation of these indicators is the extent to which rent levels are higher or lower depending on whether the household has to pay these costs. For example, rent is approximately 3 per cent lower for those tenancies who pay electricity, 6 per cent lower for those who pay for oil, 3 per cent higher for those who pay a TV licence, and 7 per cent lower for those who pay waste charges. While it may seem counterintuitive, the lower prices for those tenancies paying additional costs can reflect the fact that the base rent may be adjusted depending on whether the landlord or tenant pays these outgoings.
If the landlord pays, then the rent is likely to be higher, ceteris paribus, to capture this, and vice versa. Unlike other observational econometric examinations of the drivers of rents, we are actually not concerned with the endogeneity of these factors or indeed the direction of the coefficients. What is important from the perspective of our particular study is that the variation which is left in the time dummies in the model is purged of variation across these factors. This ensures that the rental trends, as indicated by the time dummies, are not affected by the variation in tenancy, regional, utility pricing and property factors. To explore the impact of this, we use the exponentiated coefficients from the time dummies in columns (1) to (3) of table 3 and create three monthly indices of rent prices. These are presented in Figure 6. The simple index uses the time dummies associated with the regression in column (1) of table 3, the regional controls index is taken from the regression in column (2) and the region and utilities index from column (3). The first chart presents the index, the second chart (panel (b)) presents the year-on-year growth, and the third chart presents the three-month rolling average to provide a smoother trend. It can be seen that controlling for the utilities and regional factors has a quite considerable impact on the inflation rate generated with the time dummies. The average figures for the series across the time period are presented in table 4. It can be seen that controlling for region and utilities would have increased the overall inflation rate by approximately 1 percentage point, which is substantial in economic terms. This highlights the importance of using our rich data to strip out these factors from the trends in the hedonic models.

OOH Adjusted Renters Index. Source: Authors' calculations using RTB data.

Testing the Impact on Inflation

The final aspect of this section is to test the impact of using the rental equivalence approach on the consumer price index. In order to carry out this research, the items associated with the payments approach are first removed from the CPI produced by Ireland's national statistical agency. Appendix 2 provides a list of all the items removed as part of this process. Having removed the payments approach items, the next step is to calculate the weight that the cost of OOH should be given within the CPI under the rental equivalence approach. Generally, the weights ascribed to a given item or group of items within a CPI should correspond to the share of total household expenditure that is spent on those items. The items included in the CPI consumption basket are classified into various groups and sub-groups using the COICOP system. In Ireland, CPI weights are updated every year in December using national accounts data down as far as the 4-digit COICOP level of classification. Below this level, the Household Budget Survey (HBS) is used to allocate a share of these weights to each of the items included within a group. The HBS shares are only updated when the results from a new HBS become available (usually every 4 to 5 years). In general terms, the CPI weights used for year t+1 are set in December of year t using the national accounts data from year t-1 (the most recent national accounts data available in December of year t).
In order to ensure that the weights used in year t+1 approximate as closely as possible the expenditure patterns of the previous year (year t), the national accounts data are price updated to December of year t. The weight given to OOH under the rental equivalence approach is derived in the same manner, from the value of imputed rents for owner-occupied housing in the national accounts. This is to ensure consistency between the method used to give OOH its weight and its inflation rate under the rental equivalence approach. An implicit assumption in this step is that the stock of owner-occupied housing was the same in 2012 and 2016. This 2012 value of imputed rents is then price uprated to December 2013. Total spending from the 2012 national accounts on another item within the CPI basket (that derives its CPI weight from the national accounts) is also price uprated up to December 2013 (we use breads and cereals). The relationship between these price uprated national accounts figures is used to generate a scalar that is applied to the existing December 2013 CPI weight for breads and cereals. This gives us an appropriately sized weight for the rental equivalence index in the CPI. All of the other items in the CPI basket are then reweighted to take account of this addition and the removal of the payments approach items. These steps are repeated for each year the CPI under rental equivalence is presented. An important point to note is that the weight OOH receives in the CPI differs substantially between approaches. This is evident in Table 6, which shows the weight allocated to OOH in December 2015 under the payments approach and under the rental equivalence approach. To provide some comparison, we also include the renters index, which does not make the OOH adjustment, and the actual Irish CPI figures for context. The CPI indices and annual year-on-year growth rates are presented in figure 5. The first difference which is very noticeable is that the rental equivalence measure has a dramatic impact on the overall CPI. This is unsurprising given the larger weight allotted to OOH under RE and the larger price inflation trend in the rental data used in the generation of the RE index when compared to the weight and price inflation of the payments approach items. Ireland has a very high share of owner-occupied housing (nearly 70 per cent) and this is reflected in the value of imputed rents from which this larger CPI weight is derived under RE. Including either the renters index or the owner-occupied rental equivalence measure causes a dramatic rise in the rate of consumer price inflation. There are also very clear differences between the growth rates for the CPI when the renters and OOH indices are included separately. The OOH inflation level is lower, reflecting the lower rate of price inflation for the adjusted RE series relative to the series based on rental-only data. The variation between these two series is solely due to differences in our inflation measures from the microdata and can be seen as the clean impact of the different rental equivalence measures on inflation. This can be very clearly seen in table 7. The average rate of the CPI when including the OOH index for the period under examination was just over 0.01 (1 per cent). The CPI with the renters index was nearly half a percentage point higher at 0.014. This is quite a dramatic change in the overall rate of consumer price inflation solely due to the transformation of the rental data to approximate the owner-occupied housing stock. To explore whether this difference is statistically significant, we undertake a simple paired t-test of the mean differences. The results indicate a significant difference at the 1 per cent level of 0.4 percentage points (the OOH weighted series is 0.4 percentage points lower; the mean for the OOH-weighted new series is approximately 1 per cent against 1.4 per cent for the plain vanilla renter new index, with a t statistic of -4.34 on 24 degrees of freedom).
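The paired t-test itself is standard; the following minimal sketch reproduces the mechanics on simulated monthly inflation rates (25 months, hence 24 degrees of freedom, as in the reported result). The numbers are illustrative stand-ins, not the underlying data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical monthly inflation rates for the two CPI variants
cpi_renters = rng.normal(loc=0.014, scale=0.004, size=25)   # renters-index CPI
cpi_ooh = cpi_renters - 0.004 + rng.normal(0, 0.001, 25)    # OOH-adjusted CPI

# paired t-test of the mean monthly difference between the two series
t_stat, p_value = stats.ttest_rel(cpi_ooh, cpi_renters)
diff = np.mean(cpi_ooh - cpi_renters)
print(f"mean difference = {diff:.4f}, t = {t_stat:.2f}, p = {p_value:.4g}")
```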
A criticism of the rental equivalence approach raised by Ambrose, Coulson, and Yoshida (2018) is that the use of stock (existing contract) rental data to measure the opportunity cost of housing for owner-occupiers is incorrect, as stock rents are often lower than new market rents. This then underestimates the impact of what owner-occupiers could earn if they were to include their property on the market for rent. Ambrose, Coulson, and Yoshida (2018) demonstrate the impact of these changes on inflation by substituting a new rental series for the BEA simple series for the US which includes existing rent. While this substitution is highly informative, a more direct comparison which appropriately adjusts both series for the owner-occupier housing structure and adjusts the rental trends with a common hedonic rental transformation is warranted to ensure that any variation between existing and new rents is purely down to differences in trends and not to differences in property types or the regional mix of building structures across both markets. To provide a more direct test of the impact of using stock (existing) versus flow (new) rental indicators on any rental equivalence measure for OOH, we draw on data collected as part of the supervisory tenancy returns on existing tenancies. This section documents the impacts on the CPI of using the two different measures when controlling for common hedonic characteristics and adjusting both series to the OOH housing structure.

Data and Measurement of Existing Stock Rental Prices

To measure existing rental data in Ireland, we draw on a series from our supervisory dataset which relates to long-term rental renewals or Part IV tenancies as discussed above. In Ireland, if a tenancy runs to over four years in duration, the landlord is required by law to submit an updated registration of the rental agreement with the Residential Tenancies Board as a Part IV renewal. These renewals have expanded tenancy rights relative to shorter duration tenancies. For the purpose of our analysis, these data provide an ideal existing rental series to present as a counterweight to our new flow rents data. The renewal tenancies registration requires all the information on the properties to be re-submitted as well as updated information on the rent levels and tenancy details; thus the data are directly comparable with our new rental series, and the database provides common variables with which to hedonically estimate inflation series across the two series. Our series for these data is somewhat smaller than for new tenancies and contains 31,000 records. This is due to the fact that only a limited number of Irish rental agreements become long term in nature, with the domestic private rental market being a much more transitory tenure type than in other countries. Summary statistics for the renewal tenancies are presented in table 8. It is clear that the level of rent is lower per month for renewal tenancies relative to new tenancies. Another notable difference is the housing type, with a considerably lower share of apartments in the renewal tenancies. To estimate an OOH adjusted series for existing rents, we follow the process outlined above in section 2.2. This entails firstly estimating hedonic regressions for the existing rental series for each of the 32 regional housing type areas as documented in section 2.2.2. The hedonic model used on the renewal data is as set out in equation (2).
Before moving to this step, and to provide a simpler consideration of the differences between new and existing rental trends, we re-estimate equation (1) separately for new and for renewal tenancies; a clear gap emerges, with measured rent inflation lower for the renewal tenancies. In Ireland, for the period in question, this is not driven by the existence of rent controls. The gap is therefore likely to be driven by other considerations such as nominal rigidity, relationship factors and tenancy turnover costs, as indicated by Aysoy, Aysoy, and Tumen (2014) and Shimizu, Nishimura, and Watanabe (2010). To move to the OOH adjusted indices for both new and renewal tenancies, we estimate the model in equation (2) for the renewal data and then undertake the re-weighting as in equation (3) to create an existing rents OOH-adjusted rental equivalence measure. A comparison between the new and existing OOH RE measures is presented in figure 7. Very clear differences are evident in the trends between the two series. Indeed, the renewal series is in fact much more volatile in the early part of the sample period. The mean differences are presented in table 9. The average annualised inflation rate for the OOH adjusted existing rent series is only 1.6 per cent, which is considerably lower than the 4.8 per cent inflation rate for new rental agreements.

Conclusion

This paper has attempted to address a number of measurement issues in relation to the estimation of rental equivalence measures for owner-occupied housing. We use novel supervisory data from the Irish rental regulator to address a number of data gaps in the existing studies, such as new rental data, the inclusion of other utilities costs, and the absence of rent controls. Furthermore, we deal with the selection bias that arises from differences in the structure of housing between owner-occupiers and renters. Our research points to very clear impacts of addressing these issues on the measure of OOH housing cost and the overall level of the consumer price index. First, we demonstrate that adjusting for the structure of owner-occupied housing, controlling for other utilities costs and using new rental tenancies data leads to an inflation estimate for OOH that is approximately two percentage points lower than the equivalent for renters. This in turn leads to a very clear impact on the overall measurement of consumer price inflation. Using our OOH-adjusted index relative to a renters sample lowers the overall level of consumer price inflation by nearly 0.4 percentage points. Second, we demonstrate a clear impact of using new versus existing rents in estimating an OOH rental equivalence measure. The OOH index using existing rents is materially lower than the new rent index, with a considerable impact on the overall level of consumer price inflation: the annualised change in inflation was 0.6 percentage points lower using the existing rents relative to new rents. In summary, there are very clear trade-offs for statistical agencies and policymakers in setting and measuring OOH in the consumer price index. Our research shows that when using a rental equivalence measure, policymakers should be very mindful of data gaps and measurement issues and ensure that these are minimised so as to limit the impact of such issues on the overall rate of consumer price inflation.

Appendix 2: Payments Approach Items

The table below lists the items removed from the CPI basket as they were deemed payments approach specific, in line with the approach of Ahrens et al. (2020).
2021-01-06T05:09:34.757Z
2020-11-18T00:00:00.000
{ "year": 2021, "sha1": "80203e432f645fd5aa6cbd7349189256b149ca4f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1111/1540-6229.12360", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "800a47946bf532ab4d3db7acf4fc23e7b538b0e5", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
23717273
pes2o/s2orc
v3-fos-license
Parasitic nematodes of Polychrus acutirostris (Polychrotidae) in the Caatinga biome, Northeastern Brazil

We present data on the nematode infracommunity of the arboreal lizard Polychrus acutirostris in the semiarid Caatinga biome, northeastern Brazil. Twenty-two lizard specimens collected in the municipality of Várzea Alegre in Ceará State and in the municipality of Exu in Pernambuco State were analyzed. Two species of nematodes were found: an Oxyuridae, Gynaecometra bahiensis, which had a mean intensity of infection of 23.5 ± 5.8 (prevalence 22%), and a Physalopteridae, Physaloptera retusa, which had an infection intensity of 21 (prevalence 9%). There were no significant differences between the parasitism rates of male and female lizards. Polychrus acutirostris demonstrated low richness of nematode parasites, but high levels of infection with G. bahiensis. Polychrus acutirostris is reported here as a new host for P. retusa.

Parasitological studies are fundamental to interpreting parasite/host interactions and better understanding related biological communities (Rocha et al., 2003), by indicating environmental stress levels and aspects of the ecological web, and by elucidating the characteristics of local biodiversity (Marcogliese, 2005). The arboreal polychrotid lizard Polychrus acutirostris (Spix, 1825) is found in open vegetation formations in Argentina, Bolivia, and Brazil (Garda et al., 2012) in areas of Cerrado and Caatinga vegetation (Kawashita-Ribeiro and Ávila, 2008; Ribeiro et al., 2012b), often near human habitations (Vanzolini, 1974). It is a medium-sized diurnal lizard that uses a sit-and-wait foraging strategy and feeds predominantly on arthropods (Coleoptera and Hymenoptera) and plant material (leaves, seeds, and flowers), with reproduction between the months of September and October (Vitt and Lacher, 1981). The present study analyzed the parasitic nematodes of P. acutirostris in two Caatinga vegetation localities in northeastern Brazil.

Material and Methods

Specimens of P. acutirostris were collected in two areas of Caatinga vegetation, one in the municipality of Várzea Alegre (6°53'S and 39°13'W) in Ceará State, and the other in the municipality of Exu (7°33'S and 39°44'W) in Pernambuco State. Both sites are located in the semiarid region of northeastern Brazil. The Caatinga vegetation at the Exu site was predominately hypoxerophilous deciduous forest, with average total annual rainfall between 700 and 900 mm (CPRM, 2005). The Caatinga vegetation at the Várzea Alegre site comprises a mixture of dense shrubs, Cerrado, spiny deciduous forest, and tropical pluvial subdeciduous forest, with an average annual precipitation of 965.3 mm and average annual temperatures between 26 °C and 28 °C (IPECE, 2011).
The lizards were collected by hand between May 2011 and January 2012. They were subsequently euthanized by a lethal injection of lidocaine, weighed using a spring scale (Pesola®), and their snout-vent lengths (SVL) measured using a digital caliper (0.1 mm precision). The specimens were then fixed in 10% formaldehyde and conserved in 70% ethanol. Voucher specimens were deposited in the Coleção Herpetológica da Universidade Regional do Cariri. Individuals were necropsied and their body cavity, lungs and digestive tract were examined under a stereoscopic microscope for the presence of helminths. Nematodes encountered were placed in vials of 70% ethanol for later identification. For species identification, nematodes were cleared using lactophenol, mounted on temporary slides, and analyzed under a light microscope. The nematodes were subsequently deposited in the Coleção Parasitológica da Universidade Regional do Cariri. Infection rates as well as ecoparasitological terminology follow the definitions of Bush et al. (1997). We examined the relationships of host mass (g) and snout-vent length (SVL) to the numbers of nematodes using Pearson's linear correlation analysis. The differences in infection rates between males and females, as well as differences in their SVL, were examined using the Mann-Whitney U-test, using Bioestat 5.0 software.

Results and Discussion

Two specimens of P. acutirostris were infected with the Physalopteridae nematode Physaloptera retusa (prevalence 9%; intensity of infection 21). Another Caatinga lizard, Tropidurus hispidus Spix 1825, from the states of Ceará and Piauí in northeastern Brazil, was reported to have a prevalence of 33.3% (Ávila et al., 2012) with P. retusa, and the low prevalence found in the present study (9%) may have been influenced by the number of individuals examined or some aspect of the nematode community analyzed. Five specimens (four adult females and one adult male) of P. acutirostris were infected with the Oxyuroidea nematode Gynaecometra bahiensis, with a high intensity of infection (mean intensity 23.5 ± 5.8; prevalence 22%). The nematode G. bahiensis has only been described as a parasite of P. acutirostris, and additional studies will be necessary to better understand this association (Ávila et al., 2010, 2011). Food habits and foraging modes may influence the composition of host helminth faunas, and omnivorous and herbivorous lizards are known to have wider and more diverse nematode faunas than carnivorous lizards (Roca, 1999). Polychrus acutirostris was found to have a low diversity of parasitic nematodes, which could reflect its simple intestinal system, ectothermic metabolism, and/or generalist feeding habits (Goater et al., 1987). Little is currently known about the lifecycle of P. retusa, although studies with other Physaloptera spp. (such as Physaloptera hispida Petri (1950), Physaloptera maxillaris Molin, 1860, Physaloptera praeputialis von Linstow (1889), and Physaloptera rara Hall and Wigdor, 1918) have shown that infections are initiated by the ingestion of crickets, grasshoppers, and cockroaches contaminated with third stage larvae (Schell, 1952; Lincoln and Anderson, 1975). Polychrus acutirostris is recorded here as a new host for P. retusa. The lifecycle of G. bahiensis has not been well investigated, although other members of Oxyuridae have strictly monoxenic lifecycles (Anderson, 2000). The arboreal habit and omnivorous diet of P. acutirostris may have influenced the low number of nematode species infecting this lizard and the high infection intensities encountered.
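As an illustration of the reported statistics (computed in the study with Bioestat 5.0), the sketch below derives prevalence and mean intensity in the sense of Bush et al. (1997) and runs the Pearson correlation and Mann-Whitney U test with SciPy; all host data in it are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical worm counts for 22 hosts (5 infected, as for G. bahiensis)
counts = np.array([0] * 17 + [31, 18, 25, 12, 20])
svl = np.array([92.1, 88.5, 101.3, 95.0, 90.2, 87.4, 99.8, 93.6, 85.9, 97.2,
                91.0, 89.3, 100.5, 94.7, 86.8, 98.1, 92.9, 96.4, 90.7, 95.8,
                88.0, 93.2])                     # snout-vent lengths, mm
sex = np.array(["M", "F"] * 11)                  # host sexes

prevalence = 100 * np.mean(counts > 0)           # per cent of hosts infected
mean_intensity = counts[counts > 0].mean()       # worms per infected host
r, p_r = stats.pearsonr(svl, counts)             # body size vs. worm burden
u, p_u = stats.mannwhitneyu(counts[sex == "M"], counts[sex == "F"])
print(prevalence, mean_intensity, (r, p_r), (u, p_u))
```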
2018-04-03T04:45:39.493Z
2014-11-01T00:00:00.000
{ "year": 2014, "sha1": "bdb711b6b9f2fbdb2936689233c41c62b055ad6b", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/bjb/a/GgjdP3xtbPXrtM69pp5DXqP/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "df607e223f6de3ee89730ffa7b58d895c4ff4b81", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
33794602
pes2o/s2orc
v3-fos-license
Hydrophobin Fusion of an Influenza Virus Hemagglutinin Allows High Transient Expression in Nicotiana benthamiana, Easy Purification and Immune Response with Neutralizing Activity

The expression of recombinant hemagglutinin in plants is a promising alternative to the current egg-based production system for influenza vaccines. Protein-stabilizing fusion partners have been developed to overcome the low production yields and the high downstream process costs associated with the plant expression system. In this context, we tested the fusion of hydrophobin I to the hemagglutinin ectodomain of the influenza A (H1N1)pdm09 virus, controlled by the hybrid En2PMA4 transcriptional promoter, to rapidly produce high levels of recombinant antigen by transient expression in agro-infiltrated Nicotiana benthamiana leaves. The fusion increased the expression level by a factor of ~2.5 compared to the unfused protein, allowing a high accumulation level of 8.6% of the total soluble proteins. Hemagglutinin was located in ER-derived protein bodies and was successfully purified by combining an aqueous two-phase partition system and a salting-out step. Hydrophobin interactions allowed the formation of high molecular weight hemagglutinin structures, while unfused proteins were produced as monomers. The purified protein was shown to be biologically active and to induce neutralizing antibodies after mice immunization. Hydrophobin fusion to influenza hemagglutinin might therefore be a promising approach for rapid, easy, and low-cost production of seasonal or pandemic influenza vaccines in plants.

Introduction

Influenza infections are of major concern for public health. Pandemics have caused more than 50 million deaths, and cumulatively more have been caused by seasonal infections [1]. Vaccination has been widely used since the early 1960s to prevent contamination. Millions of doses are produced yearly based on the well-established egg-based system developed in 1941 [2]. Nevertheless, emerging problems have pointed out the need for new platforms for influenza vaccine production [3]. These alternative systems should be flexible regarding strain changes and rapid with respect to pandemics. Plants were recently proposed as a platform for vaccine production with the benefits of low cultivation costs, rapid biomass availability, ease of scaling-up, and limited risks of pathogen contamination [4]. Moreover, transient expression in leaves allows for the rapid and high-level production of pharmacological proteins [5]. Hemagglutinin (HA), the most immunogenic protein of the influenza virus, is the main target for recombinant influenza vaccine development.
In plants, influenza antigens were first expressed transiently in Nicotiana benthamiana leaves as a chimeric protein composed of an HA fragment fused to a fragment of neuraminidase, both from an H5N1 influenza strain, as well as to a thermostable lichenase [6]. Two approaches based on the expression of the HA protein alone were then investigated. Firstly, full-length HAs from H1N1 (A/New Caledonia/20/99) and H5N1 (A/Indonesia/05/05) viruses were expressed transiently in N. benthamiana as virus-like particles (VLP) that bud from the plasma membrane [7]. A successful phase II clinical trial was achieved by Medicago with HA-VLPs from an avian H5 influenza strain [8]. Secondly, expression of a soluble truncated HA construct from A/Wyoming/03/03 (H3N2) was achieved by removal of the transmembrane domain and the addition of a KDEL retention signal [9]. This approach was used to express the HA from three 2008-2009 seasonal strains as well as the 2009 swine pandemic H1N1 (A/California/04/09) strain, an avian H5N1 (A/Indonesia/05/05) strain [10][11], and a low pathogenic avian H7N7 strain [12]. Recently, the soluble truncated HA from the pandemic A/California/04/09 (H1N1) was shown to be safe and immunogenic in a phase I clinical trial [13]. Both the VLP and the truncated HA approaches were shown to be a feasible response strategy to pandemics in developing countries, through the stable and transient expression of full-length or truncated HA from an avian H5 influenza strain in Nicotiana tabacum plants [14]. To enhance the accumulation level and to simplify the downstream purification procedure, the recombinant protein can be fused to a protein-stabilizing partner such as zein from plants, elastin from animals, or hydrophobin from fungi (reviewed in [15]). Hydrophobin I (HFBI), a small (~10 kDa) surface-active protein secreted by filamentous fungi, possesses the ability to alter the hydrophobicity of its fusion partner, which can therefore be purified by an aqueous two-phase system (ATPS) [16]. This approach has been successfully used in N. benthamiana agro-infiltrated leaves and N. tabacum BY-2 cells for the expression of GFP in ER-derived protein bodies (PB) [17][18]. However, fusion of HFBI to the HA ectodomain from the influenza A/Hatay/2004 (H5N1) virus in transgenic N. tabacum plants did not improve its expression compared with non-fused HA [19], while the fusion of an elastin-like polypeptide (ELP) to the same HA ectodomain increased its accumulation level by 10-fold without compromising its functionality [19][20]. In the present study, we investigated hydrophobin fusion as a tool to obtain high-level expression of the recombinant HA ectodomain from the influenza A/Texas/05/2009 (H1N1) virus by transient expression in N. benthamiana leaves. High expression levels of H1-HFBI in ER-derived protein bodies were obtained. H1-HFBI was easily and efficiently purified by ATPS. The immunogenic properties of the purified antigen and its potency to induce neutralizing antibodies were demonstrated by immunological studies in mice.

Transient expression of H1-HFBI

The sequence encoding the HA ectodomain (codons 18-529) of the A/Texas/05/2009 (H1N1) influenza virus was fused to the Arabidopsis thaliana endochitinase signal peptide sequence at the 5′ end, and to the endoplasmic reticulum (ER) retention KDEL sequence at the 3′ end, giving H1 (Fig. 1). In addition, the sequence encoding HFBI was fused downstream of the HA ectodomain, via a GGGSGGGS linker, to generate H1-HFBI
(Fig. 1) and promote PB formation in the ER. The two resulting sequences were plant codon-optimized (S1 Fig.) and cloned into the binary vector pEAQ-specialK-HT [21], except that the CaMV 35S promoter had been replaced by the En2PMA4 promoter, a hybrid promoter made of the Nicotiana plumbaginifolia PMA4 promoter [22] reinforced by two copies of the CaMV 35S enhancer [23]. We indeed found that the latter allowed higher GFP expression than the former when transiently expressed in N. benthamiana leaves (S2 Fig.). The resulting binary vector was electroporated into Agrobacterium tumefaciens LBA4404 virG, a strain that constitutively expresses virG and allows T-DNA transfer in the absence of the phenolic inducer acetosyringone [24]. In a preliminary test, the effect of acetosyringone in the infiltration medium on the transient expression of H1-HFBI in N. benthamiana leaves was quantified by Western blotting analysis of 16 independent samples collected 6 dpi (S3 Fig.). A 30% decrease of H1-HFBI from 9.7% of total soluble proteins (TSP) to 6.9% TSP was observed in the presence of acetosyringone. Taking into account the HFBI contribution to the H1-HFBI size (11.7%), the actual expression level of H1 was 8.6% TSP and 6.1% TSP in the absence or presence of acetosyringone, respectively. N. benthamiana leaves were therefore agro-infiltrated with the H1 and H1-HFBI constructs in the absence of acetosyringone, and the expression level of both proteins was analyzed by SDS-PAGE of the soluble protein extracts (Fig. 2a). High expression levels were observed by Coomassie blue staining for both proteins in four individual infiltration experiments. To confirm this observation, Western blotting analysis was performed with polyclonal anti-influenza A antibodies (Fig. 2b). For H1-HFBI, two bands were detected: a major band at an apparent size of 80 kDa and a less abundant band at a size corresponding to untagged H1. This suggests that the H1-HFBI protein was partially cleaved, possibly close to the linker. Regarding their relative abundance, dilution series of three samples containing H1-HFBI were quantified and the resulting signals were compared with the signal of the undiluted samples containing untagged H1 derived from the same leaf (S4 Fig.). This quantification showed that the HFBI fusion enhances the HA expression level by a factor of ~2.5.

H1-HFBI accumulates in ER-derived protein bodies

The HFBI fusion is reported to induce the formation of PBs in plants and in plant suspension cells [15]. To determine whether this was the case for H1-HFBI, we sought to visualize such structures in agro-infiltrated leaves by in situ immunolocalization; however, the negative control already gave too strong a fluorescence background. We therefore transformed N. tabacum BY-2 cells with both constructs and performed in situ immunolocalization of H1-HFBI and untagged H1 in transgenic cells. Confocal microscopy analysis indicated a signal with a reticulate pattern possibly corresponding to the ER (Fig. 3b). Small spherical particles were detected with a size ranging from 0.2 to 0.5 µm (Figs. 3c and d), reminiscent of the protein body structures recently described for GFP-HFBI expressed in N. tabacum BY-2 cells [18]. To support the ER localization of H1-HFBI, we relied on Endoglycosidase H (EndoH), which specifically cleaves the high-mannose N-glycans added in the ER, but not the complex N-glycans typically found in glycoproteins that reach the Golgi apparatus. EndoH digestion of a leaf extract containing H1-HFBI significantly decreased its apparent size
(Fig. 4), indicating that H1-HFBI was glycosylated and efficiently retained in the ER.

Purification of the H1-HFBI protein by aqueous two-phase separation

H1-HFBI was purified from agro-infiltrated leaf extracts by ATPS using Triton X-114 as a surfactant [18,25]. In ATPS, hydrophobin fusion partners are concentrated within micellar structures and partitioned into a surfactant-rich phase, while the majority of endogenous proteins remain in the aqueous phase and can be discarded. Hydrophobin-fused proteins can then be back-extracted by the addition of a non-denaturing organic solvent such as isobutanol. To assess the purity of H1-HFBI following ATPS purification, the leaf extract and the ATPS phases were analyzed by SDS-PAGE (Fig. 5). The majority of soluble proteins, including the Rubisco large subunit (~55 kDa), the most abundant protein in plant leaves, were discarded during the first separation phase. H1-HFBI was found to concentrate in the lower phase, with an estimated recovery of about 70% and an overall purity of 50% (as calculated from the signal quantification). The bands corresponding to the five residual contaminating proteins (~35, 26, 23, 22, and 20 kDa) were excised, trypsin digested, and analyzed by MALDI-TOF/TOF mass spectrometry. Their identity, which was confirmed by directly analyzing an ATPS-purified sample by LC-MALDI-TOF/TOF mass spectrometry, is given in S1 Table.

(Fig. 5: Purification of H1-HFBI by ATPS. A TSP extract from H1-HFBI-expressing leaves was subjected to ATPS purification as described in the Experimental procedures. Samples (40 µl) of the TSP, the upper phase discarded after the first phase separation, and the lower phase recovered after the second phase separation were analyzed by SDS-PAGE. The identification of bands 1-5 by mass spectrometry is reported in S1 Table. doi:10.1371/journal.pone.0115944.g005)

H1-HFBI oligomerizes and exhibits hemagglutination activity

A routine method to demonstrate the biological activity of HA is the hemagglutination assay, which tests its ability to agglutinate red blood cells (RBCs) in vitro by binding to sialic acids on surface proteins. We compared cell extracts derived from leaves agro-infiltrated with the H1 or H1-HFBI constructs in a hemagglutination assay and found that the cell extract that contained H1-HFBI was able to agglutinate chicken RBCs, while a cell extract that contained untagged H1 was not (Fig. 6a). A prerequisite for hemagglutination is the formation of oligomeric HA structures that can crosslink cells, and this result indicates that untagged H1 is probably present in a monomeric state, while fusion with HFBI allows its oligomerization. As recombinant HA has been observed as monomers, dimers, trimers, and/or high molecular weight oligomers (HMWO) [26], ATPS-purified H1-HFBI was analyzed by size exclusion chromatography to determine its quaternary structure (Fig. 7). The first peak eluted slightly after the void volume of Blue dextran and corresponds to a size greater than the 669 kDa standard. Western blot analysis indicated the presence of H1-HFBI in this peak at an elution volume of 15-17 ml. Elution of the second peak took place between the 158 and 44 kDa standards. Western blot analysis indicated the presence of H1-HFBI at an elution volume of 28-30 ml, which probably corresponds to a monomeric form (the expected size was 68.1 kDa, excluding the contribution of glycosylation). We can therefore conclude that H1-HFBI forms both HMWO and monomers.
The anti-HA signal detected in the fractions collected for the two forms was quantified after Western blotting, and a HMWO/monomer ratio of approximately 2 was determined. The ATPS-purified H1-HFBI sample was also subjected to a hemagglutination test, with bovine serum albumin (BSA) as a negative control and inactivated A/Texas/05/2009 (H1N1) virus as a positive control (Fig. 6b). Hemagglutination was observed with H1-HFBI concentrations of 0.06 µg/well or higher, as well as with the inactivated virus. No hemagglutination was observed with lower H1-HFBI concentrations or with BSA. The HA titer was calculated to be 64 (2^6). This test demonstrates that hydrophobin-fused HA retained its receptor binding activity after ATPS purification.

Immunogenicity of H1-HFBI

Prior to mice immunization, ATPS-purified H1-HFBI required further purification to remove the remaining contaminants. Various concentrations of ammonium sulfate were tested for selective precipitation by salting out. Most of the H1-HFBI precipitated at 5% saturation of ammonium sulfate, while the five contaminating proteins remained in the supernatant (Fig. 8). Increasing the ammonium sulfate concentration to 10% saturation allowed for the complete precipitation of H1-HFBI, which appeared as a single band by SDS-PAGE (Fig. 8), indicating that the protein was purified to apparent homogeneity in the final pellet (for this last step, a recovery of ~90% with a purity >95% was determined by quantification of the signals). The H1-HFBI precipitate was dissolved in PBS and dialyzed in order to remove excess salts. H1-HFBI exhibited the same hemagglutination activity as shown previously (S5 Fig.) and the same profile as that obtained after size exclusion chromatography (Fig. 7). H1-HFBI immunogenicity was evaluated by subcutaneous vaccination of 10 female CD1 mice with 50 µg of purified H1-HFBI formulated with Freund's adjuvant. Pre-immune sera were collected before the first injection, and blood samples were collected after the 4th and the 6th boost. The three samples were analyzed for their ability to recognize a recombinant influenza A/Texas/05/2009 (H1N1) ectodomain produced in mammalian cells by endpoint titer ELISA (Fig. 9a). Sera of mice immunized with H1-HFBI displayed a significantly higher anti-HA titer than the pre-immune sera (p = 3.8 × 10^-4). An average HA-specific antibody titer of 25,600 was obtained for the samples collected after the 4th boost, and no statistical difference was observed between the samples collected after the 4th or the 6th boost (p = 0.18) (Fig. 9b).

The immune response to H1-HFBI results in neutralizing activity

The last step consisted of demonstrating the neutralizing properties of sera from H1-HFBI-vaccinated mice. A reliable test is the hemagglutination inhibition test. Serum samples were incubated with inactivated A/Texas/05/2009 (H1N1) virus, and potential neutralizing antibodies were expected to bind the viral receptor binding domains and prevent attachment of the virus to chicken RBCs (Fig. 10). Hemagglutination is therefore prevented when such antibodies are present. The highest serum dilution that prevents hemagglutination is designated as the hemagglutination inhibition titer of the serum. None of the pre-immune sera inhibited hemagglutination. Sera collected after the 4th and 6th boost presented hemagglutination inhibition means of 83 and 70, respectively. This difference was not statistically significant. We can therefore conclude that H1-HFBI expressed in N. benthamiana is able to induce an immune response with neutralizing activity.
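For clarity, the titer arithmetic used here and in the hemagglutination inhibition test can be sketched as follows; the well readings are hypothetical, chosen so that the last agglutinating well is the sixth two-fold dilution, reproducing a titer of 64 (2^6).

```python
def ha_titer(wells, start_dilution=1):
    """Return the reciprocal of the highest two-fold dilution that still
    agglutinates; wells[i] is True if well i shows agglutination."""
    titer = 0
    for i, agglutinated in enumerate(wells):
        if not agglutinated:
            break
        titer = start_dilution * 2 ** i
    return titer

# Seven agglutinating wells followed by clear wells give a titer of 64:
print(ha_titer([True] * 7 + [False] * 5))  # -> 64, i.e. 2**6
```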
Discussion

Plants have been reported to be an alternative and reliable expression system for seasonal and pandemic influenza vaccines [27]. Transient expression is the most suitable production method with regard to influenza pandemics or zoonotic outbreaks: it has been demonstrated that a time period of 3-4 weeks is sufficient to produce a large dose of vaccines [10,28]. We successfully transiently expressed the hemagglutinin ectodomain of influenza A/Texas/05/2009 (H1N1) in N. benthamiana leaves and obtained a high accumulation level of 8.6% TSP. This was obtained by combining transient transformation, which allows yields well over those obtained in stable transgenic plants, and the pEAQ-HT binary vector, which contains the CPMV UTRs (acting as translation enhancers) as well as the P19 gene, which prevents silencing [21]. However, three further improvements were made in this work which resulted in enhanced H1 expression. First, the utilization of an A. tumefaciens strain which constitutively expresses virG made phenolics unnecessary to activate transformation [24]. As a consequence, the accumulation of H1-HFBI was enhanced by 30% when acetosyringone was omitted from the inoculation medium (S3 Fig.). Second, using the En2PMA4 promoter instead of the CaMV 35S promoter was probably an additional asset, since this exchange allowed a 50% increase of GFP expression in transient expression (S2 Fig.). Third, fusion of H1 to HFBI increased expression by ~2.5-fold (S4 Fig.). The effect of HFBI fusion on HA accumulation is consistent with the 2-fold and 2-3-fold increases of GFP-HFBI reported by Joensuu and co-workers [17] and Gutierrez and co-workers [29], respectively. However, our results contrast with those of Phan and co-workers [19], who showed no yield improvement by fusing HFBI to an H5 ectodomain. This discrepancy might be explained by the weak sequence identity between the H1 and H5 HAs (63%), and the possibility that the effect of HFBI on the accumulation level is protein dependent, as was observed for ELP fusions [30]. Taken together, the removal of acetosyringone, the use of the En2PMA4 promoter and the HFBI fusion led to a 330% increase of H1 expression.

(Fig. 10: Neutralizing properties of antibodies induced by H1-HFBI. Sera from the ten vaccinated mice were serially 2-fold diluted and incubated with inactivated virus for 30 min, and then RBCs were added. Pre-immune sera were used as a negative control. Hemagglutination inhibition titers were determined. The mean for each test group was calculated, and bars represent SD.)

Direct comparisons between our data and those previously obtained for the expression of HA in plants are not straightforward for several reasons: the HA origin (virus strain), the portion of HA that is expressed, and the yield calculation (% of TSP or % leaf fresh weight) differ. However, in our hands, since TSP represents ~6 mg protein/g leaf fresh weight, a H1-HFBI yield of 9.7% TSP is equivalent to ~600 mg H1-HFBI/kg leaf fresh weight, or ~510 mg H1/kg leaf fresh weight after subtracting the HFBI counterpart. This value exceeds those reported for the transient expression of the complete HA (50 mg/kg; [7]) and is within the same range as HA soluble portions expressed with a launch vector system which involves components of a plant RNA virus (400-1300 mg/kg; [9]).
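The yield conversion quoted above follows directly from the stated assumptions (~6 mg TSP per g fresh leaf, 9.7% TSP, HFBI contributing 11.7% of the fusion's mass), as this back-of-envelope sketch shows.

```python
tsp_mg_per_kg_leaf = 6.0 * 1000      # ~6 mg soluble protein per g fresh leaf
h1_hfbi_share_of_tsp = 0.097         # H1-HFBI at 9.7% of TSP
hfbi_mass_fraction = 0.117           # HFBI's share of the fusion's mass

yield_fusion = tsp_mg_per_kg_leaf * h1_hfbi_share_of_tsp   # mg fusion per kg leaf
yield_h1 = yield_fusion * (1 - hfbi_mass_fraction)         # mg H1 per kg leaf
print(f"~{yield_fusion:.0f} mg H1-HFBI/kg, ~{yield_h1:.0f} mg H1/kg")
# ~582 and ~514 mg/kg, i.e. the ~600 and ~510 figures quoted above
```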
It is also much higher than the 0.05% TSP observed for an H5-HFBI fusion or the 0.5% TSP seen for H5 fused to an elastin-like protein [19]. However, those results were obtained with stable transformation, which is known to exhibit reduced performance when compared with transient expression. Unlike H1, H1-HFBI was located in protein bodies (Fig. 3). This might explain why the HFBI fusion resulted in increased accumulation, as it has been suggested that protein bodies prevent proteolytic degradation [31]. Untagged H5 was partly found in protein bodies, but to a lesser extent than H5-HFBI [19]. H1-HFBI deglycosylation by EndoH strongly supports the ER localization of the protein bodies, as EndoH specifically cleaves high-mannose N-glycans added in the ER, but not the complex N-glycans typically found in glycoproteins that reach the Golgi. As a decrease of about 10 kDa of H1-HFBI was observed upon EndoH treatment (Fig. 4), and since each N-glycosylation increases the size of a protein by ~2.5 kDa [32], we can estimate that four out of the six predicted glycosylation sites [33] are glycosylated. ATPS purification was shown to be effective for H1-HFBI recovery from plant leaf extracts (Fig. 5). However, a few proteins contaminated the purified fraction. They correspond to abundant TSP proteins and were identified by mass spectrometry (S1 Table). Three out of the five identified contaminants are chloroplastic proteins which belong to the oxygen-evolving complex [34]. This complex is one of the three sub-complexes that compose plant photosystem II, the function of which is to harvest light energy. H1-HFBI without visible contaminants was obtained by combining ATPS purification with differential ammonium sulfate precipitation (Fig. 8). Combining these two approaches also allows for an increase in the amount of surfactant used during ATPS. As the recovery efficiency is proportional to the concentration of surfactant used [17], using 8 to 10% instead of 4% Triton X-114 might improve the H1-HFBI recovery, which was about 70% in this study. The higher surfactant concentration implies a larger volume of the lower phase, but this is unimportant as the following step (ammonium sulfate precipitation) concentrates the purified proteins. This combined purification has the advantage of being scalable, as ATPS has been shown to be efficient up to 20 L [18]. Recombinant hemagglutinin monomers can aggregate into HMWO, but this varies according to the viral strain, the expression system, the genetic modifications, and the purification method [26]. Protein aggregates are of interest in vaccination because they are more immunogenic than monomers, although concerns about safety have recently been raised [35]. Purified plant-produced H1, H5, and H5-ELP ectodomains have been expressed as monomers [20,36], but another example from the literature showed plant-produced HMWO [26]. Also, HA from the same influenza strain used in the present study was expressed in Escherichia coli as a HMWO [37]. In this study, we observed that the H1 ectodomain alone was probably expressed as monomers, as no hemagglutination was observed, while the H1 ectodomain fused to HFBI was expressed as both monomers and oligomers, with a preponderance of the latter. This suggests that the formation of those oligomeric structures is due to HFBI interactions, which is another advantage of this carrier. Yet, this effect may be protein and expression dependent, as it has been shown that HFBI self-interacts at a given concentration [38].
Additionally, in order to increase the oligomeric form and enhance the immunogenicity of H1-HFBI, a trimerization motif such as GCN4-PII could be inserted between HA and HFBI. A fraction of H1-HFBI had a size close to that of H1 (Fig. 2), suggesting that cleavage occurred in the linker region. Changing the latter could further improve the yield of H1-HFBI. The hemagglutination assay was chosen to assess the biological activity, and consequently the proper folding, of H1-HFBI. This test was made possible because of the presence of oligomeric structures, whereas monomers are not able to agglutinate RBCs. A positive response using a dose as low as 0.3 µg/ml was obtained. This demonstrates that the hydrophobin fusion and the purification method used did not impact the biological activity. H1-HFBI was shown to induce a significant serum antibody titer in vaccinated mice (Fig. 10). This was demonstrated by ELISA with an HA ectodomain produced in mammalian cells, in order to eliminate a potential response coming from anti-hydrophobin antibodies and to confirm the specificity of the induced anti-HA antibodies. H1-HFBI is also expected to be protective in mice, as the calculated hemagglutination inhibition titer was 83, while the minimal titer required for a vaccine to be protective in humans is 40 [39]. Blood samples were collected after the 4th and the 6th booster immunization, but no significant differences were observed between the two samples. Regarding pandemics, when part of the population is immunologically naïve to the emerging viral strain, two vaccine doses may be required, while one dose should be enough for people with pre-existing immunity against the virus lineage. Therefore, four booster immunizations are too many, and a dose-ranging study has to be performed to investigate the potential of this vaccine candidate. This study should also include different adjuvants such as Quil A or AbISCO. One might wonder whether injection of the fungal hydrophobin into animals and humans could trigger deleterious effects. Hydrophobins have a nontoxic nature [40] and prevent immune recognition of fungal spores, suggesting that they are not immunogenic [41]. This suggests that hydrophobin fusion is safe for antigen design, even though further investigation must be performed. As an exploratory experiment, the presence of anti-HFBI antibodies in a vaccinated mouse serum was assessed by Western blotting of HFBI fused to another protein, the Green Fluorescent Protein (GFP), as well as of unfused GFP as a control, both expressed in N. benthamiana (S6 Fig.). The serum did not show HFBI recognition. However, Nakari-Setälä and colleagues were able to produce anti-HFBI antibodies by immunization of rabbits [42], indicating that the protein might be immunogenic. Therefore, further investigations are required to determine the HFBI immunogenicity and its impact on the immunogenicity of the fused protein. In case of a negative impact, or the formation of anti-HFBI antibodies that might be deleterious for the patient, the addition of a proteolytic cleavage site between H1 and HFBI could be considered. This cleavage could also be useful to remove a potential trimerization motif added to stabilize the oligomeric form into a more homogeneous product. Many efforts have been made to find an alternative to the current egg-based vaccine technology. In this study, we investigated a hydrophobin fusion to a recombinant hemagglutinin ectodomain.
This fusion was shown to enhance the accumulation level and to allow rapid, easy, and scalable purification, while the fused protein remained biologically active, was immunogenic, and induced neutralizing antibodies in mice. Transient expression of H1-HFBI is therefore a promising approach to produce seasonal and/or pandemic influenza vaccines.

Ethics statement

The experiments, maintenance and care of mice complied with the guidelines of the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes (CETS No. 123). The protocol was approved by the Committee on the Ethics of Animal Experiments of the University of Liège, Belgium (Permit Number: 06-594). All efforts were made to minimize suffering.

Construction of the H1 and H1-HFBI expression vectors

The pEAQ-specialK-HT plasmid [21] was used as an expression vector. The initial p35S promoter was replaced by the Nicotiana plumbaginifolia H+-ATPase PMA4 promoter reinforced with two copies of the 35S enhancer [23]. The DNA sequence corresponding to codons 1 to 566 of the influenza A/Texas/05/2009 (H1N1) HA gene was optimized for plant expression and synthesized by GenScript (Piscataway, USA). The sequence encoding the HA ectodomain (codons 18-529) was amplified from this sequence using the primers HA-Chit (5′-TATCCTCGGCCGAAGATACCCTCTGCATTGG-3′) and HA-HFBIR (5′-CGAGTGAACCACCACCCTGATAGATCCTGGTACTC-3′). This was fused by overlap extension PCR to the signal peptide sequence (codons 1 to 21) of the Arabidopsis thaliana basic endochitinase (accession number P19171) amplified from pSK-chit-OspA [44] using the primers chitAgeI (5′-AACACCGGTATGAAGACTAATCTTTTTCTC-3′, AgeI site underlined) and Chit-HA (5′-CCAATGCAGAGGGTATCTTCGGCCGAGGATAATGAT-3′). The resulting chit-HA fragment was cloned into the pGEM-T Easy vector and sequenced. A DNA sequence corresponding to a GGGSGGGS linker, codons 23 to 97 of the HFBI gene from Trichoderma reesei (P52754), a GGGG linker, and the KDEL ER-retrieval motif was optimized for plant expression and synthesized by GenScript (Piscataway, USA). The sequence was amplified by PCR with the primers HA-HFBIF (5′-GAGTACCAGGATCTATCAGGGTGGTGGTTCACTCG-3′) and HFBIXhoI (5′-TTGCTCGAGTCATAACTCATC-3′, XhoI site underlined). The HFBI-KDEL amplicon was fused by overlap extension PCR to the chit-HA fragment. The resulting PCR product was introduced into pEAQ-HT using AgeI/XhoI to generate the pEAQ-H1-HFBI binary plasmid. The untagged H1 gene construct was obtained by PCR from the H1-HFBI construct using the primers chitAgeI and KDELXhoI (5′-TTGCTCGAGCTGATAGATCCTGGTACTCTC-3′, XhoI site underlined) and cloned into pEAQ-HT to give pEAQ-H1-HT. The nucleotide and amino acid sequences of H1-HFBI and H1 are displayed in S1 Fig.

Plant transient transformation

The pEAQ-H1-HT and pEAQ-H1-HFBI-HT binary plasmids were introduced into A. tumefaciens LBA4404 virG by electroporation. The A. tumefaciens strains were grown overnight at 28 °C in 2YT medium (1.6% bacto-tryptone, 1% bacto-yeast-extract, 0.1% glucose, 0.02% MgSO4) supplemented with 20 µg/ml rifampicin, 40 µg/ml gentamycin, and 50 µg/ml kanamycin. The cells were harvested by centrifugation (5,000 g, 5 min, 15 °C), washed three times in infiltration medium (10 mM MES monohydrate, 10 mM MgCl2, pH 5.3 (KOH)), and resuspended in infiltration medium at a final OD600 of 0.6. N. benthamiana leaves were then infiltrated on the abaxial side through the stomata using a syringe. The plants were incubated for 6 days under routine culture conditions.
N. tabacum BY-2 cell stable transformation

N. tabacum BY-2 suspension cells were transformed by co-cultivation with A. tumefaciens as described previously [44]. Transgenic calli were selected on MS medium supplemented with 100 µg/ml kanamycin.

Protein electrophoresis and Western blotting

The protein content of the different samples was determined using the Bradford method [45]. For Western blotting, proteins were electrotransferred onto a polyvinylidene fluoride membrane; the membrane was then saturated, incubated first with goat anti-influenza A antibodies (1:1,000, OBT1551, AbD Serotec, UK) for 1 h at room temperature, and then with an HRP-conjugated anti-goat antibody (1:10,000, A5420, Sigma, St. Louis, MO) for 1 h at room temperature. The membrane was incubated for 2 min with Lumi-Light (Roche, Switzerland) and the signals were quantified using a Kodak Image Station 4000R (Eastman Kodak Company, Rochester, NY). To obtain a rough estimation of the expression level of the recombinant H1 and H1-HFBI proteins, an immunoblotting technique was applied using the extracellular domain of a recombinant influenza A/Texas/05/2009 (H1N1) hemagglutinin produced in human cells (11085-V08H, Sino Biological, China) as a standard. The soluble fraction obtained after homogenization of agro-infiltrated leaves was serially diluted to obtain band intensities similar to the band intensity of the standard protein used at varied amounts (50, 100, 200, 500 ng). Band intensities were quantified using the Kodak Image Station 4000R software. For H1-HFBI quantification, 16 independent samples were analyzed.

Endoglycosidase H treatment

Leaf protein samples (5 µg) containing H1-HFBI were diluted in 50 mM sodium citrate pH 5.5 (HCl), 0.5 mM PMSF. Then, 0.2 U/ml Endoglycosidase H (Roche) was added. After incubation at 37 °C for 0, 15, or 60 min, the reaction was stopped by the addition of SDS loading buffer.

H1-HFBI extraction and purification

Agro-infiltrated leaves were frozen in liquid nitrogen and homogenized using a mortar and pestle. The resulting powder was then resuspended in 10 volumes of ice-cold PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 2 mM KH2PO4) supplemented with 1 mM PMSF, and homogenized in a Potter homogenizer or a shaker for small or large volumes, respectively. The homogenate was clarified by centrifugation at 15,000 g for 20 min at 4 °C. Membrane proteins were discarded by ultracentrifugation at 125,000 g for 30 min at 4 °C. For ATPS, the TSP were pre-warmed in a water bath until the temperature reached 28 °C. The TSP were then mixed vigorously with 4% (w/v) Triton X-114 and introduced into a separation funnel. The two phases were allowed to separate for 15 min at 28 °C. The bottom phase was recovered and the detergent was removed by the addition of isobutanol (10 times the Triton X-114 volume). After centrifugation at 5,000 g for 5 min at room temperature, the new bottom phase containing H1-HFBI was recovered. ATPS-purified H1-HFBI was further purified by adding solid ammonium sulfate up to 5% or 10% saturation. The solution was then stirred for 1 h at 4 °C and centrifuged at 20,000 g for 15 min at 4 °C. The supernatant was discarded and the remaining pellet was dissolved in PBS.

Protein analysis by mass spectrometry

The bands corresponding to the contaminating proteins were excised from the gel, treated with trypsin, and analyzed by MS/MS, as described in [47] and as detailed in S1 Table.
Gel filtration chromatography

For size exclusion chromatography, a Superdex 200 column (GE Healthcare, UK) (10 × 300 mm) was used coupled to an ÄKTA Explorer (GE Healthcare) purification system. After column equilibration with PBS, 100 µl of gel filtration standards (Bio-Rad, #151-1901) were injected and a flow rate of 1 ml/min was applied until the last standard was eluted. Then, a 500 µl ATPS-purified sample was applied to the column under the same conditions. Effluents were collected in 1 ml fractions.

Hemagglutination assay and hemagglutination inhibition assay

The functional activity of H1-HFBI was evaluated using a hemagglutination assay according to standard procedures [48]. Recombinant H1-HFBI was diluted to a final concentration of 4 µg/ml in PBS, and 50 µl aliquots were serially two-fold diluted in U-bottom 96-well plates. After the addition of 50 µl of 1% (w/v) RBCs, the plates were incubated for 30 min at 20 °C. Bovine serum albumin was used as a negative control. The hemagglutination inhibition assay was based on standard procedures as well [46]. Influenza A/Texas/05/2009 virus purified from allantoic fluid and formol-inactivated was diluted to a final concentration of 8 HAU/50 µl after quantification by a hemagglutination assay. Sera from immunized mice were diluted with three volumes of cholera filtrate containing Receptor Destroying Enzyme (Sigma, C8772-1VL) and incubated for 16 h at 37 °C, then heat inactivated for 30 min at 56 °C. Two-fold dilutions of inactivated sera were followed by incubation with 4 HAU of A/Texas/05/2009 virus for 30 min at 20 °C. Chicken erythrocytes (1%) were added and incubated for an additional 40 min at 20 °C. The HAI titer was calculated as the reciprocal of the highest dilution that produced complete hemagglutination inhibition.

Mice immunization

A group of ten 8-week-old female CD1 mice were blood sampled before vaccination. The mice were then vaccinated intraperitoneally with 50 µg of purified H1-HFBI every two weeks for a total of 6 hyperimmunizations. The vaccine was formulated with Complete Freund's adjuvant for the first two immunizations, and with Incomplete Freund's adjuvant for the booster immunizations. Blood samples were collected after 4 boosts using a slight incision in the mice tails. Two weeks after the 6th boost, the mice were killed humanely by an overdose of pentobarbital and exsanguination. The sera collected were used for indirect ELISA and the hemagglutination inhibition assay.

Indirect ELISA

Microtiter plates were coated with 100 µl/well of 5 µg/ml (in PBS) of the extracellular domain of recombinant influenza A/Texas/05/2009 (H1N1) hemagglutinin produced in human cells (Sino Biological, 11085-V08H) and incubated for 16 h at 4 °C. The plates were then washed three times in PBST (0.1% Tween 20 in PBS) and saturated for 1 h at room temperature with 200 µl/well of PBS supplemented with 5% dried non-fat milk. After three additional washing steps, 100 µl of a 1:100 dilution of sera from the immunized mice were serially two-fold diluted and incubated for 90 min at 37 °C. The plates were then washed three times with PBST and incubated for 1 h at room temperature with 100 µl/well of a 1:10,000 dilution of HRP-conjugated anti-mouse IgG (Biognost Millipore, AP308P). After four washes, 100 µl/well of o-phenylenediamine peroxidase substrate (Sigma) in citrate buffer (0.05 M Na2HPO4, 0.025 M citric acid) was added.
The reaction was stopped after 15 min with 50 µl of 1 M H2SO4 and the absorbance was measured at 490 nm (Model 550 Microplate Reader; Bio-Rad, Hercules, CA).

S2 Fig. N. benthamiana leaves were agro-infiltrated with an A. tumefaciens strain containing the pEAQ-HT vector with the GFP gene driven by the p35S or En2PMA4 promoter. A leaf transformed with an empty pEAQ-HT vector was used as a negative control. A TSP fraction was prepared at 6 dpi. (a) Twenty µg of TSP were analyzed by SDS-PAGE and the gel was stained with colloidal blue. The large Rubisco subunit is indicated (*), and the GFP is indicated by an arrow. (b) The GFP content of six independent samples (50 µg TSP) for each promoter was quantified by fluorimetry (excitation at 395 nm and emission at 508 nm). doi:10.1371/journal.pone.0115944.s002 (PDF)

S6 Fig. Samples containing H1-HFBI, GFP-HFBI or GFP (15 µg TSP) transiently expressed in N. benthamiana were analyzed by Western blotting using a 1:200 diluted serum from mouse 6 immunized with H1-HFBI and a 1:5,000 dilution of an HRP-conjugated anti-mouse secondary antibody. The membrane was then stripped in 0.4 N NaOH for 3 min and incubated with a polyclonal anti-GFP antibody and a polyclonal anti-rabbit secondary antibody. Samples coming from a leaf infiltrated with an empty vector and a commercial recombinant H1 expressed in mammalian cells (rHA(+)) were used as negative and positive controls, respectively. Note that additional bands were detected with the mouse serum at a size similar to unfused GFP. They probably do not correspond to GFP, as the fused GFP-HFBI is not recognized. doi:10.1371/journal.pone.0115944.s006 (DOCX)

S1 Table. Identification of the proteins contaminating the ATPS-purified sample. (1) Method: The acquired spectra were analyzed using the Applied Biosystems GPS Explorer (version 3.6) and the Matrix Science MASCOT algorithm in the NCBI N. benthamiana database and the NCBI N. benthamiana EST database, as described in Duby et al. (2010). Reverse phase separation of peptides was performed on an Ultimate 3000 chromatography system (ThermoFisher Scientific) using a C18 PepMap 100 analytical column (150 mm, 3 mm i.d., 100 Å) (ThermoFisher Scientific). Prior to separation, the sample was dissolved in 0.025% (v/v) TFA and 5% (v/v) ACN and desalted using a C18 PepMap 100 pre-column (10 mm, 5 mm i.d., 100 Å). Peptides were backflushed onto the analytical column at a flow rate of 300 nL/min with a 180 min linear gradient from 8 to 76% (v/v) ACN in water, containing 0.1% (v/v) TFA in buffer A and 0.085% (v/v) TFA in buffer B. The eluted peptides were mixed with α-cyano-4-hydroxycinnamic acid (4 mg/mL in 70% ACN/0.1% TFA) and spotted directly onto a MALDI target using a Probot system (ThermoFisher Scientific). (2) The band numbers correspond to those annotated in Fig. 5. doi:10.1371/journal.pone.0115944.s007 (PDF)
Elevated (Pro)renin Receptor Expression Contributes to Maintaining Aerobic Metabolism in Growth Hormone Deficiency

Abstract

Context: Growth hormone deficiency (GHD) leads to obesity and may induce tissue hypoxia. As the (pro)renin receptor [(P)RR] is reported to contribute to aerobic metabolism by stabilizing pyruvate dehydrogenase (PDH), it may play a substantial role in GHD.

Objective: We aimed to investigate serum soluble (P)RR [s(P)RR] concentration, the origin of s(P)RR, and the significance of (P)RR in GHD.

Design, Setting, and Participants: Serum s(P)RR concentration was examined in 72 patients with pituitary diseases, including 32 patients with severe GHD (SGHD), and after GH replacement in 16 SGHD patients. Leptin-deficient ob/ob obese mice were treated with pegvisomant, a GH receptor antagonist, to explore the source of elevated serum s(P)RR in GHD. Adipocytes were cultured with 5% O2 to examine the effects of hypoxia.

Results: Serum s(P)RR concentration was higher in patients with SGHD than in those without SGHD. Obesity was an important determinant of s(P)RR concentration. Serum s(P)RR concentration significantly decreased after GH replacement in SGHD patients. (P)RR mRNA expression was increased specifically in the adipose tissue (AT) of pegvisomant-treated obese mice compared with that of control obese mice. Hypoxia in cultured adipocytes increased (P)RR expression without affecting PDH E1 β subunit (PDHB) expression; however, with (P)RR knockdown by small interfering RNA, hypoxia significantly decreased PDHB expression.

Conclusion: GHD patients showed increased serum s(P)RR concentration, possibly caused by obesity and hypoxia. (P)RR expression in the AT of GHD patients may be elevated to help maintain aerobic metabolism under hypoxia. Thus, the elevated serum s(P)RR level may reflect hypoxia in ATs.

Growth hormone deficiency (GHD) is the most common hormone deficit in hypopituitarism. GHD not only causes obesity [1], glucose intolerance [2], and dyslipidemia [3] but also decreases aerobic exercise ability [4,5], as a result of restricted oxygen delivery capacity [6]. A current method of diagnosis for GHD depends on multiple GH stimulation tests, such as by insulin and glucagon [7], and these tests place a burden on patients in terms of economy and time. Therefore, an easier and more effective biomarker of GHD has been desired. Previous studies showed that the (pro)renin receptor [(P)RR] contributes to tissue renin-angiotensin system-related pathogenesis, such as hypertension and diabetes [8][9][10]. Furthermore, there is a report that (P)RR is expressed in pituitary adenoma cells and regulates the secretion of GH [11]. In addition, a recent study demonstrated an important role of (P)RR under hypoxic conditions [12], in which (P)RR binds to pyruvate dehydrogenase (PDH) protein and maintains PDH activity through inhibition of its degradation. In fact, it has been reported that the serum concentration of the N-terminal domain of (P)RR, called the soluble form of (P)RR [s(P)RR], is elevated in various hypoxic conditions, such as hypertension [13], sleep apnea syndrome [14], and chronic heart failure [15]. Furthermore, hypoxia in the adipose tissue (AT) of obese mice [16][17][18][19] and increased expression of (P)RR in the AT of obese mice have also been reported [20]. This study was designed to evaluate whether the serum s(P)RR level is elevated in GHD and, if so, to assess the origin of s(P)RR and determine the significance of elevated s(P)RR concentration in GHD.
A. Study Participants

In this prospective observational study, 72 adult patients with pituitary or hypothalamic diseases at Tokyo Women's Medical University Hospital were enrolled between December 2011 and December 2013. The study protocol was approved by the Ethics Committee of Tokyo Women's Medical University (2303-R5) and registered in the University Hospital Medical Information Network Clinical Trial Registry (UMIN000006222) on 1 October 2011, and all patients provided written informed consent. Medical records were used to obtain patient characteristics (such as age, sex, body weight, and history of hypertension) and metabolic parameters (such as creatinine). All patients with hypopituitarism or diabetes insipidus received appropriate supplementation with hydrocortisone, levothyroxine, gonadal steroids, and/or desmopressin. Obesity was defined as a body mass index (BMI) higher than 25 kg/m2, according to the criteria of the Japan Society for the Study of Obesity [21]. All patients underwent the GH-releasing peptide 2 (GHRP-2) test to diagnose adult severe GHD (SGHD), defined as a peak GH concentration <9 ng/mL [22]. A GH cutoff value of 9 ng/mL with GHRP-2 corresponded to a GH value of 1.8 ng/mL with the insulin tolerance test when the GH value was calibrated with the recombinant World Health Organization 98/574 standard [22]. Although the GHRP-2 test is not included in the Endocrine Society Guidelines [23], it is considered safe [24] and is widely used to diagnose SGHD in Japan because of its high sensitivity and specificity compared with the insulin tolerance test [25]. The peak GH concentration after GHRP-2 stimulation was ≥9 ng/mL in 40 patients (SGHD−) and <9 ng/mL in 32 patients (SGHD+). Etiologies of the SGHD− patients included 11 nonfunctioning pituitary adenoma, 10 Rathke cyst, eight prolactinoma, two Cushing disease, two acromegaly, and others. Etiologies of the SGHD+ patients included 17 nonfunctioning pituitary adenoma, five craniopharyngioma, four Rathke cyst, and others. In all patients with a hormonally functioning tumor, remission was confirmed. Sixteen of the SGHD+ patients received GH replacement therapy for an average of 8.4 months (6 to 18 months). The GH doses were titrated to maintain their insulin-like growth factor 1 (IGF-1) levels at age-adjusted IGF-1 levels unless adverse effects manifested.

B. Animals

All procedures and animal care were approved by our Institutional Animal Research Committee and conformed to the Guideline for the Care and Use of Laboratory Animals of Tokyo Women's Medical University. Male C57BL/6-Ham-Slc ob/ob and +/+ mice were purchased from Japan SLC (Hamamatsu, Japan). All mice were fed a normal diet, and at 10 weeks of age, 40 mg/kg pegvisomant (Pfizer, Tokyo, Japan), a GH receptor antagonist, or an equivalent volume of saline was injected subcutaneously for 5 consecutive days. One day after the final injection, the mice were killed by decapitation under isoflurane inhalation anesthesia (Intervet, Tokyo, Japan). The serum, liver, kidneys, perigonadal fat as white AT (WAT), and gastrocnemius muscles were obtained. WAT weight was measured, and the obtained organs were quickly frozen in liquid N2 and stored at −80°C.

At 48 hours after transfection, the medium was replaced with serum-free maintenance medium, and the cells were placed into a Hypoxic Incubator Chamber (Stemcell Technologies, Vancouver, Canada) to be incubated under a hypoxic or a normoxic condition at 37°C for 8 hours.
To maintain a hypoxic condition, the chamber was filled with 5% oxygen, 5% CO2, and 90% nitrogen.

D. Assays

Creatinine, plasma glucose, hemoglobin A1c [HbA1c, National Glycohemoglobin Standardization Program (NGSP)], low-density lipoprotein (LDL)-cholesterol, high-density lipoprotein (HDL)-cholesterol, and triglyceride were measured by standard laboratory methods at our clinical laboratory center. Estimated glomerular filtration rate (GFR) was calculated using the formula developed by the Japanese Society of Nephrology [27]. Serum GH concentrations of patients were measured using an enzyme immunoassay (Tosoh Bioscience, Tokyo, Japan), calibrated with the recombinant WHO 98/574 standard. Serum IGF-1 concentrations of patients were measured using the immunoradiometric assay "Daiichi" (Fujirebio, Tokyo, Japan). The IGF-1 standard deviation (SD) score was calculated based on the age- and sex-specific normative data for IGF-1 in the Japanese population [28]. Serum IGF-1 concentrations of mice were measured using the insulin-like growth factor I enzyme-linked immunosorbent assay (ELISA) kit (R&D Systems, Minneapolis, MN). Serum s(P)RR concentrations of patients and mice were measured using the Soluble (Pro)renin Receptor ELISA Assay Kit (Takara Bio, Shiga, Japan) [29]. Each ELISA was performed according to the manufacturer's protocol, and absorbance was measured by a Chameleon V (Hidex, Turku, Finland) for s(P)RR and a SpectraMax i3 (Molecular Devices, San Jose, CA) for IGF-1.

E. Quantitative Real-Time Polymerase Chain Reaction

Total RNA was extracted from mouse tissue and cell lysates using TRIzol (Thermo Fisher Scientific, Waltham, MA). Reverse transcription was performed with total RNA using the High Capacity cDNA Reverse Transcription Kit (Life Technologies, Tokyo, Japan). Expression of (P)RR, furin, and a disintegrin and metalloprotease (ADAM)19 mRNA was determined by quantitative real-time reverse transcriptase-polymerase chain reaction using TaqMan Gene Expression Assays (Life Technologies, Tokyo, Japan) on a StepOnePlus (Life Technologies, Tokyo, Japan). Target mRNA expression was corrected by 18S ribosomal RNA expression and shown as a relative expression ratio. All samples were analyzed in duplicate.

F. Statistical Analyses

All data are shown as means ± SD except for the peak GH response to GHRP-2, which is expressed as median (range). Baseline characteristics of the patients were compared between SGHD− and SGHD+ patients by unpaired t test, whereas levels of GH peak response to GHRP-2 were compared by Mann-Whitney U test. Categorical variables were compared using Pearson's χ² test. Serum s(P)RR and IGF-1 concentrations and IGF-1 SD scores of the patients were compared between before and after GH replacement therapy by paired t test. Multiple comparisons were analyzed using the Tukey-Kramer test. Because of their skewed distribution, levels of GH peak response to GHRP-2 were log transformed for regression analyses. In the multivariate regression analysis, explanatory factors were selected from the results of the univariate regression analysis. The threshold for significance was P < 0.05. All statistical analyses were performed using JMP Pro 12 (SAS Institute, Tokyo, Japan).

A. Characteristics of Patients

Characteristics of the patients with (SGHD+) or without (SGHD−) GHD are shown in Table 1. BMI and serum levels of creatinine and triglyceride were significantly higher, and estimated GFR and serum HDL-cholesterol level significantly lower, in the SGHD+ patients than in the SGHD− patients.
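As a concrete illustration of the statistical plan in Section F above, the sketch below reproduces the three main comparison types on synthetic data; the group means and SDs echo the Results for realism, but every generated value is invented.

```python
# Illustrative reproduction of the statistical plan (Section F), synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Serum s(P)RR (ng/mL): unpaired t test, SGHD+ vs SGHD-
sprr_neg = rng.normal(21.7, 3.5, 40)
sprr_pos = rng.normal(23.2, 2.3, 32)
t, p_t = stats.ttest_ind(sprr_pos, sprr_neg)

# GH peak after GHRP-2 (skewed): Mann-Whitney U test
gh_neg = rng.lognormal(3.0, 0.6, 40)
gh_pos = rng.lognormal(0.5, 0.6, 32)
u, p_u = stats.mannwhitneyu(gh_pos, gh_neg)

# s(P)RR before vs after GH replacement in the same 16 patients: paired t test
before = rng.normal(30.3, 2.7, 16)
after = before - rng.normal(3.4, 2.0, 16)
t2, p_p = stats.ttest_rel(before, after)

print(f"unpaired t: p={p_t:.3g}; Mann-Whitney U: p={p_u:.3g}; paired t: p={p_p:.3g}")
```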
The number of deficient anterior pituitary hormones and the frequency of diabetes insipidus were higher in the SGHD+ patients than in the SGHD− patients, and IGF-1, IGF-1 SD scores, and GH peak response to GHRP-2 were significantly lower in the SGHD+ patients than in the SGHD− patients.

B. Serum s(P)RR Concentration in Obese and Lean Patients With or Without SGHD

Serum s(P)RR concentration was significantly higher in the SGHD+ patients (23.2 ± 2.3 ng/mL) than in the SGHD− patients [21.7 ± 3.5 ng/mL; Fig. 1(a)]. When the patients were divided into four groups by obesity and SGHD, the serum s(P)RR concentration in the SGHD+ patients with obesity (Obese SGHD+; 25.2 ± 3.2 ng/mL) was significantly higher than that in the SGHD+ patients without obesity [Lean SGHD+; 21.9 ± 2.9 ng/mL; Fig. 1(b)]. No significant effect of obesity on the serum s(P)RR concentration was observed in the SGHD− patients (Obese SGHD− 23.0 ± 6.2 ng/mL vs Lean SGHD− 21.5 ± 3.0 ng/mL). Serum s(P)RR concentration in Lean SGHD+ was not significantly different from that of Lean SGHD−. Obese SGHD+ patients showed a non-significant tendency towards an increased serum s(P)RR level compared with Obese SGHD− patients. In the SGHD+ patients who received GH replacement therapy (average 2.3 ± 0.5 mg/week), the GH replacement therapy significantly increased their serum IGF-1 concentrations (87.6 ± 42.1 to 166.6 ± 51.8 ng/mL) and IGF-1 SD scores (−2.3 ± 1.6 to 0 ± 1.4) and decreased their serum s(P)RR concentrations [30.3 ± 2.7 to 26.9 ± 4.6 ng/mL; Fig. 1(c)-1(e)]. There was no significant change in body weight after GH replacement (before GH: 77.6 ± 13.1 vs after GH: 78.6 ± 13.3 kg). Changes in serum s(P)RR concentration did not significantly correlate with GH dose or with changes in BMI [Fig. 1(f)].

C. Regression Analyses of Serum s(Pro)Renin Receptor Level

Univariate regression analyses of serum s(P)RR concentration showed significant positive correlations with BMI and creatinine and negative correlations with HDL-cholesterol and the natural logarithm (ln) of the GH peak response to GHRP-2 (Table 2). BMI was the only significant explanatory variable in a multivariate regression analysis testing BMI, HDL-cholesterol, creatinine, and ln(GH peak response to GHRP-2) (Table 3).

D. (P)RR in Obese and Lean Mice With or Without Growth Hormone Receptor Blockade

Five-day treatment with pegvisomant significantly decreased serum IGF-1 levels compared with treatment with saline in both lean and obese mice [Fig. 2(a)]. The IGF-1 levels were similar between the pegvisomant-treated lean and obese mice. Blood glucose level [Fig. 2(b)] … [Table 1 footnote: Data are shown as means ± SD except for the GH peak response to GHRP-2, which is shown as median (range). Abbreviation: BP, blood pressure.] Expression of (P)RR mRNA in the gastrocnemius muscle, liver, or kidneys was not significantly different among the four groups. The mRNA expression of the (P)RR processing enzymes furin and ADAM19 in WAT was similar among the four groups [Fig. 3(b)].

E. Effects of Hypoxia on (P)RR Expression and PDH Activity in 3T3-L1 Cells

In the differentiated 3T3-L1 cells, 5% hypoxia treatment for 8 hours significantly increased (P)RR protein expression by 52% without altering (P)RR mRNA expression [Fig. 4 …].

Discussion

In this study, we found an elevated serum s(P)RR concentration in SGHD patients, especially those with obesity, and elevated (P)RR expression in the WAT of obese mice with GH receptor blockade.
We also showed the possibility of tissue hypoxia as an additional factor contributing to elevated (P)RR expression in the AT of GHD patients. SGHD patients showed a higher serum s(P)RR concentration than those without SGHD, and GH replacement therapy ameliorated the elevated s(P)RR concentration. A higher serum s(P)RR concentration was observed in obese SGHD patients than in lean SGHD patients, and regression analyses revealed that BMI was the only substantial explanatory variable for serum s(P)RR concentration. As AT weight is increased in GHD patients [30], we considered AT a possible source of the high serum s(P)RR in obese SGHD patients. To mimic GHD, we performed GH receptor blockade in lean and obese mice. Serum s(P)RR concentration was increased in obese mice, with and without GH receptor blockade, compared with lean mice. Whereas the mRNA expression of (P)RR was comparable in the muscle, liver, and kidneys among control and obese mice regardless of GH receptor blockade, (P)RR mRNA expression in the AT of obese mice was significantly higher after treatment with the GH receptor antagonist than with saline. The mRNA expression of the (P)RR processing enzymes, furin [31] and ADAM19 [32], was unchanged by GH receptor blockade in AT. These results suggested that increased expression, but not cleavage, of (P)RR in AT may be the cause of the elevated serum s(P)RR concentration in GHD patients. Hypoxia in the ATs of obese model animals has been reported [16][17][18][19]. GHD can also cause hypoxia in AT. Patients with GHD have impaired aerobic exercise capacity, as decreased cardiac function [33], lung volume [34,35], and red cell mass [6] reduce oxygen delivery to muscles. These findings suggest that the AT of obese patients with GHD may also be hypoxic. Previous studies showed that hypoxia stimulates PDH kinase (PDK) expression [36,37]. Generally, hypoxia increases PDK activity and decreases PDH activity. As illustrated in Fig. 5, a recent study showed that (P)RR is capable of maintaining PDH activity through stabilizing the PDHB protein [12]. In the current study, knockdown of (P)RR decreased expression of PDHB only in the hypoxic condition. [Fig. 4 legend (partial): Western blot image and protein levels for (P)RR and β-actin expression after normoxia or 5% hypoxia treatment of 8 hours. *P < 0.05 vs normoxia. (b) Relative mRNA expression of (P)RR after normoxia or 5% hypoxia treatment of 8 hours. (c) Relative mRNA expression of (P)RR after the 48-hour treatment with scrambled or (P)RR siRNAs. *P < 0.05 vs scrambled siRNA. (d) Representative Western blot image for (P)RR, PDHB, and β-actin after the 48-hour treatment with scrambled or (P)RR siRNAs, followed by 8-hour treatment of normoxia or 5% hypoxia.] These data suggest that the hypoxic condition in AT may be a contributing factor for elevated s(P)RR in obese GHD patients. Potential clinical implications from this study may be illustrated as follows. First, the tissue renin-angiotensin system may be activated in the AT of GHD patients. It has been shown that hypertensive patients have elevated serum s(P)RR, likely as a result of an activated intrarenal renin-angiotensin system [13]. In GHD patients, the prorenin-to-renin ratio was found to be increased [38], and we observed elevated (P)RR expression in the WAT of GHD model mice. As circulating prorenin can bind to (P)RR and acquire enzymatic activity without proteolytic cleavage [9], increased (P)RR in AT and prorenin in blood may synergistically activate the tissue renin-angiotensin system in the AT of GHD patients.
Second, (P)RR may contribute to the maintenance of PDH activity in adipocytes. The elevation of serum s(P)RR concentration in GHD patients was likely a result of the increased (P)RR expression in AT. As (P)RR binding to the PDH protein blocks phosphorylation and degradation of PDH, increased expression of (P)RR may preserve efficient energy metabolism but may also lead to increased oxidative stress in obesity and GHD, the effects of which remain to be determined.

[Figure 5. Hypothetical regulation of PDH activity by (P)RR in ATs. Under hypoxic conditions and GHD, PDH activity tends to decrease through the hypoxia-induced stimulation of PDK. However, hypoxia also enhances (P)RR expression. As (P)RR binds to PDHB and thus maintains PDH activity, PDH activity is actually unchanged even under hypoxic conditions if the expression of (P)RR is intact. CoA, coenzyme A; TCA, tricarboxylic acid.]

This study has several limitations. First, it is unclear whether the elevated serum s(P)RR concentration in GHD patients is a result of an increase in the volume of ATs or in (P)RR expression in ATs. The changes in serum s(P)RR concentration did not correlate with changes in BMI after GH replacement, but the current study did not examine (P)RR expression in the ATs of the patients. Second, the mechanism for the increased expression of (P)RR with hypoxia remains to be determined. Hypoxia induces glycolysis [39], adipogenesis [40], and adipokine production [41]. Whether these metabolic pathways affect (P)RR expression in AT should be examined in future studies. Third, SGHD was diagnosed by a single test, the GHRP-2 test, in the current study. As GH cutoff values for mild to moderate GHD have not been determined for the GHRP-2 test, the SGHD− patients in the current study include those with mild to moderate GHD. Whether serum s(P)RR concentrations in patients with mild to moderate GHD are increased compared with those without GHD cannot be determined from this study. In conclusion, the current study showed that serum s(P)RR concentrations were elevated in obese patients with GHD. The animal studies suggested that the origin of the elevated serum s(P)RR concentration is the AT, and the in vitro studies suggested that hypoxia may be one of the causes of elevated (P)RR expression in AT. The increased expression of (P)RR may contribute to the maintenance of energy metabolism in AT. Thus, the elevated serum s(P)RR levels may reflect hypoxic ATs caused by GHD and obesity. Further studies are needed to clarify the regulation of (P)RR expression under hypoxic conditions.
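A note on the relative expression values used throughout the mouse experiments above: the paper states only that target Ct values were corrected by 18S ribosomal RNA and reported as relative expression ratios (Section E). A minimal sketch, assuming the standard 2^-ΔΔCt (Livak) formulation, which is our assumption rather than the authors' stated method; all Ct values are invented:

```python
# Relative expression sketch: 2^-ddCt with 18S rRNA as the normaliser.
# The formula choice is an assumption; Ct values below are invented.

def relative_expression(ct_target, ct_18s, ct_target_ref, ct_18s_ref):
    """Fold change of a target transcript vs. a reference sample, 18S-normalised."""
    d_ct_sample = ct_target - ct_18s          # delta-Ct, sample of interest
    d_ct_ref = ct_target_ref - ct_18s_ref     # delta-Ct, control sample
    return 2.0 ** -(d_ct_sample - d_ct_ref)   # 2^-ddCt

# Example: (P)RR in WAT of a pegvisomant-treated ob/ob mouse vs. saline control
fold = relative_expression(ct_target=24.0, ct_18s=10.5,
                           ct_target_ref=25.2, ct_18s_ref=10.6)
print(round(fold, 2))  # ~2.14-fold
```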
WNT and inflammatory signaling distinguish human Fallopian tube epithelial cell populations

Many high-grade serous carcinomas (HGSCs) likely originate in the distal region of the Fallopian tube's epithelium (TE) before metastasizing to the ovary. Unfortunately, the molecular mechanisms promoting malignancy in the distal TE are obscure, largely due to limited primary human TE gene expression data. Here we report an in-depth bioinformatic characterization of 34 primary TE mRNA-seq samples. These samples were prepared from proximal and distal TE regions of 12 normal Fallopian tubes. Samples were segregated based on their aldehyde dehydrogenase (ALDH) activity. Distal cells form organoids with higher frequency and larger size during serial organoid formation assays when compared to proximal cells. Consistent with enrichment for stem/progenitor cells, ALDH+ cells have greater WNT signaling. Comparative evaluation of proximal and distal TE cell populations shows heightened inflammatory signaling in distal differentiated (ALDH−) TE. Furthermore, comparisons of proximal and distal TE cell populations find that the distal ALDH+ TE cells exhibit pronounced expression of gene sets characteristic of HGSC sub-types. Overall, our study indicates that increased organoid forming capacity, WNT/inflammatory signaling, and HGSC signatures underlie differences between distal and proximal regions of the human TE. These findings provide the basis for further mechanistic studies of distal TE susceptibility to malignant transformation.

Results

ALDH activity distinguishes organoid-forming cells. All Fallopian tubes used in our experiments were removed from donors who were not afflicted with ovarian cancer, did not carry mutant BRCA1/2 alleles, and were between 32 and 51 years of age. Proximal and distal regions of the TE were divided as indicated in Fig. 1A. To test whether there are regional differences in TE organoid formation, we prepared organoids from proximal and distal Fallopian tube regions and propagated them for 4 passages. Consistent with a previous report14, primary TE cells from both distal and proximal regions were able to form organoids (Fig. 1B). However, distal TE cells consistently formed organoids at a significantly higher frequency than their proximal region counterparts (Fig. 1C). Furthermore, organoids grown from the distal TE region tend to be significantly larger than their proximal counterparts (Fig. 1D). Both distal and proximal organoids contained ciliated (AcTub+) and secretory (PAX8+) cells, as well as cells expressing the stem/progenitor cell marker ALDH1A1. Based on previous observations that ALDH activity is frequently observed in stem/progenitor cells, we hypothesized that ALDH+ epithelial (EpCAM+) cell populations have increased organoid formation as compared to ALDH−/EpCAM+ cells. Therefore, we FACS-isolated viable EpCAM+/ALDH+ and EpCAM+/ALDH− cell populations from the proximal and distal regions of Fallopian tubes. After determining that sample storage time and Fallopian tube region do not seem to significantly affect each sample's epithelial cell composition (Supplementary Figure 2), we conducted organoid formation assays as diagrammed in Fig. 2 (Supplementary Table 2). Organoid formation is generally indicative of stem/progenitor cells ex vivo. Thus, these findings suggest that ALDH activity is a suitable means of enriching for stem/progenitor cells in human TE isolates.

Proximal and distal TE cell populations display distinct gene expression patterns.
Having determined that ALDH activity is a suitable criterion for enriching for stem/progenitor cells, we created mRNA-seq libraries for 34 samples (7 proximal EpCAM+/ALDH−, 9 proximal EpCAM+/ALDH+, 10 distal EpCAM+/ALDH−, 8 distal EpCAM+/ALDH+; Supplementary Table 3) from 12 generally healthy donors. Following data pre-processing (see Methods), we applied NGSCheckMate25 to verify that each library originated with the individual indicated by our records (Supplementary Figure 4). To validate our FACS strategy and identify any contamination present in our mRNA-seq samples, we performed a deconvolution analysis using the R package BSEQsc26 with recently published distal Fallopian tube single-cell mRNA-seq data27. We found that contamination from non-epithelial cells was minimal (Supplementary Figure 5). Even so, we tested whether the extent of T-cell or smooth muscle cell (the two contaminating cell types detected) contamination explained a statistically significant amount of variation in the expression of any genes. We found a significant effect in only 4 genes (Supplementary Figure 6). Therefore, we conclude it is unlikely that contamination by non-epithelial cell types confounds our results. As a final quality check, we performed Gene Set Enrichment Analysis (GSEA)28 using expressed ALDH family genes and found that ALDH gene expression is significantly up-regulated in EpCAM+/ALDH+ samples (Supplementary Figure 7). Principal component (PC) analysis found that the 4 cell populations segregate into visually distinct groups (Fig. 3A). To determine the extent to which variation in the mRNA-seq samples is associated with our experimental design, or with potentially confounding factors, we checked the significance of the association of the first 3 PCs with the Fallopian tube region each sample originated in and the ALDH activity of that sample, as well as with the individual each sample came from. We chose to examine Fallopian tube region and ALDH activity, as these were the criteria on which we FACS-isolated the cells. We chose to examine individual because the individual each sample came from seemed the factor most likely to confound our analysis. Of the potential covariates we tested, ALDH activity and region of origin correlated most strongly with PC1 and PC2. Importantly, the individual that donated sample material was not significantly correlated with any of the first PCs (Fig. 3B; a code sketch of this check is given at the end of this passage). Having observed the high correlation of ALDH activity and region of sample origin with PC1 and PC2, we performed differential expression analysis using the DESeq229 R package. As expected, comparisons between stem and differentiated cell enriched populations recover greater numbers of differentially expressed genes than comparisons between proximal and distal populations (Fig. 3C, Supplementary Tables 4-7). An overview of expression differences, which display expression trends distinguishing proximal and distal TE, is given in Fig. 3D.

Stem cell enriched populations exhibit increased Wnt signaling compared to differentiated cell enriched populations.

To contextualize our differential expression results, we conducted gene ontology enrichment analysis using genes that are upregulated in EpCAM+/ALDH+ populations compared to their EpCAM+/ALDH− counterparts (Fig. 4A). Stem/progenitor cells can play a role in malignant transformation, and so we began by searching genes upregulated in EpCAM+/ALDH+ populations for enrichment in the Disease Gene Network30.
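A minimal sketch of the PC-covariate association check referenced above. The data shapes mirror the study design (34 samples; covariates for region, ALDH status and donor), but the expression matrix is random, and the use of a Kruskal-Wallis test is our choice for illustration; the paper does not name the exact association test.

```python
# PC-covariate association check on invented data mirroring the 34-sample design.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
expr = pd.DataFrame(rng.normal(size=(34, 2000)))           # samples x genes
meta = pd.DataFrame({
    "region": rng.choice(["proximal", "distal"], 34),
    "aldh":   rng.choice(["ALDH+", "ALDH-"], 34),
    "donor":  rng.choice([f"D{i}" for i in range(12)], 34),
})

pcs = PCA(n_components=3).fit_transform(expr)              # first 3 PCs

# Test whether PC scores differ between the groups of each covariate
for name in meta.columns:
    for k in range(3):
        groups = [pcs[(meta[name] == g).to_numpy(), k] for g in meta[name].unique()]
        _, p = stats.kruskal(*groups)
        print(f"PC{k + 1} vs {name}: Kruskal-Wallis p = {p:.2g}")
```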
We have found top hits relating to metastatic disease (Fig. 4B, Supplementary Table 8). Querying the GO Biological Process ontology also recovered 'cell-cell signaling by wnt' as a prominent, statistically significant result (Fig. 4C, Supplementary Table 9). We continued by performing GSEA on the EpCAM+/ALDH+ vs. EpCAM+/ALDH− cell enriched populations (see Supplementary Table 10 for all gene sets used in this study). GSEA identified enrichment of the Hallmark Wnt/β-Catenin Signaling gene set (Fig. 4D). β-catenin and TCF family transcription factors are important mediators of canonical WNT signaling, which is an important pathway in maintaining SC self-renewal and in cancer. We found that distal EpCAM+/ALDH+ cell populations also show significant up-regulation of CTNNB1 (often referred to as β-Catenin; Fig. 4E) and TCF7 (Fig. 4F) compared with their EpCAM+/ALDH− counterparts. To see whether WNT signaling may distinguish proximal from distal EpCAM+/ALDH+ populations, we examined the expression fold changes of genes annotated as involved in the WNT Signaling GO Biological Process (Fig. 4G).

Inflammatory signaling is more pronounced in distal TE cell populations.

The same gene set enrichment analysis that identified up-regulation of Wnt/β-Catenin signaling in EpCAM+/ALDH+ compared to EpCAM+/ALDH− samples also found down-regulation of genes in the 'Hallmark Inflammatory Response' and 'Hallmark TNFα signaling through NF-κB' gene sets in EpCAM+/ALDH+ cells compared to EpCAM+/ALDH− cells (Supplementary Figure 8). This led us to wonder how extensively activation of inflammatory pathways might differ between TE populations. GSEA identifies significant enrichment of the Hallmark Inflammatory Response in differentiated cell populations from both the proximal and distal TE (Fig. 5A,B). In our differential expression analysis, we noted expression changes in genes associated with malignant disease. Among these genes is ROS1, which is upregulated in distal EpCAM+/ALDH+ and EpCAM+/ALDH− populations compared to proximal EpCAM+/ALDH+ and EpCAM+/ALDH− populations (Fig. 5C). ROS1 is a proto-oncogene involved in inflammatory myofibroblastic tumors31 and certain lung cancers32. We also note that IGF2 is more highly expressed in distal EpCAM+/ALDH+ and EpCAM+/ALDH− populations compared to the corresponding proximal populations (Fig. 5D). A recent study implicates IGF pathway activity, along with follicular fluid, in the malignant transformation of TE cells33. Intrigued by the possibility that inflammatory signaling varies across Fallopian tube regions, we decided to perform weighted gene co-expression network analysis (WGCNA34) to see if coordinated changes in signaling pathways could be identified between TE cell populations. We observed 22 groups of genes displaying concerted changes in expression across the 4 conditions. We found one network ('black') particularly interesting because it had the strongest correlation with any cell type (Fig. 6A). The genes comprising the 'black' module displayed a significant affinity for distal EpCAM+/ALDH− samples and a negative correlation with proximal EpCAM+/ALDH+ samples (Supplementary Figure 9). Pathway enrichment analysis indicates that the genes comprising this co-expression network are somewhat enriched for NF-κB signaling, as well as for cytokine and toll-like receptor signaling (Fig. 6B). To corroborate these findings, we performed an orthogonal enrichment analysis using Qiagen's Ingenuity Pathway Analysis tool (IPA).
Consistent with our GSEA and GO enrichment analysis, NF-κB signaling was up-regulated in distal EpCAM+ samples (Fig. 6C).

Distal Fallopian tube epithelium is enriched for gene sets characteristic of HGSC.

As has been mentioned, there is mounting evidence that a large fraction of HGSCs originate in the distal region of the TE. HGSC encompasses at least four main molecular subtypes, but it is not clear whether particular subtypes of HGSC are specifically associated with distal TE cell populations. Thus, we conducted differential expression analysis for each of the four main molecular subtypes (1 vs. the other 3) using HGSC count data available from TCGA35. GSEA finds each of these four gene sets is significantly up-regulated in the distal TE (Supplementary Figure 10). Finding that the distal TE displays an upregulation of genes associated with HGSC, we wondered if distal TE ALDH+/EpCAM+ populations might express the same four gene sets more than distal TE ALDH−/EpCAM+ populations. Performing GSEA with the same four HGSC gene sets indicates that distal TE ALDH+/EpCAM+ populations tend to express the HGSC-associated gene sets more highly, but only the gene sets corresponding to the Immunoreactive and Proliferative HGSC subtypes have an FDR-adjusted p-value below 0.05 (Fig. 7A-D).

Discussion

Recent work has provided substantial insight into the relationship between HGSC and the TE27. However, understanding the reason for the distal TE's susceptibility to malignant transformation remains challenging, and information about the proximal region of the TE remains sparse. Accordingly, we performed quantitative organoid and genomic studies comparing the proximal and distal regions of the TE. We observed a pronounced tendency towards organoid formation in distal compared to proximal bulk Fallopian tube patient samples. A cell population's organoid formation tends to reflect the capacity for self-renewal and proliferation of the stem/progenitor cells within that population. Thus, differences in organoid formation between proximal and distal Fallopian tube samples are likely indicative of differences between the stem/progenitor cells of the proximal and distal regions of the Fallopian tube. Our bulk organoid formation results therefore strengthen the notion that the distal TE's stem/progenitor cells, or their environment, differ in some way from those of the proximal region. These findings are consistent with observations that the distal region of the Fallopian tube more frequently contains putative HGSC precursor lesions3,6,10. Isolating stem/progenitor cells from more differentiated cells is a necessary prerequisite for understanding cell lineage dynamics in a variety of contexts. Using ALDH activity assayed by FACS/AldeRed, we observed that EpCAM+/ALDH+ populations contribute a larger fraction of a given tissue sample's organoids than the corresponding EpCAM+/ALDH− population. This leads us to conclude that ALDH activity is a reasonable heuristic for enriching TE cell isolates for putative stem/progenitor cells. We set out to understand how proximal and distal TE populations differ, and how these differences may help explain the evident tendency of the distal TE towards malignant transformation.
Gene ontology and gene set enrichment analysis data indicate that EpCAM+/ALDH+ populations (which we take to be enriched for stem/progenitor cells) employ canonical Wnt/β-Catenin signaling more extensively than cells in the (generally more differentiated) EpCAM+/ALDH− populations. Our confidence in this conclusion is strengthened by the presence of β-Catenin and TCF7 among the differentially expressed genes found between putative SC/progenitor and differentiated cell enriched populations. This conclusion is consistent with observations made by the Kessler group14. However, our observations of primary TE gene expression data did not find significant involvement of Notch signaling, which was previously identified as a requirement for maintaining long-term TE organoid cultures. This may indicate that human TE SCs rely on other mechanisms of Wnt signaling regulation in vivo. However, we cannot exclude the possibility that technical limitations inherent to our study obscured evidence of Notch signaling. We observe a significant enrichment of inflammatory genes in differentiated cell enriched populations from both the distal and proximal regions. IGF2 is present in follicular fluid and has recently been shown to promote malignant transformation in immortalized TE cell lines33. Follicular fluid is rich in inflammatory factors, and so we might expect NF-κB signaling, which frequently mediates the inflammatory response, to be upregulated in the distal TE. This expectation is fulfilled by our weighted gene co-expression network analysis and orthogonal Ingenuity Pathway Analysis, which both find increased NF-κB signaling in distal differentiated cell populations. NF-κB signaling is known to increase cellular proliferation and down-regulate P53 signaling36,37. Finding NF-κB signaling more active in primary human cell mRNA-seq data implicates NF-κB signaling in the distal TE's evident susceptibility to malignant transformation and provides new observational evidence supporting the incessant inflammation hypothesis. The pronounced inflammatory/NF-κB signaling in distal differentiated cell populations may lead to the formation of an altered niche, increasing the propensity of stem cells to malignant transformation. However, we also cannot exclude that the more differentiated cell populations of the distal TE may succumb to malignant transformation instead of less differentiated cell types. The origins of HGSC are of considerable relevance to human health. We sought to assess gene expression patterns in primary human TE cell populations, to see if we might discern similarities between a particular molecular subtype of HGSC and a particular region of the TE. We find that the distal TE is significantly enriched for gene sets characteristic of HGSC subtypes. This is consistent with histological studies which find that STICs occur more frequently in the distal region of the TE, though it does not help us determine which subtype a given STIC is likely to give rise to. While we are excited by these findings, we wish to stress some important limitations of our study. Though TNF family ligands are established regulators of NF-κB signaling, we do not observe significant differential expression of any TNF family genes. This may be addressed by analysis of stromally located immune cells, which may play a role in influencing the TE's inflammatory response. A second peculiar finding is the absence of enrichment for cell cycle control genes.
One would usually expect increased NF-κB signaling to be accompanied by decreased DNA damage response activity and so an eventual accumulation of mutations and genomic instability. We suspect our resolution is limited by the use of bulk mRNA-seq data and the heterogeneity of epithelial cell populations in the TE. We believe that this same lack of resolution prevents us from discerning the extent to which cell number and/or cell response to genotoxic stress makes distal TE stem cells more prone to malignant transformation. Future single-cell studies will complement our current observations, garner important insight into HGSC's pathogenesis, and facilitate development of new approaches for its diagnosis, prevention and treatment.

Methods

A total of 5 × 10^4 cells were suspended in 3D media and mixed with growth factor reduced, Phenol Red-free Matrigel (Corning, catalogue #356231) at a ratio of 30:70. This mixture was gently spread around the rim of a 12-well plate (rim assay). The plates were incubated for 20 minutes at 37 °C in a 5% CO2 incubator. Once the Matrigel had solidified, 500 µl of 3D media was added to the rim assays and incubated at 37 °C. 10 µM p38 inhibitor (p38i; MilliporeSigma, catalogue #S7067-5MG) was added to the 3D media for the first 4 days and discontinued thereafter. The media was changed every second day and, depending on the culture density, the rim assay was passaged every 12 days. Centrifugation was carried out at 4 °C and 300×g unless otherwise noted. Statistical analysis of organoid data was performed with an unpaired Student's t-test. All data are represented in the figures as mean ± SD. A difference was considered statistically significant at a value of P < 0.05.

Immunofluorescence analysis of organoids. Immunofluorescence analysis of paraformaldehyde-fixed, paraffin-embedded or frozen organoids was carried out using modified, previously established protocols38,39. Briefly, at 22 °C the culture medium from individual organoid rim assays was removed without disturbing the organoid/Matrigel rim mixture. The assay plate was placed on ice and 1 ml of ice-cold Fixation buffer was added for 3.5 hours. Fixation buffer consists of 4% paraformaldehyde in 1× PME buffer. A 10× PME buffer consists of 500 mM 1,4-piperazinediethanesulfonic acid (PIPES; Bioworld, Dublin, USA, catalogue #41620140-1), 25 mM magnesium chloride, and 0.5 M ethylenediaminetetraacetic acid (EDTA; MilliporeSigma, catalogue #AM9260G). After fixation, work continued at 22 °C. The Fixation buffer was removed from the middle of the wells, followed by the addition of 0.5 ml PBS supplemented with 0.2% Triton X-100 (MilliporeSigma, catalogue #T8787-50ML) and 0.05% Tween (MilliporeSigma, catalogue #T2700-100ML). The organoid suspension was collected in a 1.7 ml centrifuge tube. Wide-bore yellow tips were used from this point. Organoids were centrifuged at 300×g at 4 °C, washed three times with PBS/0.2% Triton X-100/0.05% Tween, and once with PBS. The organoid pellet was suspended for dehydration in 600 µl 70% ethanol and incubated overnight at 22 °C. The next day, the organoid pellet was resuspended (removing as much 70% ethanol as possible) in 50 µl of melted Histogel (ThermoFisher, catalogue #17985-50). The suspension, forming a droplet, was pipetted onto a Parafilm-lined petri dish and solidified at 4 °C for 10 minutes. The solidified Histogel droplet containing organoids was stored in 70% ethanol and later processed for paraffin embedding.
The organoids were sectioned at 4 µm thickness and subjected to immunofluorescence staining using xylene deparaffinization and serial rehydration through a graded ethanol series. Antigen retrieval was performed using 10 mM sodium citrate buffer at pH 6.0 for 10 minutes. The primary antibodies against PAX8 (Abcam, Cambridge, UK, catalogue #ab189249), ALDH1A1 (Abcam, Cambridge, UK, catalogue #ab52492), and ACTUB (Sigma, St. Louis, USA, catalogue #T6793) were incubated in a humidified chamber overnight, followed by incubation with secondary antibodies (Donkey anti-Rabbit IgG (H + L) and Donkey anti-Mouse IgG (H + L) Alexa Fluor 488) for 1 hour at room temperature. Sections with no primary antibody served as negative controls. The stained sections were mounted in ProLong Diamond Antifade Mountant with DAPI reagent (Thermo Fisher). Confocal images were acquired using a Zeiss LSM 710 confocal microscope through the Cornell University Biotechnology Resource Center. The image data were merged and displayed with the ZEN software (Zeiss).

Preparation and collection of human TE FACS samples. Human Fallopian tube samples were removed from liquid nitrogen and thawed at 37 °C for 3 minutes before being removed from the cryo-preservation vial and rinsed 3 times with 15 ml 1× PBS. Each sample was then dissected and minced to reveal as much of the mucosa as possible; any coagulated blood was scraped away. Samples were then incubated at 37 °C for 45 minutes in Digestion Buffer, with shaking every 10 minutes. Samples were then collected by centrifugation, placed in 2D Culture media and mechanically dissociated using a 5 ml serological pipette. Sample fragments were then ground with a mortar and pestle over a 300 µm filter before being further dissociated with 5 strokes of a loose Wheaton Dounce homogenizer. Samples were successively filtered through 100, 70, and 40 µm mesh filters before being collected by centrifugation and re-suspended in 2D FACS media [Advanced DMEM/F12, supplemented with 1% N2, 2% B27, 1 mM nicotinamide, 1 mM N-acetyl-L-cysteine (MilliporeSigma, catalogue #A9165-25G), 10 µM ROCKi, and 100 units ml−1/100 µg ml−1 penicillin-streptomycin (PS)]. For detection of ALDH enzymatic activity, sample cells were suspended in the AldeRed Assay Buffer and processed for staining with the AldeRed ALDH Detection Assay (MilliporeSigma, catalogue #SCR150) according to the manufacturer's protocol. At this point, roughly 250,000 cells were set aside for the diethylaminobenzaldehyde (DEAB, ALDH inhibitor), EpCAM, and compensation controls. The DEAB control was prepared according to the manufacturer's instructions as well. Samples/isotype controls were stained with EpCAM antibody/conjugated isotype control for 1 hour at 5 °C according to the manufacturer's instructions. Appropriate sample suspensions were stained with SYTOX Blue prior to sorting on a BD FACSAria III using 450/50, 610/20, and 696/40 filters. Sorted cells were collected directly into 750 µl TRIzol-LS (Fisher Scientific) as described40.

ALDH activity segregated organoid culture.
Approximately 3 × 10^5 viable EpCAM+/ALDH+ and EpCAM+/ALDH− cells were collected by FACS into the FACS media described above. Collected cells were recovered by centrifugation at 300×g for 15 min at 4 °C. Most of the remaining liquid was decanted, and roughly 50× the remaining volume in Matrigel was added to the sample and gently mixed by pipetting. 20-30 µl droplets were then plated and allowed to set for 30-40 minutes before the addition of 250 µl T-media. Media was changed every two days. Cultures were passaged every week. Passaging was done by dissociating the organoids by pipetting in ice-cold 3D media containing p38i. Organoid cultures were then re-plated as described above, and dissociation to single cells was verified using bright-field microscopy.

mRNA-seq library preparation and data pre-processing. As above, each mRNA-seq library was prepared from cells originating in a single Fallopian tube fragment. 3′ mRNA-seq libraries containing unique molecular identifiers (UMIs) were prepared using Lexogen's QuantSeq Kit (Lexogen, Vienna, Austria, catalogue #015.24, #081.96) according to the low-input protocol. Optimal barcodes were assigned to each sample by Lexogen's Index Balance Checker webtool (https://www.lexogen.com/support-tools/index-balance-checker/). Libraries were pooled and sequenced on an Illumina NextSeq 500 after undergoing QC on an Agilent Fragment Analyzer. De-multiplexed FASTQ files were inspected for quality using FASTQC41. Reads were aligned to GRCh38 using the STAR two-pass method42. UMI-tools43 was then applied to remove duplicate reads based on their UMIs. Quality score and base re-calibration were then performed according to the Genome Analysis Toolkit best practices for mRNA-seq, version 3.7. Sample identity was then verified using NGSCheckMate25.

Bioinformatics analysis. For single-cell analysis, a read count matrix was downloaded from Gene Expression Omnibus (GSE132149) and processed using Scanpy44 with an approach similar to those previously described45. The data were batch corrected using BBKNN46 (trim = 50) and visualized using Uniform Manifold Approximation and Projection47 (Supplementary Figure 11). Louvain clustering48-50 (r = 1.25) was used to segregate cell clusters. SingleR51 and data from the Human Primary Cell Atlas52 were used to identify the cell types corresponding to those clusters. Deconvolution of bulk mRNA-seq samples was performed using BSEQsc26, and quasi-likelihood F-tests to determine whether T-cell or smooth muscle contamination accounted for a statistically significant amount of variation in any gene's expression were implemented in edgeR53. For bulk mRNA-seq, a raw read count matrix was generated using the featureCounts function of the Rsubread R package54. Background and technical noise were reduced using the RUVSeq R package55 before differential expression analysis. Read count normalization and gene differential expression calls were made with DESeq229. Gene and Disease Ontology enrichment analysis was carried out using the clusterProfiler and DOSE R packages30,56. Gene Set Enrichment Analysis (GSEA) was performed using the GSEApy python package28,57. Weighted Gene Co-expression Network Analysis (WGCNA) was performed using the WGCNA R package34,58. Ingenuity pathway analysis was done using the 1,000 most divergently expressed genes between all proximal EpCAM+ samples and all distal EpCAM+ samples, using 'epithelial pathways' as a background set.
For the TCGA OV analysis, gene sets typifying the four main molecular sub-types of HGSC were derived by differential expression analysis of each subtype against the other three, using TCGA HGSC count data as described in the Results.
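The subtype gene-set enrichment step can be sketched with the GSEApy package named in the Methods. The four-gene ranked list below is a placeholder (a real run ranks the full transcriptome by a differential-expression statistic, e.g. a DESeq2 Wald statistic), and the .gmt file name is hypothetical, standing in for the TCGA-derived subtype signatures:

```python
# Sketch of pre-ranked GSEA with GSEApy; inputs are placeholders, not study data.
import pandas as pd
import gseapy as gp

rnk = pd.Series({"ROS1": 4.2, "IGF2": 3.1, "TCF7": -0.5, "CTNNB1": -1.8},
                name="stat").sort_values(ascending=False)

res = gp.prerank(rnk=rnk,
                 gene_sets="hgsc_subtype_signatures.gmt",  # hypothetical file
                 permutation_num=1000,
                 outdir=None,   # keep results in memory only
                 seed=7)
print(res.res2d[["Term", "NES", "FDR q-val"]])
```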
New generation protein supplement in combined feeds for broiler chickens

In experiments on broiler chickens of the SGC "Smena" selection, it was found that the partial replacement of fish meal with an additive based on a protein of microbial origin, containing 68.3% crude protein, provides a live weight of chickens comparable to the control with high poultry safety, owing to good protein digestibility and availability of amino acids. A comprehensive assessment of broiler productivity according to the productivity index showed that replacing 50% of the fish meal with microbial protein increased the productivity index of chickens of the second group by 4.78 points in comparison with the control. The complete replacement of fish meal in the third experimental group was less effective, as evidenced by a decrease in the productivity index to 303.91 points compared with 308.73 in the control. Analysis of the chemical composition of the liver of 35-day-old broilers showed that the protein content in the liver of chickens of the second and third groups was higher than the control by 1.99 and 0.58%, while there was no increase in fat content in the groups receiving the protein supplement. The crude fat content was 11.38 and 12.20% versus 13.0% in the control, which indirectly indicates the absence of a cytotoxic effect on liver cells from the use of microbial protein at the studied dosages.

Introduction

The use in feed production of protein sources based on non-traditional feed means, such as low-alkaloid lupine, rapeseed, peas, etc. [1,2,3,4], makes it possible to reduce the use of soybeans and products of their processing in poultry rations and to reduce the inclusion of animal feed in compound feeds [5,6]. Protein products obtained with the use of improved technologies of microbiological synthesis [6,7,8,9], which replaced traditional technologies for producing microbial protein from hydrocarbon raw materials, are also worth noting. Despite the high production volumes of paprine, eprin and haprin for the needs of agriculture in the 1990s, the industrial production of microbial protein obtained from hydrocarbon raw materials at Russian biochemical plants was stopped because imperfect purification technologies led to pollution of the environment and atmosphere with protein dust and cells of producer microorganisms. Currently, the production in Russia of haprin, meprin and eprin using traditional technologies is practically absent; among the products of microbiological synthesis, the feed industry mainly uses fodder yeast, obtained by the microbiological method on various nutrient media from the waste of timber-processing, sulfite-cellulose and alcohol production [9]. The purpose of these studies was to examine the possibility of using a functional protein supplement, obtained from plant resources via microbiological synthesis technologies, to replace animal feed in compound feed for broiler chickens.

Materials and methods

To accomplish this task, an experiment was carried out on broiler chickens of the SGC "Smena" selection from one day old to 35 days old in the vivarium of the SGC "Zagorskoye", Federal Scientific Center All-Russian Research and Technological Poultry Institute RAS. Broilers were kept in an R-15 cage battery.
The groups were formed by the method of analogs by live weight, without division by sex, with 35 birds per group. Chickens received dry, full-feed, loose mixed feed, with nutritional value according to the 2019 standards of the Federal Scientific Center All-Russian Research and Technological Poultry Institute, ad libitum, according to the scheme shown in Table 1.

Table 1. Scheme of the broiler experiment.
Group 1 (control): nutrient-balanced compound feed containing fish meal.
Group 2: compound feed, balanced in all nutrients, with 50% of the fish meal replaced by the protein supplement during the entire growing period.
Group 3: compound feed, balanced in all nutrients, with 50% of the fish meal replaced by the protein supplement from 3 to 21 days and 100% of the fish meal replaced from 22 to 35 days of growing.

During the first three days, broilers of all groups received pre-starter compound feed of the same nutritional value, in the form of granules with a particle diameter of 0.9 to 1.2 mm. Balance experiments to determine the digestibility and availability of nutrients in the diet were carried out on males (n = 3 from each group) at 30-33 days. The mean values (M) and mean errors (± m) were calculated for the counted and measured indicators. The significance of differences was assessed by Student's t-test; differences were considered significant at P ≤ 0.05. The analysis of the chemical and amino acid composition of the protein product, feed and litter, and of calcium and phosphorus in the tibia of broilers at the end of the experiment was carried out by specialists of the Department of Physiology and Biochemical Analysis of the Federal Research Center "Federal Scientific Center All-Russian Research and Technological Poultry Institute" RAS.

Results and discussion

The investigated additive is a fodder protein mixture of a fermentolysate of plant raw materials and the biomass of lactic acid and propionic acid bacteria, obtained by bioconversion using non-pathogenic strains of a consortium of microorganisms. Its chemical and amino acid composition is shown in Table 3. Due to the high content of non-protein nitrogen in the product (3.41%), the complete replacement of fish meal for broilers of the third group was carried out only in the final growing period. The results of the experiment (Table 4) showed that at 14 days of age the live weight of the chickens of the second and third experimental groups, which received microbial protein instead of 50% of the fish meal, was higher than the control by 4.16 and 3.98% (significant at P ≤ 0.05); at 21 days of age the advantage of the experimental groups in live weight over the control was 1.06 and 0.71% (the difference is not significant). In the final period of rearing, the chickens of the second experimental group, which received the protein supplement in place of 50% of the fish meal (2% by weight of the compound feed), retained a higher growth rate than the control poultry. The live weight of the chickens in this group was 1.3% higher than the control (not statistically significant); the live weight of the males was 2.4% higher than the control, and of the females, 0.5% higher. The use of the protein supplement reduced feed costs per 1 kg of live weight gain by 0.17% in comparison with the control.
Complete replacement of fish meal with the additive in mixed feed for chickens of the third experimental group in the final period of growing provided a live weight comparable to the control; however, the males of the third group lagged behind the control by 1.2%, and the females by 1.4% (the difference is not significant). Feed conversion also deteriorated by 1.36%. There were no significant differences in the viability of poultry among the studied groups. The safety of the flock in all groups at the end of cultivation was high, at 100%. A comprehensive assessment of broiler productivity according to the productivity index showed that replacing 50% of the fish meal with the additive made it possible to increase the productivity index of the chickens of the second group by 4.78 points in comparison with the control. The complete replacement of fish meal in the third experimental group was less effective, as evidenced by a decrease in the productivity index to 303.91 points compared with 308.73 in the control. The data of the balance experiment presented in Table 3 are consistent with the zootechnical results obtained. Thus, the digestibility of protein, dry matter of feed and fat, the use of nitrogen and calcium, and the availability of lysine and methionine were better in broilers of the second experimental group, by 0.26, 1.72, 1.53, 1.1 and 0.7%, compared with the control. Fiber digestibility and methionine availability were at the control level. Complete replacement of fish meal in the final growing period in broilers of the third experimental group led to a slight decrease in protein digestibility, feed nitrogen use and lysine availability, by 0.37, 1.4 and 3.6%. The digestibility of fat, dry matter of feed and fiber, and the availability of methionine in chickens of the third group were comparable to the control. There were no significant differences between the control and experimental groups in the content of crude ash or in the deposition of calcium and phosphorus in the skeleton of the broilers. The indicators corresponded to the physiological norm and the direction of poultry productivity. Analysis of the chemical composition of the liver of 35-day-old broilers (Table 4) showed that the protein content in the liver of chickens of the second and third groups was higher than the control by 1.99 and 0.58%, while there was no increase in fat content in the groups receiving the protein supplement. The crude fat content was 11.38 and 12.20% versus 13.0% in the control, which indirectly indicates the absence of a cytotoxic effect on liver cells from the use of microbial protein at the studied dosages. In terms of vitamin E storage in the liver, the chickens of the second and third experimental groups exceeded the control by 15.21 and 20.09% (the difference is significant); the content of vitamin B2 was at the control level. In the liver of chickens of the third experimental group, a significant deterioration in the deposition of vitamin A, by 18.8%, was noted. Poultry meat is mainly a protein food and one of the important dietary sources of fat. Analysis of the chemical and amino acid composition of the meat of the experimental broilers (Table 5) showed that partial or complete replacement of fish meal with the microbial-protein additive in the feed of chickens of the second and third experimental groups could not fully compensate for the fish meal protein.
In terms of protein content, the meat of the experimental broilers was inferior to the control by 4.57 and 5.07%. The content of non-essential and essential amino acids also decreased, by 4.22-4.26% and 1.19-1.28%, respectively. It should be noted that the fat content in the broiler meat was low, in the range of 1.84-2.04%, versus 1.82% in the control. There were no significant differences from the control in crude ash content. A tasting evaluation revealed no differences in the taste of the meat and broth between the control and experimental groups.

Conclusion

Thus, based on the data obtained, it can be assumed that, in terms of bioavailability and productive effect on poultry, the new-generation protein supplement can be used to partially replace feeds of animal origin in compound feed for broilers. The rational level of its inclusion in compound feed for broiler chickens up to 21 days of age is 2% by weight of the compound feed; from the 22nd day of age the inclusion level can be increased to 4% by weight of the compound feed.
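The productivity index used in the assessment above is not defined in the text. As a hedged illustration, the sketch below assumes the European Production Efficiency Factor (EPEF), a common definition that reproduces values in the reported ~300-point range; the inputs are hypothetical.

```python
# The paper reports a broiler "productivity index" of roughly 300 points but
# does not give its formula; the European Production Efficiency Factor (EPEF)
# is a common definition and is used here as an assumption.
def epef(livability_pct: float, live_weight_kg: float,
         age_days: int, fcr: float) -> float:
    """EPEF = (livability % * live weight, kg) / (age, days * FCR) * 100."""
    return livability_pct * live_weight_kg / (age_days * fcr) * 100

# Hypothetical inputs consistent with the reported 100% flock livability,
# 35-day growing period and a control index of about 308 points.
print(round(epef(100.0, 2.16, 35, 2.0), 2))  # -> 308.57
```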
COVID19 Disease Map, a computational knowledge repository of virus–host interaction mechanisms

Abstract

We need to effectively combine the knowledge from surging literature with complex datasets to propose mechanistic models of SARS-CoV-2 infection, improving data interpretation and predicting key targets of intervention. Here, we describe a large-scale community effort to build an open access, interoperable and computable repository of COVID-19 molecular mechanisms. The COVID-19 Disease Map (C19DMap) is a graphical, interactive representation of disease-relevant molecular mechanisms linking many knowledge sources. Notably, it is a computational resource for graph-based analyses and disease modelling. To this end, we established a framework of tools, platforms and guidelines necessary for a multifaceted community of biocurators, domain experts, bioinformaticians and computational biologists. The diagrams of the C19DMap, curated from the literature, are integrated with relevant interaction and text mining databases. We demonstrate the application of network analysis and modelling approaches by concrete examples to highlight new testable hypotheses. This framework helps to find signatures of SARS-CoV-2 predisposition, treatment response or prioritisation of drug candidates. Such an approach may help deal with new waves of COVID-19 or similar pandemics in the long-term perspective.

Introduction

The coronavirus disease 2019 (COVID-19) pandemic due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has already resulted in the infection of over 250 million people worldwide, of whom almost 5 million have died (https://covid19.who.int, accessed on 05.10.2021). This global challenge motivated researchers worldwide to an unprecedented effort towards understanding the pathology to treat and prevent it. To date, over 170 thousand articles have been published in relation to COVID-19 (PubMed query "covid-19[Title/Abstract] or sars-cov-2[Title/Abstract]", accessed on 01.07.2021). The reported molecular pathophysiology that links SARS-CoV-2 infection to the clinical manifestations and course of COVID-19 is complex and spans multiple biological pathways, cell types and organs (Gagliardi et al, 2020). Resources such as the Protein Data Bank repository of viral protein structures (preprint: Lubin et al, 2020) or the IMEx coronavirus interactome offer detailed information about particular viral proteins and their direct binding partners. However, the scope of this information is limited. To gain insight into the large network of molecular mechanisms, knowledge from the vast body of scientific literature and bioinformatic databases needs to be integrated using systems biology standards. A repository of such computable knowledge will support data analysis and predictive modelling. With this goal in mind, we initiated a collaborative effort involving over 230 biocurators, domain experts, modellers and data analysts from 120 institutions in 30 countries to develop the COVID-19 Disease Map (C19DMap), an open access collection of curated computational diagrams and models of molecular mechanisms implicated in the disease. The C19DMap is a constantly evolving resource, refined and updated by ongoing biocuration, sharing and analysis efforts. Currently, it is a collection of 42 diagrams containing 1,836 interactions between 5,499 elements, supported by 617 publications and preprints. The summary of diagrams available in the C19DMap can be found online (https://covid.pages.uni.lu/map_contents) and in Table EV1.
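As a small illustration of how such element and interaction counts can be obtained programmatically, the sketch below tallies species and reactions in a single diagram exported as SBML, using python-libsbml; the file name is a hypothetical placeholder.

```python
# Sketch: tally elements and interactions in one C19DMap diagram stored as
# SBML, using python-libsbml (pip install python-libsbml). The path below is
# a placeholder for any diagram exported from the map.
import libsbml

doc = libsbml.readSBML("interferon_type_I_signalling.xml")  # hypothetical file name
model = doc.getModel()
if model is None:
    raise RuntimeError("could not parse the SBML file")

print("species (elements):", model.getNumSpecies())
print("reactions (interactions):", model.getNumReactions())
print("compartments:", model.getNumCompartments())
```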
In this article, we explain the effort of our multidisciplinary community to construct the interoperable content of the resource, involving biocurators, domain experts and data analysts. We introduce the scope of the C19DMap and the insight it brings into the crosstalk and regulation of COVID-19-related molecular mechanisms. Next, we outline analytical workflows that can be used on the contents of the map, including the initial outcomes of two case studies. We conclude with a discussion on the utility and perspectives of the C19DMap as a disease-relevant computational repository.

An interoperable repository of comprehensive and computable diagrams

We constructed a comprehensive diagrammatic description of disease mechanisms in a way that is both human- and machine-readable, lowering communication barriers between experimental and computational biologists. To this end, we aligned the biocuration efforts of the Disease Maps Community, Reactome (Jassal et al, 2020) and WikiPathways (Slenter et al, 2018) and developed guidelines for building and annotating these diagrams. In addition, we integrated relevant knowledge from public repositories (Perfetto et al, 2020; Rodchenkov et al, 2020; Türei et al, 2021) and text mining resources to update and refine the contents of the C19DMap based on other knowledge-building efforts. This work resulted in a series of pathway diagrams constructed de novo, describing key events in the COVID-19 infectious cycle and host response. The C19DMap project involved three main groups of participants: the biocurators, the domain experts, and the analysts and modellers. Biocurators developed a collection of systems biology diagrams focused on the molecular mechanisms of SARS-CoV-2. Domain experts refined the contents of the diagrams using interactive visualisation and annotations. Analysts and modellers developed computational workflows to generate hypotheses and predictions about the mechanisms encoded in the diagrams. Figure 1 illustrates the ecosystem of the C19DMap Community, highlighting the roles of the participants, available format conversions, interoperable tools and downstream uses. The community members and their contributions are listed on FAIRDOMHub (Wolstencroft et al, 2017).

Creating and accessing the diagrams

The biocurators of the C19DMap diagrams followed the guidelines developed by the Community, WikiPathways (Slenter et al, 2018) and Reactome (Jassal et al, 2020), based on systems biology standards (Le Novère et al, 2009; Demir et al, 2010; Keating et al, 2020) and persistent identifiers (Wimalaratne et al, 2018). The diagrams are composed of biochemical reactions and interactions (altogether called interactions) between different molecular entities in various cellular compartments. As multiple teams worked on related topics, biocurators reviewed other diagrams, also across platforms (see also Materials and Methods). The diagrams are accessible online and can be explored using an intuitive user interface. Table 1 summarises information about the curated diagrams, and Table EV1 lists the diagrams and provides links to access them.

Enrichment using knowledge from databases and text mining

The knowledge of COVID-19 mechanisms is rapidly evolving, as shown by the growth of the COVID-19 Open Research Dataset (CORD-19), a source of manuscripts and metadata on COVID-19-related research (preprint: Lu Wang et al, 2020). CORD-19 currently contains almost 480,000 articles and preprints, over ten times more than when it was introduced more than a year ago (accessed on 05.10.2021).
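Given this pace of growth, curators must triage the literature. The sketch below is a hedged illustration of one such triage step, ranking text-mined interaction candidates by the number of distinct supporting references, anticipating the assisted-biocuration procedures described next; the table schema and entries are illustrative assumptions, not a specific tool's output.

```python
# Sketch of one assisted-biocuration step: rank text-mined interaction
# candidates by the number of distinct supporting references, so curators
# review the best-evidenced candidates first. Column names and rows are
# hypothetical placeholders for a generic text-mining export.
import pandas as pd

hits = pd.DataFrame({
    "source": ["ACE2", "TMPRSS2", "ACE2", "STAT1", "ACE2"],
    "target": ["S",    "S",       "S",    "ISG15", "AGT"],
    "ref":    ["ref1", "ref2",    "ref3", "ref4",  "ref5"],  # placeholder reference IDs
})

ranked = (hits.groupby(["source", "target"])["ref"]
              .nunique()                       # distinct supporting references
              .rename("evidence")
              .reset_index()
              .sort_values("evidence", ascending=False))
print(ranked)
```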
In such a quickly evolving environment, manual biocuration needs to be supported by automated procedures to identify and prioritise crucial articles, molecules and their interactions to be included in the C19DMap. Potential knowledge sources for such assisted biocuration are interaction and pathway databases, especially those with dedicated COVID-19 content (Perfetto et al, 2020). Their structured and annotated information on protein interactions or causal relationships was generated using separate biocuration guidelines and formats. Nevertheless, their comparable identifiers and references to source publications make them plausible building blocks for constructing the C19DMap (see Materials and Methods). Text mining approaches are another source of information that can direct the biocurators towards the most recent and relevant findings. They automatically extract and annotate biomolecule names and their interactions from abstracts, full-text documents or pathway figures (Bauch et al, 2020; Hanspers et al, 2020). Networks of molecule interactions constructed by text mining can carry substantially more noise than the contents of interaction databases but offer broader literature coverage.

(Figure 1 legend: the main groups of the C19DMap Community are biocurators, domain experts, and analysts and modellers, communicating to refine, interpret and apply C19DMap diagrams. These diagrams are created and maintained by biocurators, following pathway database workflows or stand-alone diagram editors, and reviewed by domain experts. The content is shared via pathway databases or a GitLab repository; all can be enriched by integrated resources of text mining and interaction databases. The C19DMap diagrams are available in several layout-aware systems biology formats and integrated with external repositories, allowing a range of computational analyses, including network analysis and Boolean, kinetic or multiscale simulations.)

Table 2 summarises open access interaction databases and text mining knowledge bases supporting the biocuration of the C19DMap. Molecular interactions from these sources have a broad coverage at the cost of depth of mechanistic representation. The biocurators used this content to build and update the map by manual exploration or by programmatic comparison. First, the biocurators visually explored the contents of such networks using available search interfaces to identify interactions of interesting molecules and encoded them in the diagrams. This task was supported by a dedicated visualisation tool, COVIDminer (https://rupertoverall.net/covidminer). The biocurators also used assistant chatbots that respond to natural language queries and return structured results extracted from the text mining workflows (see Materials and Methods).

Interoperability of the diagrams and annotations

The biocuration of the C19DMap diagrams was distributed across multiple teams, using varying tools and associated systems biology representations. This required a common approach to annotations of diagram elements and their interactions. Additionally, to compare and combine the diagrams in the C19DMap, interoperability of layout-aware formats was needed. The diagrams were encoded in three layout-aware formats for standardised representation of molecular interactions: SBML, SBGNML and GPML. All three formats, centred around molecular interactions, provided a constrained vocabulary for element and interaction types, encoded the layout of the corresponding diagrams and supported stable identifiers for diagram components.
These shared properties, supported by a common ontology (Courtot et al, 2011), allowed cross-format translation of the diagrams, which was essential for harmonising the effort between biocuration platforms. The ecosystem of tools and resources supporting the C19DMap (see Fig 1) ensured interoperability between SBML, SBGNML and GPML via translation, preserving the diagram layout (Bohler et al, 2016; Balaur et al, 2020; Hoksza et al, 2020) for harmonised visualisation of diagrams. Additionally, these diagrams were transformed into inputs of computational pipelines and data repositories, allowing network analysis, pathway modelling and interoperability with molecular interaction repositories (Pillich et al, 2017) (see Materials and Methods).

Structure and scope of the COVID-19 Disease Map

The C19DMap was built bottom-up, exploiting a rich bioinformatics framework discussed in Section "An interoperable repository of comprehensive and computable diagrams" of the Results, based on knowledge from existing studies of other coronaviruses (Fung & Liu, 2019) and contextualised with data emerging from studies of SARS-CoV-2 (Gordon et al, 2020). The contents of the C19DMap are available online, summarised in a constantly updated overview at https://covid.pages.uni.lu/map_contents (see also Table EV1). Currently, the C19DMap focuses on molecular processes involved in SARS-CoV-2 entry and replication and host-virus interactions (see Fig 2). Emerging scientific evidence of host susceptibility, immune response, cell and organ specificity will be incorporated into the next versions in accordance with our curation roadmap (https://fairdomhub.org/documents/907). While the interactions of SARS-CoV-2 with various host cell types are vital determinants of COVID-19 pathology (Hui et al, 2020; Mason, 2020; Ziegler et al, 2020), the current C19DMap represents an infection of a generic host cell. Several pathways included in the map are shared between different cell types; for example, the IFN-1 pathway is active in dendritic and lung epithelial cells and in alveolar macrophages (Hadjadj et al, 2020; Lee & Shin, 2020; Sa Ribero et al, 2020). Continued annotations of emerging expression datasets (Delorey et al, 2021) and other sources of information will allow the construction of cell-specific versions of the C19DMap to provide an integrated view of the effects of SARS-CoV-2 on the human organism. An example workflow to construct such a focused version of the map was proposed in Section "Case study: analysis of cell-specific mechanisms using single-cell expression data".

SARS-CoV-2 infection and COVID-19 progression are sequential events that start with viral attachment and entry (Fig 3). These events involve various dynamic processes and different timescales that are not captured in static representations of pathways. The correlation of symptoms and potential drugs suggested to date helps downstream data exploration and drug target interpretation in the context of therapeutic interventions.

(Figure 3 legend: the figure summarises the main sections and content of the C19DMap by illustrating the progressive but overlapping mechanisms at different levels and study features of the disease, intended as quick references for the map. A: cellular level (light yellow), the immune response (blue) and other systemic responses (red) of the host following SARS-CoV-2 infection. B: the progression of pathophysiology from tissue damage to organ damage and multiple organ dysfunction in severe cases. C: clinical manifestations, depending on the severity of the infection, from asymptomatic to critical COVID-19. D: potential intervention strategies that may be suggested based on the analysis of the C19DMap before, during and after infection, depending on the type and target of the intervention.)

Viral attachment is mediated by the binding of the viral Spike (S) protein to ACE2 and, possibly, other receptors (preprint: Amraei et al, 2020; preprint: Gao et al, 2020). Viral entry occurs either by direct fusion of the virion with the cell membranes or by endocytosis (Hoffmann et al, 2020a; Xia et al, 2020) of the virion membrane and the subsequent injection of the nucleocapsid into the cytoplasm. Within the host cell, the C19DMap depicts how SARS-CoV-2 hijacks the rough endoplasmic reticulum (RER)-linked host translational machinery for its replication (Chen et al, 2010; Angelini et al, 2013; Nakagawa et al, 2016; V'kovski et al, 2019). The RER-attached translation machinery produces structural proteins, which, together with the newly generated viral RNA, are assembled into new virions and released to the extracellular space via smooth-walled vesicles (Nakagawa et al, 2016) or hijacked lysosomes (Ghosh et al, 2020). These mechanisms are illustrated in the diagrams of the "Virus replication cycle" section in Table EV1: "Attachment and entry", "Transcription, translation and replication" and "Assembly and release".

Viral subversion of host defence

Endoplasmic reticulum (ER) stress results from the production of large amounts of viral proteins that create an overload of unfolded proteins (Krähling et al; DeDiego et al, 2011; Fukushi et al, 2012). The mechanisms of the unfolded protein response (UPR) include the mitigation of the misfolded protein load by reduced protein synthesis and increased protein degradation (Sureda et al, 2020) through the ubiquitin-proteasome system (UPS) and autophagy (Choi et al, 2018; Bello-Perez et al, 2020). SARS-CoV-2 may perturb the process of UPS-based protein degradation via the interaction of the viral Orf10 protein with the Cul2 ubiquitin ligase complex and its putative substrates (Gordon et al, 2020; Zhang et al, 2020). The involvement of SARS-CoV-2 in autophagy is less documented (Yang & Shen, 2020). The increased burden of misfolded proteins due to viral replication and subversion of mitigation mechanisms may trigger programmed cell death (apoptosis). The C19DMap encodes major signalling pathways triggering this final form of cellular defence against viral replication (Diemer et al, 2010). Many viruses block or delay cell death by expressing anti-apoptotic proteins to maximise the production of viral progeny (Kanzawa et al, 2006; Liu et al, 2007) or induce it in selected cell types (Diemer et al, 2010; Chu et al, 2016; preprint: Chen et al, 2020b). These mechanisms are illustrated in the diagrams of the "Viral subversion of host defence" section in Table EV1: "ER stress and unfolded protein response", "Autophagy and protein degradation" and "Apoptosis".

Host integrative stress response

Severe acute respiratory syndrome coronavirus 2 infection damages the epithelium and the pulmonary capillary vascular endothelium (Bao et al, 2020), impairing respiration and leading to acute respiratory distress syndrome (ARDS) in severe forms of COVID-19. The release of pro-inflammatory cytokines and hyperinflammation are known complications, causing further widespread damage (Chen et al, 2020a; Lucas et al, 2020). Coagulation disturbances and thrombosis are associated with severe cases, but specific mechanisms have not been described yet (Iba et al, 2020; Klok et al, 2020).
Nevertheless, it was shown that SARS-CoV-2 disrupts the coagulation cascade and causes renin-angiotensin system (RAS) imbalance (Magro et al, 2020; Urwyler et al, 2020). Angiotensin-converting enzyme 2, used by SARS-CoV-2 for host cell entry, is a regulator of RAS and is widely expressed in the affected organs. The diagrams in the repository describe how ACE2-converted angiotensins trigger the counter-regulatory arms of RAS and the downstream signalling via AGTR1, regulating the coagulation cascade (Gheblawi et al, 2020; McFadyen et al, 2020). These mechanisms are illustrated in the diagrams of the "Integrative stress response" section in Table EV1: "Renin-angiotensin system" and "Coagulopathy".

Host immune response

The innate immune system detects specific pathogen-associated molecular patterns through pattern recognition receptors (PRRs) that recognise viral RNA in the endosome during endocytosis or in the cytoplasm during virus replication. The PRRs activate associated transcription factors promoting the production of antiviral proteins such as interferon-alpha, interferon-beta and interferon-lambda (Takeuchi & Akira, 2010; Berthelot & Lioté, 2020; Blanco-Melo et al, 2020; Hadjadj et al, 2020; Park & Iwasaki, 2020). SARS-CoV-2 impairs this mechanism (Chu et al, 2020), but the exact components are yet to be elucidated (Liao et al, 2005; Devaraj et al, 2007; Frieman et al, 2007; Li et al, 2016; Bastard et al, 2020). The C19DMap includes both the virus recognition process and the viral evasion mechanisms. It provides the connection between virus entry, its replication cycle, and the effector pathways of pro-inflammatory cytokines, especially of the interferon type I cascade (Wong et al, 2018; Mesev et al, 2019; Mantlo et al, 2020; Su & Jiang, 2020; Thoms et al, 2020; Ziegler et al, 2020). Key metabolic pathways modulate the availability of nutrients and critical metabolites of the immune microenvironment (Rao et al, 2019). They are a target of infectious agents that reprogram host metabolism to create favourable conditions for their reproduction (Kedia-Mehta & Finlay, 2019). The C19DMap encodes several immunometabolic pathways and provides detailed information about the way SARS-CoV-2 proteins interact with them. The metabolic pathways include haem catabolism (Batra et al, 2020) and its downstream target, the NLRP3 inflammasome (van den Berg & Te Velde, 2020), tryptophan-kynurenine metabolism governing the response to inflammatory cytokines (Murakami et al, 2013; preprint: Su et al, 2020), and nicotinamide and purine metabolism (Renz et al, 2020). Finally, we represent the pyrimidine synthesis pathway, tightly linked to purine metabolism, affecting viral DNA and RNA syntheses (Hayek et al, 2020; Xiong et al, 2020). These mechanisms are illustrated in the diagrams of the "Innate Immune Response" section in Table EV1: "PAMP signalling", "Induction of interferons and the cytokine storm" and "Altered host metabolism".

Exploration of the networked knowledge

The diagrams of the C19DMap were curated in a distributed manner across various platforms and tools. In order to coordinate such an effort and get a systematic overview of the contents of the map, we programmatically analysed the content of the diagrams, benefiting from their standard encoding and annotation (see Materials and Methods). This allowed us to identify crosstalk and functional overlaps across pathways.
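A minimal sketch of the shared-element idea behind this crosstalk analysis (formalised in Materials and Methods, where crosstalk is computed over shared HGNC identifiers) is shown below; the gene sets are small illustrative subsets, not the full diagrams.

```python
# Sketch of the crosstalk computation described in Materials and Methods:
# two diagrams are linked by crosstalk when they contain the same HGNC
# symbols. The gene sets below are small illustrative subsets of diagrams.
from itertools import combinations

diagrams = {
    "Interferon-1 signalling": {"STAT1", "STAT2", "IRF9", "TBK1", "NFKB1"},
    "PAMP signalling":         {"TBK1", "IRF3", "MAVS", "TRAF3"},
    "UPS degradation":         {"NFKB1", "CUL2", "SQSTM1"},
}

for (name_a, genes_a), (name_b, genes_b) in combinations(diagrams.items(), 2):
    shared = genes_a & genes_b   # existing crosstalk = shared identifiers
    if shared:
        print(f"{name_a} <-> {name_b}: {sorted(shared)}")
```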
Then, we linked the diagrams to interaction and text mining databases to fill the gaps in our understanding of COVID-19 mechanisms and generate new testable hypotheses.

Viral RNA is detected by pattern recognition receptors that induce type I interferon (IFN-1) signalling (Mogensen, 2009). Downstream, IFN-1 activates the Tyk2 and Jak1 protein tyrosine kinases, causing STAT1:STAT2:IRF9 (ISGF3) complex formation to promote the transcription of IFN-stimulated genes (ISGs). Importantly, TBK1 also phosphorylates IKBA, an NF-kB inhibitor, for proteasomal degradation in crosstalk with the UPS pathway, allowing free NF-kB and IRF3 to co-activate ISGs (Fang et al, 2017). Another TBK1 activator, STING, links IFN signalling with pyrimidine metabolism. SARS-CoV-2 M protein affects these IFN responses by inhibiting the RIG-I:MAVS:TRAF3 complex and TBK1, preventing IRF3 phosphorylation, nuclear translocation and activation (Zheng et al, 2020).

(Figure 4 legend: the network structure of the diagrams and their interactions based on existing crosstalk (shared elements), candidate crosstalk and candidate regulators. Colour code: green, pathways or pathway groups; blue, proteins with one or two neighbours; yellow, proteins with three or four neighbours; red, proteins with five or more neighbours.)

This crosstalk, associated with impaired IFN-1 (Hadjadj et al, 2020), may be a host attempt to compensate for the lack of IFN-1 activation (Rubio et al, 2013), leading to NF-kB hyperactivation and release of pro-inflammatory cytokines. Also, SARS-CoV-1 viral papain-like proteases, contained within the nsp3 and nsp16 proteins, inhibit STING and its downstream IFN secretion (Chen et al, 2014). Perturbations in these pathways may impair the IFN response against SARS-CoV-2 and explain persistent blood viral load and an exacerbated inflammatory response in COVID-19 patients (Hadjadj et al, 2020).

New crosstalk from interaction and text mining datasets

New relationships emerging from associated interaction and text mining databases (see Section "Exploration of the networked knowledge" of the Results) suggested new pathway crosstalk (see Figs 4B and EV3). One of these was the interplay between ER stress and the immune pathways, as PPP1R15A regulates the expression of TNF and the translational inhibition of both IFN-1 and IL-6 (Smith, 2018). This finding coincided with the proposed interaction of pathways responsible for protein degradation and viral detection, as SQSTM1, an autophagy receptor and NFKB1 regulator, controls the activity of cGAS, a double-stranded DNA detector (Seo et al, 2018). Another association revealed by text mining data was the link between ADAM17 and TNF release from the immune cells in response to the ACE2-S protein interaction with SARS-CoV-1 (Haga et al, 2008), potentially increasing the risk of COVID-19 infection (Zipeto et al, 2020). This new interaction connected diagrams of the (i) "Viral replication cycle" via ACE2-S protein interactions, (ii) "Viral subversion of host defence mechanisms" via ER stress, (iii) "Host integrative stress response" via the renin-angiotensin system and (iv) "Host innate immune response" via pathways implicating TNF signalling.

Novel regulators of protein activity

Finally, we identified potential novel regulators of proteins in the C19DMap using interaction and text mining databases (see Fig 4C). These proteins are not part of the current version of the map but interact with molecules already represented in at least one of the diagrams. An example of such a novel regulator was NFE2L2, which controls the activity of HMOX1 in the context of viral infection (Kesic et al, 2011).
In turn, HMOX1 controls immunomodulatory haem metabolism (Zhang et al, 2019) and the mechanisms of viral replication, and is a target of the SARS-CoV-2 Orf3a protein. The suggested NFE2L2-HMOX1 interaction is supported by literature reports of the importance of NFE2L2 in COVID-19 cardiovascular complications due to crosstalk with the renin-angiotensin signalling pathway (Valencia et al, 2020) and potential interactions with viral entry mechanisms (Hassan et al, 2020). Interestingly, the modulation of the NFE2L2-HMOX1 axis was already proposed as a therapeutic measure for inflammatory diseases (Attucks et al, 2014), making it an appealing extension of the C19DMap.

Computational analysis and modelling for hypothesis generation

The standardised representation and programmatic access to the contents of the C19DMap support reproducible analytical and modelling workflows. Here, we discuss the range of possible analyses.

Data interpretation and network analysis

The projection of omics data onto the C19DMap broadens and deepens our understanding of disease-specific mechanisms, in contrast to classical pathway enrichment analyses, which often produce lists of generic biological mechanisms. Visualisation of omics datasets on the map diagrams creates overlays, allowing interpretation of specific conditions, such as disease severity or cell types. Datasets projected on the C19DMap can create signatures of molecular regulation determined by the expression levels of the corresponding molecules. Together, multiple omics readouts and multiple measurements can increase the robustness of such signatures. This interpretation can be extended using available SARS-CoV-2-related omics and interaction datasets to infer which transcription factors, their target genes and signalling pathways are affected upon infection (Dugourd & Saez-Rodriguez, 2019). Combining regulatory interactions of the C19DMap with such data collections extends the scope of the analysis and may suggest new mechanisms to include in the map. Besides the visual exploration of omics datasets, the network structure of the C19DMap allows extended network analysis of viral-human protein-protein interactions (PPIs) (Gordon et al, 2020). It can be expanded by merging virus-host with human PPIs and proteomics data to discover clusters of interactions indicating human biological processes affected by the virus (Messina et al, 2020). These clusters can be interpreted by visualising them on the C19DMap diagrams to reveal additional pathways or interactions to add to the map.

Mechanistic and dynamic computational modelling

Diagrams from the C19DMap can be coupled with omics datasets to estimate their functional profiles and predict the effect of interventions, e.g. effects of drugs on their targets (Salavert et al, 2016). However, such an approach has a substantial computational complexity, limiting the size of the input diagrams. Large-scale mechanistic pathway modelling can address this challenge but requires transformation of diagrams into causal networks which, combined with transcriptomics, (phospho-)proteomics or metabolomics data (Dugourd et al, 2021), contextualise the networks and support hypotheses about intervention outcomes. Both approaches provide a set of coherent causal links connecting upstream drivers such as stimulations or pathogenic mutations to downstream changes in diagram endpoints or transcription factor activities.
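As a hedged illustration of such causal links, the sketch below propagates the sign of an upstream perturbation along signed causal edges to a downstream endpoint; the toy edge list is loosely inspired by interactions mentioned in this article, not extracted from the map.

```python
# Sketch of sign propagation over a signed causal network: an upstream driver
# (here a viral protein) is connected to a downstream readout through signed
# edges (+1 activation, -1 inhibition). The edge list is illustrative only.
edges = {
    ("Nsp1", "MAPK14"): +1,       # viral proteins upstream of MAPK members
    ("MAPK14", "NFKB1"): +1,
    ("NFKB1", "IL6"): +1,
    ("M_protein", "TBK1"): -1,    # M protein inhibits TBK1 (as in the text)
    ("TBK1", "IRF3"): +1,
    ("IRF3", "IFNB1"): +1,
}

def propagate(source, target, sign=+1, visited=()):
    """Depth-first propagation of the cumulative sign from source to target."""
    if source == target:
        return [sign]
    results = []
    for (a, b), s in edges.items():
        if a == source and b not in visited:
            results += propagate(b, target, sign * s, visited + (b,))
    return results

print("M_protein -> IFNB1 effects:", propagate("M_protein", "IFNB1"))  # [-1]: inhibition
print("Nsp1 -> IL6 effects:      ", propagate("Nsp1", "IL6"))          # [+1]: activation
```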
Dynamic modelling allows analysis of changes of molecular networks in time to understand their complexity under disease-related perturbations (Naldi et al, 2018b). C19DMap diagrams, translated to SBML qual using CaSQ (see Materials and Methods), can be used in discrete modelling, using modelling software that supports SBML qual file import. Notably, multiscale processes involved in viral infection, from molecular interactions to multicellular behaviour, can be simulated using a dedicated computational architecture. In such a multiscale setup, single-cell models run in parallel to capture the behaviour of heterogeneous cell populations and their intercellular communications at different time scales, e.g. diffusion, cell mechanics, cell cycle or signal transduction (Osborne et al, 2017; preprint: Wang et al, 2020). Implementing detailed COVID-19 signalling models in the PhysiBoSS framework (Letort et al, 2019) may help better understand the complex dynamics of interactions between immune system components and the host cell.

Case study: analysis of cell-specific mechanisms using single-cell expression data

To investigate cell-specific mechanisms of COVID-19, we projected single-cell expression data onto the C19DMap. To this end, we calculated differentially expressed genes (DEGs) for two datasets relevant to the disease. The first dataset describes non-infected bronchial secretory cells; the second, SARS-CoV-2-infected intestinal organoids (Data ref: Triana et al, 2021). Visual exploration of the differential expression profiles in the C19DMap revealed that transient secretory cells specifically express molecules associated with the virus replication cycle (TMPRSS2). This suggests that these cells are more susceptible to viral entry than the other types of bronchial secretory cells. Also, the interferon 1 signalling pathway was up-regulated in both secretory1 and transient secretory cells. However, transient secretory cells showed up-regulation of elements up- and downstream of the pathway (IFNAR1-JAK1, and ISG15 or OAS1); in secretory1 cells, the up-regulated proteins were downstream (transcription factor AP-1). In the intestinal organoid dataset, the comparison of infected and bystander immature enterocytes confirmed the down-regulation of the ACE2 receptor reported by the original article (Data ref: Triana et al, 2021), as visualised in the virus replication cycle diagram. In addition, exploration of other affected pathways may suggest the context of this observation; for instance, the C19DMap demonstrated the differential activity of the pyrimidine deprivation pathway, which could suggest a reduction of transcriptional activity as a host response to the viral infection. Enrichment analysis of diagrams indicated that mitochondrial dysfunction, apoptosis and inflammasome activation were dysregulated in infected enterocytes. The enrichment analysis of the cell-type-specific overlays was obtained by the GSEA plugin of the C19DMap. These results can be replicated and examined directly by the users via the visual interface of the C19DMap (see https://covid.pages.uni.lu/minerva-guide/ and Materials and Methods).

Case study: RNA-Seq-based analysis of transcription factor activity

As discussed above, the diagrams of the C19DMap can be coupled with omics datasets. Here, we highlight how the map systematically reveals the transcription factors (TFs) related to SARS-CoV-2 infection. To do so, we conducted differential expression analysis between a SARS-CoV-2-infected Calu-3 human lung adenocarcinoma cell line and controls.
Results were used to estimate TF activity deregulation upon viral infection. We mapped the outcomes of the TF activities to pathway diagrams of the C19DMap (see Materials and Methods). The results for the interferon type I signalling diagram are shown in Fig 5. (Figure 5 legend: a zoom was applied to the area containing the most active TFs (red nodes) after infection. Node shapes: host genes, rectangles; host molecular complexes, octagons; viral proteins, V shapes; drugs, diamonds; phenotypes, triangles.) This pathway included some of the most active TFs after SARS-CoV-2 infection, such as STAT1, STAT2, IRF9 and NFKB1. These are well-known components of cytokine signalling and antiviral responses (Cheon et al, 2013; Fink & Grandvaux, 2013). Interestingly, these TFs were located downstream of various viral proteins (E, S, Nsp1, Orf7a and Orf3a) and members of the MAPK pathway (MAPK8, MAPK14 and MAP3K7). SARS-CoV-2 infection is known to promote MAPK activation, which mediates the cellular response to pathogenic infection and promotes the production of pro-inflammatory cytokines. Overall, these results highlighted that the molecular mechanisms of the response of human cells to SARS-CoV-2 infection can be investigated by combining omics datasets with the diagrams of the C19DMap.

Case study: RNA-Seq-based analysis of pathway signalling

The diagrams of the C19DMap allow for a complex analysis of how the infection may affect signalling sequences in encoded pathways based on available omics data. To demonstrate this approach, we applied a mechanistic modelling algorithm that estimates the functional profiles of signalling circuits in the context of omics datasets. We used expression profiles from nasopharyngeal swabs of COVID-19 patients and controls (Data ref: Lieberman et al, 2020) to calculate the differential expression profiles and derive the pathway signalling activities (see Materials and Methods). To illustrate this approach, we focused on the results of the analysis of the apoptosis pathway, also shown in Fig 6 and Table EV2. We observed an overall down-regulation of both the CASP3 and CASP7 subpathways and an inhibition of the circuit ending in the effector protein CASP3, possibly due to the down-regulation of AKT1 and BAD and the downstream inhibition of BAX. Although the BAX downstream genes were up-regulated, the signal arriving at them was diminished by the effect of the previous nodes. Although CASP8 was up-regulated, the cumulative effect of the individual node activities resulted in the inhibition of CASP7. Indeed, an inflammatory response via CASP8 has been described as a result of SARS-CoV-2 infection, and the role of caspase-induced apoptosis has been established, together with the ripoptosome/caspase-8 complex, as a pro-inflammatory checkpoint (Chauhan et al, 2018), which may be triggering up-regulation of such processes in other pathways. Overall, our findings recapitulate reported outcomes and provide explanations of the effects of interactions on pathway elements.

Discussion

Our knowledge of COVID-19 molecular mechanisms is growing at great speed, fuelled by global research efforts to investigate the pathophysiology of SARS-CoV-2 infection. Keeping an overview of all the findings, many of which focus on individual molecules, is a great challenge just one year after the start of the pandemic. The C19DMap aggregates this knowledge into molecular interaction diagrams, making it available for visual exploration by life science and clinical researchers and analysis by computational biologists.
The map complements and interfaces with other COVID-19 resources such as interaction databases (Perfetto et al, 2020), protein-centric resources (preprint: Lubin et al, 2020) and relevant omics data repositories (Delorey et al, 2021) by providing a context to particular pieces of information and helping with data interpretation. The diagrams of the C19DMap describe molecular mechanisms of COVID-19, grounded in the relevant published SARS-CoV-2 research, completed where necessary by mechanisms discovered in related beta-coronaviruses. We developed the contents of the C19DMap de novo in an unprecedented, community-driven effort involving independent biocurators, as well as WikiPathways and Reactome biocurators. Over forty diagrams with molecular resolution have been constructed since March 2020, shared across three platforms. In this work, we combined and harmonised expertise in biocuration across multiple teams, formulated clear guidelines and cross-reviewed the outcomes of our work with domain experts. Although the approach of community curation was applied in the past (Slayden et al, 2013; Naithani et al, 2019), we are not aware of any curation effort on a similar scale for a single human disease to date. In this work, we established a computational framework accompanying the biocuration process, integrating interaction databases and text mining solutions to accelerate diagram building. This allowed us not only to enrich particular diagrams but also to explore crosstalk between them and prioritise key novel regulators of the encoded pathways. Thanks to the interoperability of different systems biology formats, we performed this analysis for diagrams constructed in different biocuration environments, extending current advances in pathway interoperability (Bohler et al, 2016).

(Figure 6 legend: activation levels were calculated using transcriptional data from GSE152075 and the Hipathia mechanistic pathway analysis algorithm. Each node represents a gene (ellipse), a human metabolite/viral protein (circle) or a function (rectangle). The pathway is composed of circuits from a receptor to an effector. Significant differential regulation of circuits in infected cells is highlighted by coloured arrows (blue: inactive in infected cells). The colour of elements corresponds to the level of differential expression in SARS-CoV-2-infected human nasopharyngeal swabs versus non-infected human nasopharyngeal swabs. Blue: down-regulated, red: up-regulated and white: no statistically significant differential expression.)

Moreover, by developing reproducible analysis pipelines for the contents of the C19DMap, we promoted early harmonisation of formats, support of standards and transparency in all steps. Preliminary results of such efforts are illustrated in the case studies above. Notably, the biocurators and domain experts who participated in the analysis helped to evaluate the outcomes and correct the curated content where necessary. This way, we improve the quality of the analysis and increase the reliability of the models used to generate testable predictions. The C19DMap is an open access repository of diagrams and reproducible workflows for content conversion and analysis. We followed FAIR principles in making our content and code available to the entire research community (Wilkinson et al, 2016). Importantly, FAIRDOMHub is an essential platform for disseminating all information about the project and linking contributors to their contributions.
The C19DMap Community is open and expanding as more people with complementary expertise join forces. Using the FAIR approach for sharing the results of our work makes this effort more scalable. Recognising individual contributions and an open access policy promote distributed knowledge building and the generation of research data. The project aims to provide the tools to deepen our understanding of the mechanisms driving the infection and help boost drug development supported by testable suggestions. It offers insights into the dynamic nature of the disease at the molecular level and its propagation at the systemic level. Thus, it provides a platform for a precise formulation of models, accurate data interpretation, the potential for disease mitigation and drug repurposing. In the longer run, the constantly growing C19DMap content will be used to facilitate the finding of robust signatures related to SARS-CoV-2 infection predisposition, disease evolution or response to various treatments, along with the prioritisation of new potential drug targets or drug candidates. This approach to an emerging worldwide pandemic leveraged the capacity and expertise of an entire swath of the bioinformatics community, bringing them together to improve the way we build and share knowledge. By aligning our efforts, we strive to provide COVID-19-specific pathway models, synchronise content with similar resources and encourage discussion and feedback at every stage of the curation process. Such an approach may help to deal with new waves of COVID-19 or similar pandemics in the long-term perspective.

Methods and Protocols

Biocuration platforms

Individual diagrams were encoded in systems biology layout-aware formats (see below) by biocurators using CellDesigner (Matsuoka et al, 2014), Newt (https://newteditor.org), SBGN-ED (Czauderna et al, 2010) and ySBGN (https://github.com/sbgn/ySBGN). This community-based curation was coordinated by sharing curation topics, e.g. relevant pathways or particular SARS-CoV-2 proteins, across the community to cover the available literature and identify synergies. Curation guidelines (https://fairdomhub.org/documents/661) were established to ensure proper representation and annotation of the key features of the diagrams. Curation guidelines for logical models (Niarakis et al, 2020) were followed. Regular technical reviews of the diagrams were performed following a previously established checklist to harmonise their content. The diagrams are stored and versioned in a GitLab repository (https://gitlab.lcsb.uni.lu/covid/models). Individual diagrams are visualised in the MINERVA Platform. The entry-level view is based on Fig 2. Reactome (Jassal et al, 2020) biocuration efforts initially focused on SARS-CoV-1 and its proteins, whose functions are extensively documented in the experimental literature. Reactome curators were assigned a subpathway from the viral life cycle, a host pathway or potential therapeutics. Curators were supported by an editorial manager and a dedicated SARS literature triage process. The resulting set of pathways for SARS-CoV-1 provided the basis for computational inference of the corresponding SARS-CoV-2 pathways based on structural and functional homologies between the two viruses. The computationally inferred SARS-CoV-2 infection pathway events and entities were then reviewed and manually curated using published SARS-CoV-2 experimental data.
Reactome diagrams are available via a dedicated pathway collection (https://reactome.org/PathwayBrowser/#/R-HSA-9679506). The WikiPathways (Slenter et al, 2018) diagrams were constructed using PathVisio (Kutmon et al, 2015), with annotation of pathway elements from the integrated BridgeDb identifier mapping framework (van Iersel et al, 2010). All pathways are stored in GPML format (Kutmon et al, 2015). The WikiPathways diagrams are available via a dedicated pathway portal, grouping pathway models specific to SARS-CoV-2, other coronaviruses and general cellular processes relevant to the virus-host interactions (https://www.wikipathways.org/index.php/Portal:COVID-19).

Layout-aware systems biology formats

The diagrams are available in SBML format (Keating et al, 2020), allowing computational modelling of biological processes. SBML stores visual information about encoded elements and reactions using the render (Bergmann et al, 2018) and layout (Gauges et al, 2015) packages. An early version of SBML adapted by CellDesigner allows storing layout and rendering information. The Systems Biology Graphical Notation (SBGN) format is a graphical standard for visual encodings of molecular entities and their interactions, implemented using SBGNML for encoding the layout of SBGN maps and their annotations. Finally, GPML (Kutmon et al, 2015) is a structured XML format for computable representation of biological knowledge used by the WikiPathways platform. Interactions and interacting entities are annotated following a uniform, persistent identification scheme, using either the MIRIAM Registry or Identifiers.org (Juty et al, 2012) and the guidelines for annotations of computational models. Viral protein interactions are explicitly annotated with their taxonomy identifiers to highlight findings from strains other than SARS-CoV-2. Stable protein complexes from SARS-CoV-2 and SARS are annotated using the Complex Portal.

Interaction databases

The biocuration process was supported by interaction and pathway databases storing structured, annotated and curated information about COVID-19 virus-host interactions. The IMEx Consortium (Meldal et al, 2019) dataset contains curated Coronaviridae-related interaction data from reviewed manuscripts and preprints, resulting in a dataset of roughly 7,300 interactions extracted from over 250 publications, including data from SARS-CoV-2, SARS-CoV and other strains of Coronaviridae. The dataset is updated with every release of IMEx data and is open access (https://www.ebi.ac.uk/intact/resources/datasets#coronavirus). The SIGNOR 2.0 dataset contains manually annotated and validated signalling interactions related to the host-virus interaction, including cellular pathways modulated during SARS-CoV-2 infection. The dataset was constructed from the literature on causal interactions between SARS-CoV-2, SARS-CoV-1 and MERS proteins and the human host and is openly available (https://signor.uniroma2.it/covid/). The Elsevier Pathway Collection (Daraselia et al, 2004; Nesterova et al, 2020) COVID-19 dataset comprises manually reconstructed and annotated pathway diagrams. Statements about molecular interactions are extracted into a knowledge graph by a dedicated text mining technology adapted for extracting facts about viral proteins and viruses from the literature. These interactions were filtered for experimental evidence, used for pathway reconstruction and made openly available (http://dx.doi.org/10.17632/d55xn2c8mw.1).
Information from OmniPath (Türei et al, 2021) on existing interactions gathered from pathway and interaction databases was used in a programmatic way to suggest cell-specific interactions and cell-cell interactions specific to immune reactions.

Text and figure mining

Text mining was performed on the CORD-19: COVID-19 Open Research Dataset (preprint: Lu Wang et al, 2020). INDRA (Gyori et al, 2017), AILANI COVID-19 (https://ailani.ai) and BioKB processed the CORD-19 dataset (https://biokb.lcsb.uni.lu/topic/DOID:0080599), with their results available programmatically via REST API and SPARQL interfaces. An OpenNLP-based (https://opennlp.apache.org/) text mining workflow using GNormPlus (Wei et al, 2015) was applied to the CORD-19 dataset and the collection of MEDLINE abstracts associated with the genes in the SARS-CoV-2 PPI network (Gordon et al, 2020), using the Entrez GeneRIFs (https://www.ncbi.nlm.nih.gov/gene/about-generif); see https://gitlab.lcsb.uni.lu/covid/models/-/tree/master/Resources/Text%20mining. Also, we used data from 221 CORD-19 dataset figures using a dedicated Figure Mining Workflow (Hanspers et al, 2020), with results available at https://gladstone-bioinformatics.shinyapps.io/shiny-covidpathways. Results of text mining were accessed by the curators in the form of molecular interactions with references to the articles and to the sentences from which these interactions were derived. We systematically aligned the C19DMap with assembled INDRA Statements, both to enrich and to extend the map (see "Crosstalk analysis" below). The content of INDRA and AILANI COVID-19 was accessible via interfaces that allow users to provide natural language queries, such as "What are COVID-19 risk factors?" or "What are the interactors of ACE2?", facilitating extracting knowledge from the results of text mining workflows. The results of the INDRA workflow were visualised using the COVIDminer project (https://rupertoverall.net/covidminer). Each extracted statement describes a directed interaction between two gene products, small molecules or biological processes. The causal network representing the COVIDminer database is browsable through a web interface. The results of the OpenNLP-based text mining workflow were imported into the BioKC biocuration platform for structured processing and SBML export.

Crosstalk analysis

Crosstalk analysis was performed for the list of C19DMap diagrams (Table EV1). The code is available at https://gitlab.lcsb.uni.lu/covid/models/-/tree/master/Resources/Crosstalks. Individual diagrams were accessed via the API of the MINERVA Platform, WikiPathways diagrams via the rWikiPathways package (https://github.com/wikipathways/rWikiPathways) and Reactome diagrams via the Reactome API. Text mining interactions are from the INDRA EMMAA Collection (https://emmaa.indra.bio/dashboard/covid19), dataset timestamp: 2020-12-01-21-05-54. Verified molecular interactions for quality control of the text mining data were obtained from OmniPath using the OmnipathR package (https://github.com/saezlab/OmnipathR). We filtered text mining interactions of the EMMAA dataset for a "belief" of 0.8 or higher and retained those matching the direction and interacting molecules of the OmniPath dataset. We call this filtered group of interactions "EMMAA-OP interactions". Crosstalk between C19DMap diagrams was calculated based on the HGNC identifiers of their elements. For simplification, all elements of the same diagram were considered to be interacting with each other.
Three types of networks were constructed: existing crosstalk, new crosstalk and new regulators. Diagram groups followed the scheme in the list of C19DMap diagrams (Table EV1). The networks were visualised using Cytoscape (Shannon et al, 2003). The colour code is common for the networks: light green for nodes representing a diagram or a diagram group, light blue for nodes having one or two neighbours, yellow for nodes having three or four neighbours and red for nodes with five or more neighbours. Diagram nodes have prefixes indicating their provenance. Diagram groups have no prefixes, as they combine diagrams across platforms. Existing crosstalk between diagrams, or groups of diagrams, was calculated by identifying shared HGNC identifiers linking diagrams or groups of diagrams. To calculate new crosstalk between diagrams, we merged the EMMAA-OP interactions with the network of existing crosstalk and kept only those new interactions that link at least two upstream and two downstream diagrams or diagram groups. To calculate new upstream regulators of existing diagrams, we merged the EMMAA-OP interactions with the network of existing crosstalk. We kept interactions with source elements not within existing diagrams and target elements in at least one existing diagram or diagram group.

The C19DMap diagrams (Table EV1) in CellDesigner format were translated using CaSQ (Aghamiri et al, 2020) into executable Boolean networks. Conversion rules and logical formulae were inferred according to the topology and the annotations of the diagrams. SBML-qual files (Chaouiya et al, 2013) generated with CaSQ (Aghamiri et al, 2020) retained the references, annotations and layout of the original CellDesigner file. They can be used for in silico simulations and analysis with CellCollective (Helikar et al, 2012), GINsim (Naldi et al, 2018a) or MaBoSS (Stoll et al, 2017). CaSQ was adapted to produce the SIF files necessary for HiPATHIA (Hidalgo et al, 2017) and CARNIVAL.

Differential expression analysis of the transcript abundances between conditions was performed with DESeq2 (Love et al, 2014). The resulting t-values from the differential expression analysis were used to estimate the effect of SARS-CoV-2 at the transcription factor (TF) activity level. This analysis was performed using the Viper algorithm (Alvarez et al, 2016) coupled with TF-target interactions from DoRothEA (Garcia-Alonso et al, 2019). DoRothEA TF-target interactions have a confidence level based on the reliability of their source, which ranges from A (most reliable) to E (least reliable). Here, interactions with confidence levels A, B and C were selected. Activities of TFs having at least five different targets were computed. The TF normalised enrichment scores from the Viper output were mapped on the "Interferon type I signalling pathway" diagram (https://fairdomhub.org/models/713) of the C19DMap using the SIF files generated by CaSQ. The resulting network was visualised using Cytoscape (Shannon et al, 2003). Notebooks to reproduce the results of this case study are available at https://github.com/saezlab/Covid19.

RNA-Seq-based analysis of pathway signalling

The CoV-HiPathia (Rian et al, 2021) web tool was used to calculate the level of activity of the subpathways of the apoptosis diagram (https://fairdomhub.org/models/712) from the C19DMap.
RNA-Seq transcriptomic profiles come from a public dataset of nasopharyngeal swabs from 430 individuals with SARS-CoV-2 and 54 negative controls (Gene Expression Omnibus reference GSE152075; Data ref: Lieberman et al, 2020). RNA-Seq gene expression data with trimmed mean of M-values (TMM) normalisation (Robinson et al, 2010) were rescaled to the range [0;1] for the calculation of the signal and normalised using quantile normalisation (Bolstad et al, 2003). Normalised gene expression values and the experimental design (case/control sample name files) were uploaded to CoV-HiPathia to calculate the level of activation of the signalling in the selected diagram. A case/control contrast with a Wilcoxon test was used to assess differences in signalling activity between the two conditions. To reproduce the results, files with normalised gene expression data and the experimental design can be generated using the code at https://gitlab.lcsb.uni.lu/covid/models/-/tree/master/Resources/Hipathia/data_preprocessing. These files can then be used in CoV-HiPathia at http://hipathia.babelomics.org/covid19/ under the "Differential signalling" tab. Diagrams from the C19DMap can be selected in the "Pathway source" section, under "Disease Maps Community curated pathways". Expanded View for this article is available online.
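A minimal sketch of the preprocessing just described, assuming a genes x samples matrix of TMM-normalised values (TMM itself is typically computed in edgeR): rescale to [0;1], then quantile-normalise across samples. Whether the rescaling is global or per gene is not specified in the text, so a global rescale is assumed.

```python
# Sketch of the described preprocessing: TMM-normalised expression rescaled
# to [0, 1], then quantile-normalised so every sample (column) shares the
# same distribution. `expr` is a placeholder genes x samples matrix.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(100, 6))  # placeholder matrix

# Global min-max rescale to [0, 1] (assumption: global, not per gene).
scaled = (expr - expr.min()) / (expr.max() - expr.min())

# Quantile normalisation: replace each value by the mean of the values of
# the same rank across all samples (Bolstad et al, 2003).
order = np.argsort(scaled, axis=0)
ranks = np.argsort(order, axis=0)
mean_quantiles = np.sort(scaled, axis=0).mean(axis=1)
normalised = mean_quantiles[ranks]
print(normalised.shape, float(normalised.min()), float(normalised.max()))
```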
Occupation time of Lévy processes with jumps rational Laplace transforms We are interested in occupation times of Lévy processes with jumps rational Laplace transforms. The corresponding boundary value problems via the Feynman-Kac representation are solved to obtain an explicit formula for the joint distribution of the occupation time and the terminal value of the Lévy processes with jumps rational Laplace transforms. Introduction The occupation time is the amount of time a stochastic process stays with in a certain range. It is an interesting topic for stochastic processes. Many explicit results on Laplace transforms for occupation times have been obtained for some well known examples of Lévy process. For a standard Brownian motion W = {W t : t ≥ 0}, P. Lévy's arcsine law is a well known result. It states the following, let Γ + (t) be the time W spends above 0 up to time t: Lévy [10] (for more details see Chapter IV of [16]) showed that for each t > 0 the variable Γ + (t)/t follows the arcsine law: This result was then extended to a Brownian motion with drift by Akahori [2] and Takács [14]. After that, the investigation on occupation times of Lévy processes has made much great progress. For recent works in this topic, see [1], [3], [12], [9], [15] and the references therein for more details. In this paper, we are interested in the joint Laplace transforms of X = (X t ) t≥0 and its occupation times, i.e, where α > 0, β > 0, γ is some suitable constant and e α is an independent (of X) exponential random variable with rate α > 0 and X = (X t ) t≥0 is a Lévy process with jumps * HEC Montréal, CANADA. E-mail: djilali.ait-aoudia@hec.ca rational Laplace transforms proposed by Lewis and Mordecki [11], see also Kuznetsov [8]. And the purpose is deriving formulas for This extends recent results obtained in Ait-Aoudia and Renaud [1], (Theorem 2) on the processes with hyper-exponential jumps. More precisely, to find an explicit formula for the function ψ(x) in Equation (1.2), the corresponding boundary value problem via the Feynman-Kac representation is considered. By direct calculation, the associated ordinary integro-differential equation (OIDE) is transformed into a homogeneous ordinary differential equation (ODE) of higher order, which is then solved in closed form and its solution equals to ψ(x). Results obtained here can be applied to price occupation time derivatives as in Cai et al. [3], in which the authors have noted that there are several products in the real market with payoffs depending on the occupation times of an interest rate or a spread of swap rates. For other investigations, see, e.g., [15], [17] and [18]. The rest of the paper is organized as follows. In section 2, we introduce the jumpdiffusion process having jumps with rational Laplace transform. Section 3 contains our main results. The model where µ ∈ R and σ > 0 represent the drift and volatility of the diffusion part respectively, W = {W t , t ≥ 0} is a (standard) Brownian motion, N = {N t , t ≥ 0} is a homogeneous Poisson process with rate λ and {Y i , i = 1, 2, . . . } are independent and identically distributed random variables supported in R \ {0}; moreover, {W t , t ≥ 0}, {N t , t ≥ 0} and {Y i , i = 1, 2, . . . 
The model Consider the jump-diffusion process X = {X_t, t ≥ 0} given by X_t = X_0 + μt + σW_t + Σ_{i=1}^{N_t} Y_i, where μ ∈ R and σ > 0 represent the drift and volatility of the diffusion part respectively, W = {W_t, t ≥ 0} is a (standard) Brownian motion, N = {N_t, t ≥ 0} is a homogeneous Poisson process with rate λ and {Y_i, i = 1, 2, ...} are independent and identically distributed random variables supported in R \ {0}; moreover, {W_t, t ≥ 0}, {N_t, t ≥ 0} and {Y_i, i = 1, 2, ...} are mutually independent; finally, the probability density function (pdf) of Y_1 is given by f(y) = 1_{y>0} Σ_{i=1}^{m} Σ_{j=1}^{m_i} p_{ij} η_i^j y^{j−1} e^{−η_i y}/(j−1)! + 1_{y<0} Σ_{i=1}^{n} Σ_{j=1}^{n_i} q_{ij} θ_i^j |y|^{j−1} e^{θ_i y}/(j−1)!, where p_{ij}, q_{ij} ≥ 0 and they are such that f is a genuine probability density. The parameters η_j and θ_j can in principle take complex values (see [11]), with 0 < η_1 < Re(η_2) < · · · < Re(η_m) and 0 < θ_1 < Re(θ_2) < · · · < Re(θ_n). Another important tool to establish the key result of the article is the infinitesimal generator of X. Note that X is a Markovian process and its infinitesimal generator is given by Lh(x) = μh′(x) + (σ²/2)h″(x) + λ ∫_R [h(x + y) − h(x)] f(y) dy for any bounded and twice continuously differentiable function h. Throughout the rest of the paper, the law of X such that X_0 = x is denoted by P_x and the corresponding expectation by E_x; we write P and E when x = 0. The Lévy exponent of X is given by G(ζ) = μζ + (σ²/2)ζ² + λ(E[e^{ζY_1}] − 1). Accordingly, G is a rational function on C. The equation G(ζ) − α = 0 with α > 0, σ > 0 and μ ∈ R yields S = M + N + 2 zeros, with M = Σ_{i=1}^{m} m_i and N = Σ_{j=1}^{n} n_j (see [8] for details). Let us denote the zeros of G(ζ) − α in the half-plane Re(ζ) > 0 {Re(ζ) < 0} as ρ_{1,α}, ρ_{2,α}, ..., ρ_{M+1,α} {ρ̂_{1,α}, ρ̂_{2,α}, ..., ρ̂_{N+1,α}}. Main results Throughout this paper X = {X_t, t ≥ 0} will be a Lévy process of the type described before, that is, with jumps having rational Laplace transforms. The time spent by X between the lower barrier h and the upper barrier H, from time 0 to time T, is given by ∫_0^T 1_{h<X_t<H} dt. Our main objective is to obtain the joint distribution of ∫_0^{e_α} 1_{h<X_t<H} dt and X_{e_α}, where e_α is an independent (of X) exponential random variable with rate α > 0. In order to do so, we will compute the following joint Laplace-Carson transform with respect to T: ψ(x) = E_x[exp(−β ∫_0^{e_α} 1_{h<X_t<H} dt + γ X_{e_α})], where β ≥ 0, α > 0 and we assume that 0 < γ < min(η_1, θ_1) and G(γ) < α. Our goal now is to solve the boundary problem (3.3) and find explicit formulae for ψ(x). We first show that ψ satisfies an integro-differential equation and then derive an ordinary differential equation for ψ. Based on the ODE, we show ψ can be written as a linear combination of known exponential functions. Let P_α(ζ) be a polynomial whose zeros coincide with those of G(ζ) − α. Also, denote by D_α the differential operator whose characteristic polynomial is P_α(ζ). The following lemma will be needed for our proof of Proposition 3.2. Lemma 3.1. Let d^{(k)} indicate the k-th derivative with respect to x of any differentiable function. Let φ be a bounded and continuous function on R and, for δ > 0, define two functions F_+ and F_− accordingly; then the derivative identities of the lemma hold for all i ≥ 1. Proof. We need only to prove the first part of the lemma; the proof of the second part is similar. We proceed by induction on i: the case i = 1 follows by direct differentiation, and for all i ≥ 2 the inductive step gives the desired result. We may now state Proposition 3.2: the function ψ is infinitely differentiable and satisfies the ODE (3.8) on each of the regions x ≤ h, h < x < H and x ≥ H, for some constants Q^L_k, Q^0_k, Q^1_k and Q^U_k. To complete the proof, D̂ must be shown to coincide with D_α; for this, comparing the characteristic polynomials of D_α and D̂ will suffice. Write P̂(ζ) for the characteristic polynomial of D̂. Then, by (3.11), the characteristic polynomial P_α(ζ) of D_α equals P̂(ζ), that of D̂. Therefore, any solution to (3.3) can be expressed as a linear combination of the exponentials e^{ρ_{k,α} x}. Furthermore, we can argue that the coefficients Q^L_{0,k} and Q^U_{0,k} should be zero. In fact, we know that lim_{x→±∞} ψ(x)/e^{γx} < +∞, which implies Q^L_{0,k} and Q^U_{0,k} must be zero, and the proof is complete. Here V is a 2S = 2(M + N + 2)-dimensional vector.
Proof. We suppose that ψ is a bounded solution to the boundary value problem (3.3), with the representations (3.19) for x < h and (3.20) for x > H. Now, observe that G(ρ_{k,α}) − α = 0 for all k, and that the functions w_1, w_2 and w_3 can be expressed in terms of Γ(i, u), the incomplete gamma function (see [7], p. 342). Consequently, substituting w_1(x), w_2(x) and w_3(x) into (3.19) and (3.20) yields, for any x < h and for j = 1, ..., n, i = 1, ..., n_j, a family of linear equations in the undetermined coefficients. In addition, we can also obtain another four equations from the fact that ψ(x) is continuously differentiable at x = h and x = H. Consequently, since all of these equations are linear with respect to the undetermined parameters, it follows that the constant vector Q = (Q^L_i, Q^0_i, Q^1_j, Q^U_j), i = 1, ..., M + 1, j = 1, ..., N + 1, satisfies the linear system (3.14), which completes the proof. Proof. Using the same idea as in Cai and Kou [4] (Theorem 4.1), applying Itô's formula to the process {ψ(X_t) e^{−αt − β ∫_0^t 1_{h<X_s<H} ds}, t ≥ 0}, we obtain that this process, M = {M_t, t ≥ 0}, is a local martingale. Since G(γ) < α, it follows from Fubini's theorem that the relevant expectations are finite. So, using Lebesgue's dominated convergence theorem, we have that {M_t, t ≥ 0} is actually a positive martingale. In particular, the desired identity follows, which ends the proof. Because the e^{ρ_k} are distinct, the Vandermonde matrix in equation (3.24) is invertible. Consequently, C = 0 and A is invertible.
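Everything in the explicit solution above hinges on locating the S = M + N + 2 roots of G(ζ) − α = 0. As a hedged illustration (not code from the paper), the snippet below does this numerically for the simplest member of the class, a double-exponential (Kou-type) jump-diffusion with m = n = 1, so S = 4; all parameter values are made up for the example. Clearing the denominators (η − ζ)(θ + ζ) turns G(ζ) − α = 0 into a degree-4 polynomial equation:

import numpy as np
from numpy.polynomial import Polynomial as P

mu, sigma, lam = 0.05, 0.2, 1.0          # drift, volatility, jump intensity
p, q, eta, theta = 0.4, 0.6, 8.0, 6.0    # up/down jump weights and rates (p + q = 1)
alpha = 0.1

# For double-exponential jumps, G(zeta) = mu*zeta + sigma^2*zeta^2/2
#   + lam*(p*eta/(eta - zeta) + q*theta/(theta + zeta) - 1).
# Multiplying G(zeta) - alpha = 0 by (eta - zeta)*(theta + zeta) gives a
# degree-4 polynomial whose roots are the zeros we need.
zeta = P([0.0, 1.0])
poly = ((mu * zeta + 0.5 * sigma**2 * zeta**2 - lam - alpha)
        * (eta - zeta) * (theta + zeta)
        + lam * (p * eta * (theta + zeta) + q * theta * (eta - zeta)))
roots = poly.roots()
print("roots with Re > 0:", roots[roots.real > 0])   # the rho_{k,alpha}
print("roots with Re < 0:", roots[roots.real < 0])   # the rho-hat_{k,alpha}

For these parameters the four roots split two and two between the half-planes, matching the M + 1 positive and N + 1 negative roots used in the linear system (3.14).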
2019-04-22T13:12:43.796Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "f34f6be1cde4c5bc864ebbfa9a404c9826aaca07", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1214/18-ecp169", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "76e48569447b5caf33fe49558c2cfe29a54e2ae3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
266794912
pes2o/s2orc
v3-fos-license
Does Childhood Trauma Predict Impulsive Spending in Later Life? An Analysis of the Mediating Roles of Impulsivity and Emotion Regulation We sought to investigate whether adverse childhood experiences increase impulsive spending in later life, and whether emotion dysregulation and impulsivity mediate this association. Limited research has examined associations between these factors, and examining the mechanisms involved may inform interventions for impulsive spending. This study used a cross-sectional, correlational design including 189 adult participants who completed an online survey assessing childhood trauma, adverse childhood experiences, impulsive spending, impulsivity, and emotion dysregulation. Greater adverse childhood experiences and childhood trauma were positively correlated with impulsive spending, as well as general impulsivity and emotion dysregulation. Mediation analyses indicated that emotion dysregulation and greater impulsivity accounted for the positive relationship between childhood trauma and impulse spending. Adverse childhood experiences and childhood trauma are associated with increased risk of impulse spending in adulthood via elevated general impulsivity and emotion dysregulation. Introduction Adverse childhood experiences (ACEs) are potentially traumatic events that could impact a child's health and development (Boullier & Blair, 2018). Between 2020 and 2021, there were over 24,800 child abuse offences reported in England and Wales, an 11.6% increase compared to 2019 and 2020 (Statista Research Department, 2021). There are five main types of ACEs, including physical abuse, emotional abuse, sexual abuse, physical neglect, and emotional neglect (Gilbert et al., 2009). ACEs also refer to household dysfunction, such as parental separation, mental illness in the family, domestic violence, and an incarcerated family member (Boullier & Blair, 2018). It should be acknowledged that adversity may come in different forms and durations and, therefore, ACEs alone may not provide a comprehensive list of childhood trauma (Boullier & Blair, 2018). Research examining the impact of ACEs has found associations with poorer health outcomes in later life, these being both physical and mental (Boullier & Blair, 2018). Further to this, ACEs are believed to impact behaviour, life opportunities, and economic stability (Boullier & Blair, 2018). The link between ACEs and economic stability is pertinent to this research, which investigates the impact of ACEs on impulse spending in later life. Impulse spending describes when "a consumer experiences a sudden, often powerful and persistent urge to buy something immediately" (Rook, 1987, p. 191). Valence et al. (1988) have suggested three constructs involved in the related concept of compulsive spending: (a) strong emotional activation (an increase in psychological tension), (b) high cognitive control (knowledge that spending will reduce this tension), and (c) high reactivity (a preference for tension reduction rather than a solution to the problem). At present, there is limited research investigating an association between ACEs and impulse spending, though researchers have discovered that ACEs are associated with gambling in later life; Lotzin et al. (2018) found that four out of five gamblers reported experiencing at least one ACE in their lifetime, indicating that ACEs increase the likelihood of becoming a problem gambler by 60%.
Impulsivity is defined as "actions that are poorly conceived, prematurely expressed, unduly risky, or inappropriate to the situation which often results in undesirable outcomes" (Evenden, 1999, p. 348). Whiteside and Lynam (2001) suggested that certain personality traits are associated with impulsivity, such as urgency, lack of premeditation, lack of perseverance, and sensation seeking. Research shows that high exposure to ACEs is related to impulse control difficulties in later life, specifically negative urgency (an impulsive act used to improve one's mood, without thought of the later consequences; Shin et al., 2018). Youn and Faber (2000) have suggested that a lack of control, or impulsive attitude, is a potential contributing factor to impulse spending. Bratko et al. (2013) found that trait impulsivity positively correlated with impulse spending. Research has also shown that specific trait impulsivity elements of cognitive complexity and motor impulsivity predict impulsive spending (Alloway et al., 2016). Other research has shown a relationship with non-planning, attention, and motor impulsivity (Sokić et al., 2020). Together, this evidence indicates that impulsivity may be linked to ACEs and impulse spending, and that those who act impulsively may also struggle to regulate their emotions. Emotional dysregulation is defined as "the impaired ability to regulate and/or tolerate negative emotional states" (Dvir et al., 2014, p. 1). There are various processes involved in emotional regulation, including biological, psychological, and interpersonal mechanisms (Ford, 2005). Emotion regulation involves the ability to control how and when emotions are felt as well as the intensity with which emotions are experienced (Dvir et al., 2014). Research shows that a large majority of individuals who spend impulsively report feeling better after their purchases (Gardner & Rook, 1988). Iyer et al. (2020) contended that a consumer's mood may act as an explanation for the affective and cognitive processes involved in impulse spending, suggesting that there is an emotional aspect that contributes to a need to spend impulsively. In line with this, in a study using retrospective reporting, Ozer and Gultekin (2015) found that mood prior to making a purchase was linked to increases in the likelihood of impulsive spending. There is little research linking childhood adversity and impulsive spending in adulthood. One study found that increased negative mood regulation expectancies, alexithymia, and childhood maltreatment were all independent predictors of compulsive buying (Kaur & Mearns, 2021). This study also showed that negative mood regulation expectancies moderated the effect of maltreatment on impulse spending (Kaur & Mearns, 2021). A small body of literature has investigated the role of both trait impulsivity and emotional dysregulation as overlapping and interacting risk factors for other impulsive behaviours. For example, Jakubczyk et al. (2018) found that those with alcohol use disorders had greater impulsivity via emotional dysregulation. Research on a brief intervention for risky behaviours showed that reductions in risky behaviours were linked to changes in emotional dysregulation rather than impulsivity (Weiss et al., 2015). However, no research to date has examined the role of both impulsivity and emotional dysregulation together in predicting compulsive spending and their relationship with childhood trauma.
In the present study, we sought to investigate the link between childhood trauma and impulsive financial behaviours in later life, and whether this association is accounted for by impulsivity and emotional dysregulation. Little research has investigated the proposed relationships; examining the mechanisms through which ACEs impact impulsive spending is important to extend our understanding of the association between these two constructs and develop possible interventions. We examined three hypotheses: (a) greater childhood trauma will be correlated with more severe impulse spending in later life, (b) emotion dysregulation will mediate the relationship between ACEs and impulse spending, (c) greater impulsivity will mediate the relationship between ACEs and impulse spending. Participants This research included a sample of 243 participants from the general population who were recruited online as well as students participating in exchange for university course credit. Non-university participants were given the opportunity to enter a prize draw to win a £50 voucher for participation. There were no inclusion criteria beyond being aged 18 years and above. Those with or without issues related to impulsive spending, childhood trauma, and mental health were eligible to participate. Ages ranged from 18 to 70 (M = 30.97, SD = 13.83). Most participants were female, aged between 18 and 25 years, and identified as White (see Table 1). Ninety-four participants reported a mental health condition; bipolar disorder was the most common, followed by anxiety and depression (see Table 2). Measures Childhood Maltreatment To assess childhood maltreatment, the Childhood Trauma Questionnaire (CTQ; Bernstein et al., 1994) was used. This is a 28-item, retrospective, self-report tool that measures physical abuse, emotional abuse, sexual abuse, physical neglect, and emotional neglect during childhood. The scale also assesses aspects of the child's environment growing up. Ten items were reverse coded. An example question is: "When I was growing up, my parents were too drunk or high to take care of the family". Participants were asked to respond using a five-point Likert scale (1 = never true to 5 = very often true) to indicate the frequency of these scenarios. Scores were totalled; higher scores indicate greater exposure to childhood trauma. Cronbach's alpha yielded excellent internal consistency (α = .94). Impulsivity To assess impulsivity, the Barratt Impulsivity Scale short form (BIS-11; Patton et al., 1995) was used. This scale is a 14-item, self-report tool used to assess impulsive or non-impulsive behaviours. Specifically, it assesses the personality and behavioural aspects of impulsivity. Six items were reverse coded. Participants responded to questions using a four-point Likert scale (1 = rarely/never, 4 = almost always/always). An example question is: "I squirm at plays or lectures". Higher scores indicate greater impulsivity. Internal consistency in the current sample was good (α = .89). Impulse Spending To assess impulse spending, the Compulsive Buying Scale (CBS; Valence et al., 1988) was used. This is an 11-item, self-report tool used to assess spending habits. Participants responded to questions using a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). An example question is: "When I have money, I cannot help but spend part or all of it". A higher total score indicates a greater tendency towards impulse spending. Internal consistency in the current sample was excellent (α = .94).
Emotion Dysregulation To assess emotion dysregulation, the Difficulties in Emotion Regulation Scale (DERS-16; Bjureberg et al., 2016) was used. This scale is a 16-item, self-report tool used to assess levels of difficulty with emotion regulation. Participants responded to questions using a five-point Likert scale (1 = almost never, 5 = almost always). An example question is: "I have difficulty making sense out of my feelings". Higher scores indicate greater emotion regulation difficulties. Internal consistency in the current sample was excellent (α = .96). Subscale scores were not computed as we sought to examine overall patterns and limit the number of analyses performed. Procedure Participants completed an online survey via Qualtrics. Ethical approval was obtained through the University of Southampton ethics committee. Participants took part in exchange for psychology course credit or were recruited online via social media and organisations for mental health and financial difficulties (which publicised the study through their social media accounts and email lists). All participants gave informed consent. Participants with mental health conditions were included and asked to provide further information regarding their diagnosis, if they wished to do so. See Fig. 1 for a recruitment flow chart. Design and Statistical Analysis This research utilised a cross-sectional, correlational design to explore the link between ACEs and financial impulsivity in later life. A one-tailed Pearson correlation was carried out to investigate whether childhood trauma correlated with impulse spending. Normal distribution was determined through visual inspection of histograms and statistics for kurtosis and skewness, which were in the normal range (−2 to +2) for all standardised measures. A parallel mediation analysis (using PROCESS v4.0; Hayes, 2018) was conducted with childhood trauma as the independent variable, impulsive spending as the dependent variable, and general impulsivity and emotional dysregulation as parallel mediators. Given that some participants had a mental health diagnosis, we reconducted the mediation in the clinical and non-clinical groups separately to examine whether this impacted the results. Kline (2023) recommends 10 to 20 participants per parameter, indicating our sample size was sufficient for the mediation analyses and when divided by clinical vs. non-clinical participants. Results A one-tailed Pearson's correlation was used to assess the relationship between all five scales (see Table 3). Childhood trauma (CTQ) was positively correlated with impulsivity, impulsive buying, and emotion dysregulation (p < .001). [Fig. 1 recruitment flow: 243 participants enrolled; 42 did not answer any questions; 205 remaining; 16 removed for only partial completion.]
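The study ran its parallel mediation in PROCESS v4.0 (Hayes, 2018). As a minimal sketch of the same kind of analysis, the Python snippet below estimates the two indirect effects (via impulsivity and via emotion dysregulation) with ordinary least squares and a percentile bootstrap; the column names (ctq, bis, ders, cbs) and the data frame df are hypothetical stand-ins, not the study's actual variables.

import numpy as np
import statsmodels.formula.api as smf

def indirect_effects(df):
    # a-paths: childhood trauma (ctq) predicting each mediator.
    a1 = smf.ols("bis ~ ctq", data=df).fit().params["ctq"]
    a2 = smf.ols("ders ~ ctq", data=df).fit().params["ctq"]
    # b-paths: mediators predicting impulse spending (cbs), controlling for ctq.
    fit_y = smf.ols("cbs ~ ctq + bis + ders", data=df).fit()
    return a1 * fit_y.params["bis"], a2 * fit_y.params["ders"]

def bootstrap_ci(df, n_boot=5000, seed=0):
    # Percentile bootstrap of both indirect effects.
    rng = np.random.default_rng(seed)
    n = len(df)
    draws = np.array([indirect_effects(df.iloc[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.percentile(draws, [2.5, 97.5], axis=0)  # 95% CI per mediator

An indirect effect is taken as significant when its bootstrap interval excludes zero, mirroring how the mediation findings below are interpreted.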
Discussion This study sought to examine the relationship between childhood trauma and impulse spending in later life, and the possible mediating roles of impulsivity and emotional dysregulation. Childhood trauma showed a weak but significant positive correlation with impulsive spending, indicating that greater exposure to childhood trauma increases the likelihood that an individual will impulsively spend in later life, supporting Hypothesis A. This aligns with previous findings showing that childhood maltreatment is linked to greater compulsive spending (Kaur & Mearns, 2021). The parallel mediation results showed that emotion dysregulation and impulsivity mediated the relationship between childhood trauma and impulsive spending; specifically, greater childhood trauma predicted increased impulsive spending via greater impulsivity and emotional dysregulation (supporting Hypotheses B and C). These findings align with Espeleta et al. (2018), who found that emotion dysregulation mediated the association between childhood adversity and tendencies to engage in maladaptive behaviours. While their findings are not specific to impulse spending, they showed that ACEs are linked to emotion dysregulation, which may lead to difficulties in later life across a range of domains. The current findings also align with Shin et al. (2018), who showed that ACEs are related to impulsivity in later life, as well as Kaur and Mearns (2021), who found that negative mood regulation expectancy moderated the link between childhood adversity and impulsive spending. These findings are also consistent with research showing that mood prior to spending increases the likelihood of impulsive spending (Ozer & Gultekin, 2015). No research prior to the current study has looked at impulsivity as a mediator between ACEs and impulse spending. The present research, for the first time, indicates that all factors are linked, and that emotion dysregulation and impulsivity are mechanisms involved in the relationship between ACEs and impulse spending. Whilst previous research has shown an interaction between impulsivity and emotional dysregulation in other risky impulsive behaviours such as alcohol use disorders (Jakubczyk et al., 2018), the current findings show that both factors are relevant to impulsive spending and appear to explain a link between impulsive spending and childhood trauma. A possible explanation as to why emotion dysregulation acts as a mediator between ACEs and impulse spending is that exposure to ACEs in early childhood can have long-term psychological impacts, leading to negative mood (Jaworska-Andryszewska & Rybakowski, 2019). As previously discussed, Gardner and Rook (1988) found that individuals who are susceptible to impulse spending often engage in the behaviour to improve their mood. The inability to tolerate negative emotions may lead to maladaptive behaviours and coping mechanisms. Impulsivity may act as a mediator between ACEs and impulse spending because exposure to trauma in childhood increases the tendency to act impulsively (Shin et al., 2018). Researchers have proposed a number of potential reasons and mechanisms that may explain why childhood trauma can lead to elevated impulsivity; from an evolutionary perspective, childhood maltreatment is likely to lead to a lack of resources and, therefore, a rush to move forwards quickly in one's life to obtain access to resources (Liu, 2019). Disrupted neurological development may also explain this relationship (Liu, 2019); for example, those with childhood trauma show differing electroencephalography brain activity during experimental impulsivity paradigms (Kim et al., 2018).
Clinically, the present results imply that psychological therapies should seek to reduce impulsivity and improve emotion regulation to mitigate impulse spending in those with histories of early adversity and childhood trauma. Mindfulness reduces trait impulsivity in those with substance use issues (Davis et al., 2019), and a brief mindfulness intervention has been shown to reduce impulsivity in experimental conditions (Dixon et al., 2019). However, the literature is inconsistent, with other research showing that an 8-week mindfulness course did not reduce impulsivity (Korponay et al., 2019). Dialectical behaviour therapy has been shown to improve emotion regulation (Neacsiu et al., 2014; Rozakou-Soumalia et al., 2021) and reduce impulsive behaviours (Jamilian et al., 2014). Future research should adapt these therapies to see if they can reduce or prevent impulsive spending in those who are at high risk due to early childhood adversity. While this research has important implications for those with impulsive spending behaviours, the study is limited by the cross-sectional and correlational design and retrospective reports of childhood trauma, which preclude causality in the mediation analyses; future researchers should examine the present associations longitudinally or experimentally. Nevertheless, research has shown good test-retest reliability of the Childhood Trauma Questionnaire in clinical populations (Shannon et al., 2016). The sample was predominantly female and of White ethnicity, limiting the generalisability of the findings. Reliance on self-report measures could have resulted in socially desirable responses. However, due to the personal nature of the study, it would be difficult to conduct without some aspect of self-report; future research may seek to incorporate reports from others (e.g., on spending behaviours) to see if these align with self-reports. Regarding the sample, there was a high proportion of individuals with bipolar disorder and other mental health conditions, likely because this study was advertised by Bipolar organisations and probably appealed to this group given the high prevalence of impulsive spending within this condition (Fletcher et al., 2013; Richardson et al., 2017). The mediation analyses separated by clinical status showed that impulsivity remained a mediator in both clinical and non-clinical participants, though emotional dysregulation was only a mediator in the non-clinical sample. This is surprising given that emotional dysregulation is common in clinical populations including mood and anxiety disorders (De Prisco et al., 2022; Hofmann et al., 2012). It may be that, as the clinical group are more emotionally dysregulated, there was less variance in the clinical sample on this variable, meaning that mediation could not be shown statistically. Another possibility is that impulsivity is a stronger predictor within the clinical sample; research shows that those with bipolar disorder have elevated levels of impulsivity even outside of acute mood episodes (Newman & Meyer, 2014).
Conclusion Overall, exposure to childhood trauma is associated with increased impulsive spending in adulthood. Childhood trauma increases impulsive spending via increased general levels of impulsivity and emotion dysregulation; these mechanisms are therefore likely to be important targets in psychological interventions to reduce impulsive spending in those exposed to childhood trauma. Further evidence is required to establish causality, and replication of the observed effects in more representative groups will strengthen the findings. Fig. 2 Parallel Mediation Model for the effect of Childhood Trauma on Impulse Spending via Emotion Dysregulation and Impulsivity. Note. Path c' = direct effect; path c = total effect. Estimated path coefficients are unstandardized. *** p < .001 Table 1 Demographic Information
2024-01-07T16:06:37.127Z
2024-01-05T00:00:00.000
{ "year": 2024, "sha1": "8ff6318508b3efe8fed2bced6a2538e951a4cb8b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40653-023-00600-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "164be20ac445baa3c5dcdfe567a06291f9c9cc19", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
216326521
pes2o/s2orc
v3-fos-license
Gender and Digital Teaching Competence in Dual Vocational Education and Training: In recent decades, technological advances have been revolutionizing all areas of society, including the teaching resources and methodologies used in the world of education. Teachers are in the process of adapting to develop the digital skills they need for the use of Information and Communication Technologies (ICTs), a process that must be permanent and in which there are still knowledge gaps undermining its application. This study aims to determine whether this lack of digital skills is influenced by the gender of teachers, for example, whether there is a gender gap in ICT application in teaching, specifically Dual Vocational Education and Training, which is a teaching area that has been growing exponentially in recent years. A descriptive quantitative method has been used for this study with a sample of 1568 teachers of Dual Vocational Education and Training from the Autonomous Community of Andalusia, with data collected through a questionnaire. The results show that while the level of knowledge of ICT resources is medium among this group and is therefore improvable, there are no significant gender differences between teachers with respect to the application of e-skills by teaching professionals, despite the existence in other contexts of a large digital gender gap in new technology professionals. Introduction Knowledge of Information and Communication Technologies (ICT) is increasingly essential in all facets of today's society, whether in the world of work or in everyday life. The development of ICTs, as in all facets of life, has also reached the field of education, contributing positive developments that significantly affect the processes of training and learning [1,2]. That is why European and national institutions have recently developed regulations to promote key skills in digital knowledge and their application to education. In December 2006, the 'European Parliament Recommendation on the Key Competences of Lifelong Learning' was published, one of which is digital competence, which was defined as "the creative, critical and secure use of information and communication (ICT) to achieve the objectives related to work, employability, learning, leisure, inclusion and social participation". At the national level, Spanish legislation on education covers these areas in "Organic Law 8/2013, of December 9, for the improvement of educational quality". It states that ICTs should be used for pedagogical purposes in various areas of the curriculum, in order to promote the inclusion of digital resources and tools that stimulate activities among teachers and students. Advances in technology in teaching improve meaningful learning [3], and lead to increased motivation for students [4,5]. As Sáez López [6] points out, the incorporation of ICTs into the learning process not only significantly improves practical knowledge of applications and programs, but also contributes to skills development and fosters active and autonomous students. The Need for ICT Skills from Dual Vocational Education and Training Teachers As the educational environment becomes the main place to develop digital skills, the figure of the teacher increasingly stands out as a fundamental actor in the process of learning such knowledge [6].
In addition, the digital competence of teachers, understood as all the skills and abilities teachers have to effectively achieve the management and deployment of technology in the educational field, is justified by the proliferation of resources, means and methodologies of new technologies in the classroom [7][8][9]. This means that a teacher needs to have good training in this area. In fact, many researchers have discussed the need to include ICT knowledge in the competencies obtained by education professionals [10]. Training of digital teachers also requires 'permanent training' to improve technological skills and abilities, as ICT resources are constantly evolving [11]. In Spain, with the approval of the Organic Law for the Improvement of Educational Quality (LOMCE) the country's teacher training plans were reformulated, with the Europe 2020 Strategy emerging and positing the need for the acquisition of key skills to achieve quality teaching such as heterogeneity, languages, continuous training and of course ICT integration [8]. Thus, the National Institute of Educational Technologies and Teacher Training (INTEF) classified digital competence in five major areas, which are information and information literacy (A1), communication and collaboration (A2), creativity (A3), security (A4) and problem solving (A5) [12]. Recent research into digital competence and its integration into teaching learning processes has provided some notable insights: • The integration of ICT into schools is often linked to technological infrastructure, its inclusion in curricular designs, availability of resources, teacher training and even the attitude of the teacher towards its use [13,14]. • Teachers do not have the optimal digital skills needed to teach ICT. Many authors such as Bates [15] point out that the main barrier to innovation in the use of technology is the fear of change; most teachers are not comfortable with technology and also feel that students as digital natives have more knowledge than them. • However, at the university level, teachers are not reluctant to use ICT; on the contrary, they enthusiastically employ their pedagogical knowledge of ICT in the classroom [16,17]. • In general, teachers recognize that ICT mastery generates better results in their academic activities [18]. • Personal characteristics, such as gender and age, can influence how teachers adopt different types of teaching innovation [19,20]. Dual Vocational Education and Training is a new modality within vocational training. Dual Vocational Education and Training projects within an education system combine teaching and learning processes in a company and in a training center, and are characterized by the fact that they are carried out in a system of alternation between the education center and the company, with a number of hours or days of stay of variable duration between the work center and the education center. With this new innovative modality, companies can support new organizational models of Vocational Training that are directed towards the search for excellence in the company's relationship with Vocational Training Centers and promote their Corporate Social Responsibility [21]. This study aimed to determine the level of digital competence of Dual Vocational Training teachers (Dual VET) [22,23], as this is a key factor of analysis and study in direct relation to learning outcomes in students, as has already been studied in other branches of education such as universities [17].
As the number of students in Dual Vocational Education and Training has increased in recent years, and with it the number of teachers in this modality of teaching, and given the scarcity of research related to the subject, it is envisaged that this research will help present and future teachers to better understand the elements of a methodology based on the use of information and communication technologies. Gender Gap in ICT Knowledge and Skills This article explores the influence of gender on teachers' digital knowledge. Based on research carried out in recent decades, technology has always been an area dominated essentially by men, with a major underrepresentation of the female gender. The data collected in different studies on the number of women engaged in the design and creation of software for technology companies found a very low level of women specialists [24]. It is clear that ICT-related university studies have low female participation. In addition, although in recent years the percentage of women enrolled in university degrees has been increasing, they are still under-represented amongst engineering teachers [25,26]. New generations of women have been users since ICT's infancy, even in higher percentages than males, but they remain a minority of the people focused on the study, design and development of new technologies. A study conducted by Garrido, Rubio and Valle [27] on the differences between men and women in ICT knowledge and management when they begin their university education indicates that "female students have less mastery than male students in regards to computer and Internet programming, database design, spreadsheets, use of collaborative working software and online help manuals, understanding hardware and software compatibility, and improving multimedia productions". However, although other research shows how men often have a more optimistic vision and attitude than women regarding the use of ICT [28], the research of Garrido et al. [27] points out that these gender distinctions in the "use" of ICTs disappear in the domain of "basic and moderate knowledge" of digital tools. In this context, it can be said that women have equal digital knowledge to that of men at the "user level", but they represent a small minority in specialized studies and senior jobs in the area of computer application development. The "digital gender gap" refers to the distance between men and women in the use of new technologies. Barragán and Ruiz Pinto [29] point out that, in recent years, there have been many technological advances in society, leading to many new technological tools being incorporated into people's lives. Although these technological instruments offer many advantages, it is also true that they manifest new dangers, and even welcome and reproduce certain social threats. One of these threats is the transmission of gender inequality through ICT. The difference in the use of technology between men and women is therefore a social problem that needs to be eradicated in all social spheres and, of course, also in school classrooms at all levels of education. There are many works that point to the need for women to be incorporated into the use of ICTs as a form of the elimination of discrimination in IT. Hence, the importance of including gender equality in ICT management is seen as being high [30]. In theory, the possibilities for male and female access to ICT are gender-balanced [31], mainly due to equality legislation.
Organic Law 3/2007 states an intention to incorporate the principle of equal opportunities for men and women in the design and implementation of all public information society development programs, the intention to promote the full incorporation of women into the Information Society, the intention to promote content created by women in the field of the Information Society and finally, the intention to provide public funding for projects in the field of information and communication technology that include non-sexist language and content [32,33]. Despite this institutional commitment to equality, the differences between women and men in the technological field continue to persist. However, it must be stressed that these inequalities do not currently concern both the presence (access to ICTs) and the professionalization of women in this field. In this context of promoting equality, this challenge highlights the importance of the role of teachers in providing educational interactions for their students. Therefore, this research aims to detect if there is a gender gap among Dual Vocational Education and Training teachers in Andalusia. Objectives of the Study This study is designed to find out the level of ICT skills possessed by Dual Vocational Education and Training teachers. In addition, the gender qualitative variable can be extrapolated to determine the differences in digital competencies between male and female teachers. To this end, a study of two hypotheses was developed: 1. There are no significant differences between men and women in relation to the different variables in the study. 2. There are no significant differences between the subjects in relation to previous ICT training, the previous level of studies, the professional category, the professional family to which they belong and the population size where they teach. We set out three specific objectives: • To examine correlations between study-dependent variables and analyze how variables that are quantitative or qualitative contribute to the Degree of Information and Digital Literacy dimension (the degree of ICT knowledge held by teachers). • Determine whether there are significant differences between participants based on previous ICT training, the previous level of studies, the professional category, the professional family to which they belong and the population size where they teach. • Analyze the sample size needed to detect significant differences. Method This paper advocates the use of a methodological approach of a quantitative descriptive nature. The aim is to be able to describe through statistical tests the educational reality regarding the level of digital competence in the teaching staff of Dual Vocational Education and Training in Andalusia [34,35]. Instrument Responses were collected through the development of an ad hoc questionnaire on the level of digital teacher competence. The initial data frame included 68 columns and 1568 observations, corresponding to the participating subjects, who were Dual Vocational Education and Training teachers. The independent variables were then factored. Finally, regarding the internal consistency of the instrument, Cronbach's Alpha test was applied, which obtained a result of α = 0.875, an optimal value to guarantee the viability of the research.
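The internal-consistency figure quoted above (Cronbach's α = 0.875) is straightforward to reproduce from an items matrix. Below is a small illustrative Python function (the study itself worked in R, and the simulated responses here are purely hypothetical) implementing the standard formula α = k/(k − 1) · (1 − Σσ²_item / σ²_total):

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = questionnaire items.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Illustrative call on simulated Likert-style responses (100 teachers, 10 items);
# real questionnaire data would of course give a different value.
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(100, 10)).astype(float)
print(round(cronbach_alpha(responses), 3))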
Participants A total of 1568 participants, dual-training teachers from the Autonomous Community of Andalusia, participated in the study. Procedure Initially, the outliers and missing values were analyzed, the latter being omitted from the data frame. Two outliers were found, one for the variable age (55) and one for experience (25). In both cases, a manual function was defined in the R software, stating that if the value of either variable was greater than the 95th percentile, it would be imputed by the median, as the median is more robust than the mean. The feature engineering was then carried out. The original dataset was divided into two large sets, training and testing, using the initial_split function of R with a proportion of 85%. This means that the training dataset kept 85% of the original data. This is because the training group was used to model the algorithm and the testing group to evaluate its performance. The Yeo-Johnson transformation, an extension of Box-Cox's transformation, was applied to the metric independent variables, age and experience. Categorical variables were then converted into indicator variables or dummies. Finally, the scores of the different study-dependent variables were added. Before analyzing significant differences, an additional procedure was performed to implement a bin-based method, splitting each dependent variable around its mean: • IDL (all scores equal to or above the average, 25.29, were replaced by 1, otherwise by 0) • CCDR (all scores equal to or above the average, 25.23, were replaced by 1, otherwise by 0) • CDC (all scores equal to or above the average, 25.31, were replaced by 1, otherwise by 0) • DC (all scores equal to or above the average, 28.18, were replaced by 1, otherwise by 0) • CS (all scores equal to or above the average, 25.31, were replaced by 1, otherwise by 0) • PS (all scores equal to or above the average, 25.13, were replaced by 1, otherwise by 0) A characteristic element of this analysis is that these bins had to be converted into categorical variables before they could be factored.
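As a rough sketch of the preprocessing pipeline just described, the following Python translation (the study used R, e.g. rsample's initial_split; all column names here are hypothetical) chains the median imputation of 95th-percentile outliers, the Yeo-Johnson transformation, dummy coding, the mean-split binning and the 85/15 split:

import pandas as pd
from scipy import stats
from sklearn.model_selection import train_test_split

def preprocess(df: pd.DataFrame):
    df = df.dropna().copy()  # omit missing values, as described above
    for col in ("age", "experience"):
        cap = df[col].quantile(0.95)                   # 95th percentile threshold
        df.loc[df[col] > cap, col] = df[col].median()  # impute outliers by the median
        df[col], _ = stats.yeojohnson(df[col])         # Yeo-Johnson transform
    # Categorical predictors to indicator (dummy) variables.
    df = pd.get_dummies(df, columns=["gender", "prof_family"])
    # Mean-split bins for each dependent scale (>= mean -> 1, else 0),
    # stored as categorical so they can be factored in later tests.
    for scale in ("IDL", "CCDR", "CDC", "DC", "CS", "PS"):
        df[scale + "_bin"] = (df[scale] >= df[scale].mean()).astype(int).astype("category")
    # 85/15 training/testing split, mirroring initial_split(prop = 0.85).
    return train_test_split(df, train_size=0.85, random_state=42)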
Results For illustrative purposes, before showing the results of the study object of this article (the digital gap in teacher ICT), it is interesting to present the main results of the generic study on digital competences (Table 1: main statistical parameters for each item in the study). The average age of the participants was 32.4 years and the average experience was 8.4 years. In general, the age of women was higher than that of men. Hypothesis 1 (H1). There are no significant differences between men and women in relation to the different variables in the study. With respect to hypothesis 1, in relation to this gender study, the statistics show the following data (Figure 1): • A weak correlation is observed between the study-dependent variables. This suggests that there are no significant interaction effects between the two. Because the gender variable is nominal, the contingency coefficient and Cramér's V were used. A weak relationship was observed (V = .005). Fisher's paired comparisons indicated that there were no significant differences in relation to the different dependent variables (Figure 2). Based on these results, hypothesis 1 is accepted. Hypothesis 2 (H2). There are no significant differences between subjects in relation to previous ICT training, prior level of studies, professional status, the professional family to which they belong and the population size. To respond to hypothesis 2, the Mann-Whitney U-test was used for ICT because it is a two-level ordinal variable. We also used pairwise post-hoc tests for multiple comparisons of mean rank sums: the Nemenyi test, Dunn's test and, for non-replicated blocked data, the Conover test. The results were as follows: • There were no significant differences in ICT in relation to each dependent variable; that is to say, previous ICT experience does not influence the ICT knowledge and skills identified in teachers. • There were significant differences in L. Stud (factor 3 with respect to 4 and 5, and factor 4 relative to factor 8) in CCDR; there were also significant differences for factor 3 with respect to 4 and 5 in CDC. In other words, there are differences between the previous level of studies and the level of collaboration and communication, and also between both and the ability to create virtual content. • There were significant differences in B. Know (factor 0 relative to factor 1) in IDL, and for factor 2 relative to 3 in CDC. This means that the professional family to which teachers belong influences their ability to create virtual content. • There were significant differences in P. Categ between factors 0 and 1 for PS. Put another way, the teacher's professional category influences their problem-solving ability. • There were significant differences for inhabitants between factors 0 and 3 with respect to CCDR (the population size affects the level of communication). Based on these data, although the differences are not very high, hypothesis 2 must be rejected, while admitting that there are factors that interact between the different variables. Finally, the power of the statistical test was calculated, which is the probability of rejecting the null hypothesis. To this end, the following procedure was used: a new dataset, x, was defined by selecting only the gender variable, DC and ICT.F (with level 0), and another analogous dataset selecting the same type of variables but using ICT.F level 1 instead. Cohen's d was then computed for both datasets (d ≈ 0.1). For the power calculations for t-tests of means, the d value was replaced by the previous d estimate, and a power of 80% was defined. The results showed that 1052 subjects (n ≈ 1052.024) were needed to detect significant differences in the study, a result that is otherwise within the sample limit.
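The power calculation described above can be reproduced, at least approximately, with standard tools. The sketch below uses statsmodels to solve for the sample size of a two-sample t-test at d = 0.1 and 80% power; since the paper does not fully spell out its settings (alpha level, one- vs. two-sample test), the result is indicative rather than an exact replication of the reported n ≈ 1052:

from statsmodels.stats.power import TTestIndPower

# Solve for the number of observations needed to detect an effect of
# Cohen's d = 0.1 with 80% power at the conventional alpha of 0.05.
n_required = TTestIndPower().solve_power(effect_size=0.1, power=0.80, alpha=0.05)
print(f"required observations per group: {n_required:.0f}")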
Discussion and Conclusions The main objective of this research work, as reflected above, was to determine the existence of a gender gap in the ICT knowledge of Dual Vocational Education and Training teachers, that is to say, what was studied under hypothesis 1. In this context, it is necessary to clarify, according to previous studies as well as the results obtained in this study, what is meant by a gender gap. If by gender gap we describe the distance that exists between qualified professionals in utilizing new information and communication technologies, it is clear that it still exists, as other previous studies indicate [20,28,30]. Society therefore has an obligation to alleviate it in order to encourage the inclusion of women in all areas. However, in this study, the aim is to increase the ability of teachers to implement ICT in their teaching methodologies. This qualification does not require high-level knowledge, nor is it necessary to have advanced studies in computer technology; instead, an acceptable level of knowledge as a user of the relevant digital tools is needed. If by gender gap we mean the initial mastery of ICT tools or the degree of qualification at the user level achieved by teachers, then this study, in its hypothesis 1, does not detect significant gender differences, which coincides with other studies such as those of Gil-Juárez, Vitores, Feliu and Vall-llovera [31], or those of Garrido et al. [27]. Teachers at all levels of education need ICT knowledge at the user level to implement digital tools in the classroom, and it seems that they are insufficiently trained in this context. However, there are no gender-significant differences other than in regards to the possession of high levels of professional knowledge. In addition, the possibilities of access to ICT for men and women are gender-balanced [28]. In short, it should be emphasized that, currently, according to research in this area, these inequalities do not concern the presence (access to ICTs) or the professionalization of women in this field. The main conclusions, therefore, are: Firstly, and most importantly, as the main reason for this article, it can be concluded that there are no significant differences between male and female Dual Vocational Education and Training teachers in the ICT knowledge they apply in teaching, as is apparent in the research conducted. This lack of a digital gender gap refers exclusively to the basic knowledge needed to implement new technologies in teaching didactics. Secondly, this quantitative study supported the need for the ICT skills used by teachers, as seen in the pre-existing literature in the introduction section. This study corroborates this idea, taking into account that the research about the level of ICT skills or knowledge of teachers has shown averages around 2.4 points out of four. Therefore, these skills should be improved through training programs in teacher studies and in the lifelong learning policies for teaching workers. Finally, in relation to hypothesis 2, even if it is not the main objective of study in this article, it can be said that certain factors such as the level of previous study, the professional family to which the teacher belongs, their professional category and the population size where they work all influence the variables of collaboration and communication, the ability to create virtual content and the capacity to solve problems. These influences can be studied in other research work. The main limitation of this study is that it is focused on a specific type of teacher: dual-training teachers. However, this was the purpose of this research, that is to say, to limit the teacher sample to one type of teaching in order to support the general idea of a lack of significant differences in the implementation of ICT by all teachers, as there are other works that have investigated this aspect, but without differentiation between different educational stages. As a future line of research, the study could be restricted to other specific educational stages such as early childhood education, university education, job training, etc. In addition, the study could be extended to the other Autonomous Communities, as the present study only focuses on Andalusia.
2020-03-26T10:39:02.084Z
2020-03-24T00:00:00.000
{ "year": 2020, "sha1": "75fa26c2f26eb79388eb612a32e92990ca0a300d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7102/10/3/84/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "beff9904835cff01fbddd14cd496b46bd658d1fd", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
259946100
pes2o/s2orc
v3-fos-license
A qualitative exploration of the psychosocial needs of people living with long‐term conditions and their perspectives on online peer support Abstract Introduction Approximately 20% of people with a long‐term condition (LTC) experience depressive symptoms (subthreshold depression [SUBD]). People with SUBD experience depressive symptoms that do not meet the diagnostic criteria for major depressive disorder. However, there is currently no targeted psychological support for people with LTCs also experiencing SUBD. Online peer support is accessible, inexpensive and scalable, and might offer a way of bridging the gap in psychosocial care for LTC patients. This article explores the psychosocial needs of people living with LTCs and investigates their perspectives on online peer support interventions to inform their future design. Methods Through a co‐produced participatory approach, online focus groups were completed with people with lived experience of LTCs. Focus groups were audio recorded and transcribed verbatim. Reflexive thematic analysis (TA) was conducted adopting a critical‐realist approach and an inductive analysis methodology that sought to follow participants' priorities and concerns. Results Ten people with a range of LTCs participated across three online focus groups, lasting an average of 95 (±10.1) min. The mean age was 57 (±11.4) years and 60% of participants identified as female. The three key emerging themes were: (1) relationship between self and outside world; (2) past experiences of peer support; and (3) philosophy and vision of peer support. Adults living with LTCs shared their past experiences of peer support and explored their perspectives on how future online peer support platforms may support their psychosocial needs. Conclusion Despite the negative impact(s) of having a long‐term physical health condition on mental health, physical and mental healthcare are often treated as separate entities. The need for an integrated approach for people with LTCs was clear. Implementation of online peer support to bridge this gap was supported, but there was a clear consensus that these interventions need to be co‐produced and carefully designed to ensure they feel safe and not commercialised or prescriptive. Shared explorations of the potential benefits and concerns of these online spaces can shape the philosophy and vision of future platforms. Patient or Public Contribution This work is set within a wider project which is developing an online peer support platform for those living with LTCs. A participatory, co‐produced approach is integral to this work. The initial vision was steered by the experiences of our Patient and Public Involvement (PPI) groups, who emphasised the therapeutic value of peer‐to‐peer interaction. The focus groups confirmed the importance and potential benefit of this project. This paper represents the perspectives of PPI members who collaborate on research and public engagement at the mental–physical interface. A separate, independent Research Advisory Group (RAG), formed of members also living with LTCs, co‐produced study documents, topic guides, and informed key decision‐making processes. Finally, our co‐investigator with lived experience (E. A.F.) undertook the analysis and write‐up alongside colleagues, further strengthening the interpretation and resonance of our work. She shares first joint authorship, and as a core member of the research team, ensures that the conduct of the study is firmly grounded in the experience of people living with LTCs. 
and a further 20% experience subthreshold depression (SUBD). 2,3 SUBD is the experience of depressive symptoms that do not meet the diagnostic criteria for MDD. 4,5 In those with LTCs, SUBD has a significant impact on people's lives, including reduced quality of life, poorer reported physical health outcomes and increased risk of MDD. [5][6][7] SUBD is also a key risk factor for major depression, with 42% of patients who have SUBD comorbid with type 2 diabetes or heart disease developing major depression within 2 years. 7,8 Currently, there is no targeted psychological support for people with LTCs who are also experiencing SUBD. To prevent the escalation to MDD, the needs of those with LTCs experiencing SUBD need to be more carefully understood. Online health interventions reportedly increase self-management behaviours and improve wellbeing. 9,10 Studies in patients with LTCs have highlighted improved self-efficacy, adaptive coping and empowerment as benefits of participating in online support groups. 11 Peer support is defined as 'a range of approaches through which people with similar LTCs or experiences support each other to better understand the condition and aid recovery or self-management'. 12 Peer support may take place face-to-face, over the phone or online. 13 Online peer support platforms often embed a psychoeducation element. Psychoeducation interventions are defined as a 'professionally delivered treatment modality that integrates and synergizes psychotherapeutic and educational interventions' 14 and are considered more holistic than traditional medical model interventions. 14
However, there is currently little evidence exploring the effectiveness of online peer support combined with psychoeducation interventions to support people with LTCs experiencing SUBD. Recent findings suggest that online peer communities may offer similar benefits to face-to-face support. 15 A qualitative systematic review considered how people with LTCs describe their experiences with online peer support. The main findings suggested that feelings of reciprocity, social support and access to experiential knowledge were experienced when accessing online peer support. 16 To our knowledge, there have been no randomised controlled trials (RCTs) of online peer support and psychoeducation interventions available to people with a diverse range of LTCs and SUBD (i.e., platforms that are not condition-specific). However, RCTs of face-to-face peer support have shown effectiveness on mental and physical health outcomes for those with LTCs, including people with diabetes, asthma and cardiovascular disease. 17,18 Other research suggests that peer support interventions for those currently experiencing depressive symptoms or higher scores of psychological distress were more effective at reducing depressive symptoms compared to usual care. 19,20 Online peer support platforms for varying health needs are abundant. Yet, there are no online peer support and psychoeducational interventions tailored to support those experiencing SUBD in the context of LTCs. This article is set within the context of a wider project aiming to develop an online peer support and psychoeducation platform for those living with LTCs and SUBD. Intervention mapping has been used to integrate theory and evidence and guide the development of the project. 21 The study reported in this article is nested within the step 'Intervention Mapping: Needs Assessment'. 21 This article aims to explore the psychosocial needs of people living with LTCs and to investigate their perspectives on online peer support interventions to inform their future design.

| Design
A focus group study of the psychosocial needs of people living with LTCs and their perspectives on online peer support.

| Patient and public involvement (PPI)
This article is set within the context of a wider project that is developing an online peer support platform for those living with LTCs and SUBD. An intervention mapping 21 and participatory, co-production approach has been embedded throughout. Three groups were established as part of the participatory design: (1) focus group participants; (2) a separate, independent RAG; and (3) the PDP. The RAG was formed of members also living with LTCs. They supported the study throughout by co-producing all study documents and by collaborating on key decision-making processes. They also co-produced the focus group topic guide with the research team. The PDP was made up of an external design agency, researchers, clinicians, a co-applicant with lived experience (E. A.F.) and participants from the focus groups. The PDP will also be involved in the subsequently planned co-design stages of developing the peer support platform.

| Participants
Participants with LTCs were invited to take part in this study through flyer advertisements circulated through established PPI groups (the ICCPG, the Guy's and St Thomas' PPI group, the King's College Hospital PPI group) and through snowball sampling via these groups (e.g., word of mouth). Inclusion criteria were over 18 years of age, living with an LTC and the ability to give informed consent to participate.
The only exclusion criterion was insufficient English to be able to engage in focus group discussions. Participants were aware that they were being invited to discuss issues such as how their physical health condition affects their mental wellbeing and that the platform was being developed for use among people with SUBD and LTCs specifically. Three focus groups, with 10 people in total, were conducted, exploring the psychosocial needs of people living with LTCs and their perspectives on online peer support. Focus groups were intended to shift the experience of power from the researcher to the group of participants, and to enable participants to feel supported by the group and not isolated in their experiences. 22 Due to restrictions imposed secondary to the 2019 novel coronavirus (COVID-19) pandemic, focus groups were carried out online via videoconferencing platforms and group sizes were reduced due to the online shift. Consultations with the RAG and researchers with experience of online delivery of focus groups informed the choice of platform to ensure optimal engagement. Clear, standardised, step-by-step instructions were provided to participants on how to download, access and use the platform. All participants had the necessary equipment (i.e., a device to take part, a webcam and microphone) and were offered a practice call with a member of the research team.

| Analysis
Focus groups were audio recorded and transcribed verbatim. Transcripts were reread alongside listening to the audio recording to anonymise and check accuracy. Reflexive thematic analysis (TA) was conducted (E. A.F. and H. R.) adopting a critical-realist approach 23,24 and an inductive analysis methodology that sought to follow participants' priorities and concerns. 25 This analysis was co-produced using a participatory approach, and therefore reflexive TA was selected by the authors as most appropriate due to its accessibility and acknowledgement that the authors play an active role in the analysis. 23 The focus groups were not carried out in a social vacuum, as our assumptions and experiences as researchers impact the research we conduct. 23

| Reporting
Reporting was guided by the Standards for Reporting Qualitative Research (SRQR), which consists of a 21-item checklist. 27 The SRQR has been used to ensure standards for presenting qualitative analysis are met, while also allowing the flexibility and approach of this work to be maintained.

| Participant characteristics
Ten people with a range of LTCs participated across three online focus groups. Table 1 provides an overview of participant characteristics. The mean age was 57 (±11.4) years and 60% of those taking part in the focus groups identified as female. The majority (80%) of the participants used technology daily, and 30% had used internet support groups before.

| THEMES AND SUBTHEMES
Throughout the focus groups, a range of experiences were described in relation to the psychosocial needs of people living with LTCs and their perspectives on online peer support. We present three themes: (1) relationship between self and outside world; (2) past experiences of peer support; and (3) philosophy and vision of peer support. Table 2 provides an overview of the themes presented, corresponding subthemes, definitions and evidencing quotations.

| Mind-body separation
Participants felt that healthcare culture generally groups physical and mental care as separate entities, even in the context of LTCs.
This separation was felt in previous experiences of treatments received in healthcare environments, 'when I was diagnosed, mental health issues didn't come into it. You had your condition and that was your condition. But now when we're asked to talk about how we feel … I find it really hard' (focus group 2, participant 2), and was reflected in the way some participants viewed their own health: as two distinct halves of mental and physical. Participants showed awareness of the complex nature of health in certain contexts (e.g., social situations, in the workplace). Despite this, they reported health discussions with clinicians as seeming reductive and more two-dimensional in nature, without acknowledgement from their doctor or nurse that their physical health status was likely to be affected by the condition of their mental health. The discussion of these interactions with clinicians was broad and varied according to participants. For some, the emotional side of living with an LTC was never discussed with their healthcare professional (HCP). Participants reported that clinicians either did not discuss mental health issues and/or did not seem to consider themselves to be in an appropriate role to discuss them, though this was not the case for all.

| Predictable variability
This subtheme was strongly emphasised by participants and captures how participants expect to experience good and bad days with their condition.

[Table 2 fragment — subtheme: Tension between self-reliance and needing help. Definition: wanting to be independent but also the discomfort with having to ask for help when support from others is needed. Evidencing quotations:] 'I've asked for somebody's help to help me go upstairs, um, in, in the tube station to go through the stairs (…) And the person said, oh, I haven't got any money. (…) Can be tough on, on, on your mental health eventually. Because then you feel even more self-conscious and anxious and, um … And, and, and paranoid in a lot of respect.' (focus group 2, participant 3); 'Total strangers who are, like, loads older than me asking if they can help me which is extremely sweet but it makes me feel a bit pathetic.' (focus group 2, participant 1); 'But the thing I, I've noticed the most in regard to mental health and that's sort of relationship within one's self and the outside world, is, um, how would you say? The atmosphere, um, around one in the outer world, I find very unsettling. You know, the, the sort of vulnerabilities and the frailties and the suspicions and all these unsettling things, um, that seem to be within others, uh, affect me.' (focus group 3, participant 1)

Two rationales for this were given. First, the conversation itself was perceived as negative or not solution-focused, or second, it served as a reminder of the participants' own health when they did not want to focus on it. Traditionally, peer support in people with LTCs has been centred around a particular condition, but we found evidence that this approach did not work for everyone. Several participants described encountering attitudes of competitive comparison where symptoms were pitted against each other: 'condition-specific groups […] didn't help because everybody was comparing their back pain to your back pain and that just wasn't helpful' (focus group 2, participant 4).
Finally, while acknowledging that a condition-specific approach could be successful for some, participants pointed out that 'no size fits all' (focus group 3, participant 1) and it was important not to assume a particular initiative could engage all those who wanted support.

| A safe and credible zone
According to participants, successful peer support platforms should be a secure and confidential space, and their development should involve co-production with members of the patient group that they aim to cater for. The need for safety while accessing peer support was a key concern, although there were different definitions of what it meant to be safe in this context. We found that being in a safe space could mean, amongst other things, an expectation of privacy. Interestingly, there was also an emphasis on accessibility to peer support, which in practice could result in a less private space, by virtue of online peer support being easy to find and participate in. The need for accessibility and privacy is concisely summed up here: 'within my culture, it's like a taboo when it comes to mental health. So it's about making the site, […] easy to access' (focus group 3, participant 2). The ease of participating in online peer support was also discussed: 'we can do it from our homes, we can listen to each other, but you haven't got to think about how to get somewhere' (focus group 2, participant 4).

| Transparent motivations
Participants believed that peer support platforms should not feel prescriptive or commercial in nature. One participant explained that it could depend on the motivations of the platform creators (whether commercial or academic) whether the result is an unwanted and prescriptive user experience: 'But things can get corrupted along the way by […] other agendas, shall we just say. And it's very conspicuous in the commercial world' (focus group 1, participant 1). It was felt that a commercial imperative was not inherently negative but was considered likely to impact the integrity or values of a platform, which participants were acutely sensitive towards.

Previous research has similarly reported that service users found changes to their usual mental healthcare worrying, particularly when these changes were not effectively communicated. 30 Participants recognised that online peer support and psychoeducation did not require them to leave home and could therefore reduce the burden of self-management by helping people to feel more connected and supported by others in similar situations.

| Technology becomes an essential skill
This links closely with Griffiths et al. 18 and with the reported benefits of social support and access to experiential knowledge. 16 Whilst research assessing the efficacy of peer support for depression found that peer support interventions were more effective at reducing depressive symptoms compared to usual care, 19 mitigating the potential adverse effects of online peer support is also key. Easton et al. 31 Additionally, future work should explore the potential role that HCPs may have in facilitating online peer support. Their role(s) may be multifaceted, from screening, referral and signposting, to moderating the platform and contributing to the psychoeducational material. Therefore, future work is needed to explore these potential roles and what people with LTCs would view as the most valuable role HCPs may play. Also, a limitation of this work that should be recognised is that we did not use a clinical measure to assess the mental health of participants, so the findings are not specific to those with SUBD.
However, the recruitment flyer was framed under the title of 'Online peer support for preventing depression in people with LTCs: focus groups' and all people recruited to this study were aware that they were being invited to discuss issues such as how their physical health condition affects their mental wellbeing.

| CONCLUSION
Adults living with a range of different LTCs expressed the potential benefits that online peer support may have in supporting their psychosocial needs. They also expressed potential concerns around negative engagement with online peer support, highlighted by their discussions that emphasised the importance of these spaces feeling safe. Based on the shared experience of those who took part in this work and the value of co-production, careful, collaborative consideration is essential to develop the guiding principles of a future peer support platform, to explore potential moderation processes, and to co-produce a moderation policy. Participants' insistence that any online peer support platform needs to be a safe and credible zone highlights the need for platforms to be co-designed with the people who will ultimately use them, ensuring this remains a priority throughout. These findings evidence how important identifying needs at the pre-intervention design stage is to promoting a more purposeful, user-led intervention design.
Occludin, caveolin-1, and Alix form a multi-protein complex and regulate HIV-1 infection of brain pericytes

Abstract
HIV-1 enters the brain by altering properties of the blood-brain barrier (BBB). Recent evidence indicates that among cells of the BBB, pericytes are prone to HIV-1 infection. Occludin (ocln) and caveolin-1 (cav-1) are critical determinants of BBB integrity that can regulate barrier properties of the BBB in response to HIV-1 infection. Additionally, Alix is an early acting endosomal factor involved in HIV-1 budding from cells. The aim of the present study was to evaluate the role of cav-1, ocln, and Alix in HIV-1 infection of brain pericytes. Our results indicated that cav-1, ocln, and Alix form a multi-protein complex in which they cross-regulate each other's expression. Importantly, the stability of this complex was affected by HIV-1 infection. Modifications of the complex resulted in diminished HIV-1 infection and alterations of the cytokine profile produced by brain pericytes. These results identify a novel mechanism involved in HIV-1 infection, contributing to a better understanding of HIV-1 pathology and the associated neuroinflammatory responses.

HIV-1 is known to penetrate into the CNS as the result of systemic infection in a process that includes alterations of the structural integrity of the BBB. 16 One of the most important features of the BBB is the existence of high-resistance interendothelial tight junctions (TJs). They are formed by integral membrane proteins, such as claudins and occludin (ocln), and cytoplasmic proteins (eg, zonula occludens proteins) that act as structural components by limiting paracellular permeability via closing the gaps between adjacent endothelial cells. 17,18 While it is believed that HIV-1 disrupts BBB permeability by functional and structural hindrance of TJs, recent evidence indicates that HIV-1 can also infect brain pericytes. [19][20][21][22][23] Infected pericytes can release the virus into the microenvironment, potentially providing a previously unrecognized mechanism of HIV-1 entry into the CNS directly related to the pathobiology of the BBB. [20][21][22] Ocln is a transmembrane TJ protein 24 ; however, recent evidence has shown an important role of this protein in regulating cellular metabolism through influencing AMPK protein kinase activity, ATP production, and glucose uptake. 25 Importantly, ocln can control the responses of human pericytes to HIV-1 and influence the level of infection. 20 Another membrane protein that is involved both in maintaining BBB integrity and in HIV-1 infection is cav-1. [26][27][28] Cav-1 is the major structural protein of caveolae and lipid rafts, where it plays a key role in the regulation of raft-dependent endocytosis. 29 As such, it is involved in multiple cellular processes that include angiogenesis, signal transduction, inflammation, cellular senescence, lipid regulation, cancer, and apoptosis. [30][31][32][33][34][35] Studies have also related cav-1-mediated endocytosis to epithelial TJ dynamics. 36 One of the essential phases of the viral life cycle is the budding and release of viral particles from the infected cell. Retroviruses, such as HIV-1, evolved the ability to modify the machinery of target cells in order to promote viral egress from the cell.
While it is known that this process involves endosomal sorting complexes required for transport (ESCRT) proteins and pathways, the associated machinery of HIV-1 release is still poorly understood and has never been studied in HIV-1-infected brain pericytes. Alix is an early acting ESCRT factor that plays a direct role in exosome biogenesis 37 and is involved in HIV-1 budding from the cells. 38,39 As a multidomain adaptor protein, Alix binds to the viral Gag protein, which recruits the ESCRT factor to facilitate membrane fission and virion release. 40 The goal of the present study was to evaluate the role of proteins involved in membrane plasticity during HIV-1 infection of human brain pericytes. Our results demonstrate for the first time that cav-1, ocln, and Alix form a complex that is affected as a result of pericyte infection with HIV-1. Importantly, modifications of this complex can regulate the rate of HIV-1 infection and egress from infected cells.

Scrambled (SCR) silencer negative control siRNA Cat# SR30004 (OriGene) was used as nonspecific control siRNA. Pericytes were transfected by Amaxa Nucleofector Technology with 1 µg siRNA or control siRNA per 10^6 cells using the Basic Nucleofector Kit originally designed for primary mammalian endothelial cells (Lonza, Switzerland, EU, Cat# VPI-1001). The next day, cells were washed and allowed to recover in normal pericyte medium. For ocln overexpression experiments, ocln cDNA cloned into the pCMV6 plasmid was obtained from OriGene (Cat# RC206468). As a negative control, the pCMV6 plasmid (OriGene Cat# PS100001) was used. Cells were transfected with 2 µg of ocln overexpressing vector per 10^6 cells or negative control vector by using Amaxa Nucleofector Technology.

| Immunoprecipitation
Pericytes were lysed in NP-40 buffer containing freshly added protease and phosphatase inhibitor cocktail solution (Roche, Basel, Switzerland, EU, Cat# 11697498001). A total of 600 µg of protein lysate was incubated with 1 µg of rabbit anti-caveolin-1 antibody (Thermo Fisher Scientific, Cat# PA1-064) with gentle rotation. After 24 hours of rotation, the samples were incubated for 1 hour with 25 µL protein A/G PLUS-Agarose (Santa Cruz Biotechnology, Cat# sc-2003). To collect the immune complex, samples were centrifuged at 5000 rpm for 3 minutes, and both the pellets and the supernatants were collected. The immunoprecipitates were washed three times in NP-40 lysis buffer and analyzed by immunoblotting. GAPDH was analyzed as a control.

| Immunofluorescence
Pericytes were grown on a glass coverslip coated with 10 mg/mL of poly-L-lysine (Millipore Sigma, Cat# P8920). To visualize cellular membranes, cells were incubated for 20 minutes at 37°C with the CellBrite DiB Blue Cytoplasmic Membrane Dye (Biotium, Fremont, CA, USA, Cat# 30024), followed by washing three times with pericyte growth medium. Cells were fixed with 4% paraformaldehyde for 15 minutes and permeabilized with PBS containing 0.2% Triton X-100 for 15 minutes. To prevent nonspecific binding, cells were washed with PBS and blocked for 1 hour with 3% BSA in TBS. Coverslips were then incubated with rabbit anti-caveolin-1 antibody (Thermo Fisher Scientific, Cat# AP1-064), or mouse anti-Alix (Bio-Rad Laboratories, Cat# VMA00273), overnight at 4°C in a humidified atmosphere. All antibodies were diluted at 1:100 in 3% BSA in TBS.
After the excess of the primary antibody was removed, the slides were washed three times with PBS and incubated for 1 hour with Alexa Fluor 488- (Thermo Fisher Scientific, Cat# A12379) or 647- (Thermo Fisher Scientific, Cat# A31571) secondary antibodies (1:400 in 3% BSA in TBS, Thermo Fisher Scientific). The next day, coverslips were incubated with mouse anti-occludin 594 conjugate antibody

| Multiplex cytokine assay ELISA
The quantitative determination of inflammatory cytokines in culture medium was performed using a Bio-Plex Pro Human Cytokine 27-plex Assay with a Bio-Plex MAGPIX multiplex reader (Bio-Rad Laboratories, Cat# M500KCAF0Y). The assay was conducted in a 96-well plate according to the manufacturer's instructions, and the levels of cytokines were calculated in pg/mL.

| Statistical analysis
Statistical analyses were performed with GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA). Statistical significance between the control and experimental groups in the in vitro analyses was assessed by using two-way ANOVA followed by Tukey's multiple comparisons test or Student's t-test. The significance level was set at P < .05.

| HIV-1 infection alters cav-1, ocln, and Alix expression in brain pericytes
We first evaluated the impact of HIV-1 infection on expression of cav-1, ocln, and Alix in brain pericytes. The cells were mock-infected or infected with HIV-1 for 48 or 72 hours, and the expression of targeted proteins was evaluated by immunoblotting. These experimental time points were based on earlier results from the laboratory that demonstrated that HIV infection in brain pericytes results in a dual-stage response pattern influenced by occludin expression levels and peaks at 48 hours. 11,20 After 48 hours of infection, a significant decrease in ocln levels and an increase in the expression levels of Alix were observed. No changes, however, were found in cav-1 expression (Figure 1A). Conversely, the levels of cav-1, ocln, and Alix were all elevated when pericytes were infected by HIV-1 for 72 hours (Figure 1B), indicating that the levels of these proteins fluctuate in the course of infection. The kinetics of ocln, cav-1, and Alix changes were confirmed by immunostaining 48 and 72 hours post HIV-1 infection (Figure 1C).

| Cav-1, ocln, and Alix form a multi-protein complex in mock-infected and HIV-1-infected brain pericytes
Taking into consideration that (a) the expression of cav-1, ocln, and Alix is modified by HIV-1, (b) all three proteins are involved in HIV-1 infection, and (c) all three proteins also play a role in membrane plasticity, we investigated if they could interact among themselves and form a complex. Indeed, Alix is an all-helical protein with multiple charged patches. It is likely that these charged patches may allow Alix binding to tyrosine motifs in helical bundles in ocln and cav-1 (Figure 2A). The lysates of mock-infected pericytes were incubated with cav-1 antibody, and the precipitates were probed for the presence of ocln and Alix. The results of these co-immunoprecipitation studies strongly suggested that cav-1, ocln, and Alix can form a multi-protein complex (Figure 2B). Indeed, both ocln and Alix were found in cav-1 precipitates. Moreover, no traces of ocln or Alix were determined in the supernatants post co-immunoprecipitation, indicating a high-efficiency binding with cav-1 (Figure 2B). In the next series of experiments, we evaluated if this complex can be affected in pericytes infected with HIV-1.
Brain pericytes were infected with HIV-1 for 48 hours, followed by immunoprecipitation of cell lysates with anti-cav-1 antibody and probing of the precipitates for ocln and Alix. As indicated in Figure 2C, the formation of the cav-1-ocln-Alix complex was not affected by HIV-1 infection, mirroring the results from mock-infected pericytes. To support these results, we next performed an immunostaining assay on both mock-infected and infected pericytes (Figure 2D,E). As illustrated, several focal colocalization spots between cav-1, ocln, and Alix were identified, confirming spatial interactions between these proteins (arrows in Figure 2D,E). The complex is localized primarily to the membrane and submembrane compartments in mock-infected cells. Following HIV-1 infection, the colocalization spots between cav-1, ocln, and Alix appear also in the cytoplasm, potentially reflecting the intracellular movements of these proteins during the infection (Figure 2E).

| Cav-1, ocln, and Alix regulate each other's expression
After noting that cav-1, ocln, and Alix form a complex in human brain pericytes, we investigated the molecular interactions between these proteins and their impact on each other's expression. We also evaluated if interactions between these proteins could be affected in HIV-1-infected pericytes. Pericytes were transfected either with cav-1 siRNA, an ocln expression vector, or Alix siRNA. Cells were then either mock-infected or infected with HIV-1 for 48 hours, and the expression of individual proteins was evaluated by immunoblotting (Figures 3-5). Effectiveness of ocln overexpression is illustrated in Figure 4A,B. Upon transfection with the ocln expression vector, ocln levels increased by 1200% in mock-infected cells and by 900% in HIV-infected cultures. Ocln overexpression significantly decreased protein expression of cav-1 by 32% (Figure 4C) in mock-infected pericytes. Importantly, this impact on downregulation of cav-1 was not observed in ocln-overexpressing pericytes upon HIV-1 infection (Figure 4B). Ocln overexpression did not affect Alix expression (Figure 4C).

FIGURE 1 HIV-1 infection alters cav-1, ocln, and Alix expression in brain pericytes. Pericytes were either mock-infected or infected with 60 ng/mL HIV-1 p24 for 48 h (A) or 72 h (B). The expression of cav-1, ocln, and Alix was evaluated by immunoblotting. GAPDH was used as a loading control. C, Representative immunostaining images of cav-1 (green), ocln (red), and Alix (purple) in pericytes mock-infected or HIV-1-infected for 48 or 72 h. Alix expression intensity was increased by the same factors in all groups to allow for better visualization. Graphs show the mean ± SD from three independent experiments. ****P < .0001, **P = .003, *P < .0449, n = 4-9 per group; scale bars, 20 µm.

Silencing of Alix decreased its protein level by 51% in mock-infected and by 62% in HIV-1-infected pericytes (Figure 5A,B). Alix downregulation significantly decreased ocln protein expression in mock-infected cells by 26%; however, this effect was not observed when pericytes were infected with HIV-1 (Figure 5C). Silencing of Alix did not affect cav-1 protein levels in both mock-infected and HIV-1-infected cells (Figure 5D).

| Modifications of the cav-1, ocln, and Alix complex regulate HIV-1 infection and egress
In order to address the importance of the cav-1-ocln-Alix complex in HIV-1 pathology, we evaluated whether its modifications could modulate HIV-1 infection.
Specifically, p24 was analyzed as the marker of active HIV replication in the supernatants of cell cultures with silenced cav-1, overexpressed ocln, and/or silenced Alix, and infected with HIV-1. The results indicate that all employed manipulations of the cav-1-ocln-Alix complex resulted in decreased p24 levels (Figure 6A). When all three factors were employed together (the last bar in Figure 6A), there was no potentiation of the effect as compared to treatments with these agents alone. However, individual siRNA and the occludin-overexpression construct were employed at half concentrations in this combined treatment (0.5 µg cav-1 and Alix siRNA each, plus 1 µg pCDNA-occludin per 10^6 cells) as compared to single treatments (1 µg siRNAs and 1 µg pCDNA-occludin per 10^6 cells) in order to avoid cell toxicity.

FIGURE 2 Cav-1, ocln, and Alix form a stable complex in mock-infected and HIV-1-infected pericytes. A, Diagram illustrating Alix and its binding partners identified in this study in a structural ribbon representation. The ESCRT-associated protein Alix binds tyrosine motifs via its C-terminal Proline Rich Domain (PRD). Caveolae are made up of oligomers of cav-1 and cavin proteins, and the figure shows a trimeric coiled coil for the Cavin4a HR1 domain. For ocln, the region shown is the cytoplasmic C-terminal region that is known to bind scaffolding proteins.

We then exposed new cultures of mock-infected pericytes to the conditioned media obtained from the HIV-1-infected pericyte cultures from Figure 6A. As shown in Figure 6B, exposure to conditioned media increased p24 production, indicating that HIV-1 particles generated in Figure 6A were released into the cell culture media and retained their infectivity. The most pronounced infection was observed when brain pericytes were exposed to the conditioned medium collected from HIV-1-infected wild-type pericytes. Conditioned media collected from infected pericyte cultures subjected to silencing of cav-1, overexpression of ocln, and/or silencing of Alix were less infectious (Figure 6B), largely mirroring the results from Figure 6A. We also infected wild-type pericytes and pericytes with silenced cav-1, overexpressed ocln, and/or silenced Alix with HIV-1 pNL4-3 engineered to encode interdomain GFP as a transcriptional reporter (Figure 6C). Then, HIV-1 transcription efficiency was quantified by GFP fluorescence (Figure 6D). The results indicate a significant reduction in HIV-1 transcriptional efficiency as the outcome of the employed cav-1, ocln, or Alix modifications. Similar to the results in Figure 6A,B, combined exposure to all three factors did not potentiate the effect on HIV-1 infection as compared to single treatments. However, as in Figure 6A, siRNAs and the ocln overexpression construct were used at half concentrations in the combined experiments in order to avoid cell toxicity.

| HIV-1 infection and/or modulation of the cav-1-ocln-Alix complex affect cytokine production by brain pericytes
HIV-1 infection and/or disruption of BBB integrity promote neuroinflammatory responses; processes that are consistent with the observations that patients living with HIV-1 suffer from inflammatory conditions. 41 Therefore, we evaluated the impact of the modification of the cav-1-ocln-Alix complex on pro-inflammatory responses in mock-infected and HIV-1-infected pericytes. A total of 27 inflammatory cytokines were analyzed in cell culture media using a customized Cytokines Multi-Analyte ELISArray Kit.
The levels of IL-13, GM-CSF, IL-1β, IL-4, IL-8, and MIP-1α were

| DISCUSSION
Prominent alterations of BBB integrity that occur during HIV-1 infection 42,43 may contribute to the development of neurocognitive disorders in infected individuals. However, the direct interactions of HIV-1 with cells composing the BBB, and especially with BBB pericytes, are not well understood. Our group reported that brain pericytes express the main HIV-1 receptor, CD4, and the HIV-1 co-receptors CCR5 and CXCR4, and that these cells are prone to HIV-1 infection. 19 Several studies confirmed these results, identifying brain pericytes as an important player in HIV-1 infection. 11,20,22,23,44 We recently proposed that pericytes might be HIV-1 reservoir cells in the CNS, as they can alternate from productive HIV-1 infection to latent steps of the viral cycle. 22 The importance of these findings stems from the fact that infection of pericytes may provide a gateway for HIV-1 to enter the brain parenchyma. To support this notion, we demonstrated that HIV-1 is released from infected pericytes and can effectively infect new cell populations 20 (Figure 6). Retroviruses such as HIV-1 evolve to modify the machinery of the host cell in order to promote all phases of the viral life cycle, including egress from the cell. While HIV-1 entry and replication are relatively well understood, HIV-1 egress and the role of host cellular machinery in this process are understudied. In particular, no studies were performed on the mechanisms regulating HIV-1 egress from infected pericytes. The current study closes this gap, and it is built on the premise that proteins involved in membrane plasticity, such as cav-1, ocln, and/or Alix, can be subjected to HIV-1-mediated regulation and, in turn, influence the HIV-1 life cycle and active infection. Indeed, a correlation between the time lapsed after HIV-1 infection of pericytes and the expression levels of cav-1, ocln, and Alix was found in the present study. Specifically, ocln expression significantly decreased 48 hours after infection, followed by elevated levels at the later stages of infection, such as 72 hours. Cav-1 expression levels were found to be unaltered 48 hours following HIV-1 infection; however, they were significantly increased at 72 hours. Finally, Alix expression levels demonstrated a direct correlation with the time lapsed following HIV-1 infection, as they steadily increased both 48 and 72 hours after infection (Figure 1).

FIGURE 5 HIV-1 infection obliterates Alix-mediated regulation of ocln expression. Pericytes were transfected with 1 µg Alix siRNA per 10^6 cells and either mock-infected or HIV-1-infected with 60 ng/mL HIV-1 p24 for 48 h. The expression of Alix, ocln, and cav-1 was evaluated by immunoblotting (A) and compared among groups. GAPDH was used as a loading control. Alix silencing (B) resulted in downregulation of ocln in mock-infected pericytes, and this effect was obliterated in HIV-1-infected pericytes (C). Alix silencing did not affect cav-1 levels (D). Graphs indicate the mean ± SD from three independent experiments. ****P < .0001, **P = .003, *P < .0449, n = 4-9 per group.

FIGURE 6 Modifications of the cav-1-ocln-Alix complex regulate HIV-1 infection. A, Pericytes were transfected with cav-1 siRNA (1 µg), ocln expression vector (2 µg), or/and Alix siRNA (1 µg) per 10^6 cells. When treatment with all three agents was combined, they were used at half the concentrations. Transfected cultures were then either mock-infected or HIV-1-infected as in Figure 1, and p24 was analyzed in cell culture media by ELISA. B, p24 levels in the media from pericyte cultures exposed to conditioned media from (A) for 48 h.
C, Confocal microscopy images of HIV-1 pNL4-3-GFP-infected pericytes after ocln overexpression, cav-1 silencing, and Alix silencing. Nuclei were stained with DAPI (blue). GFP fluorescence (green) reflects HIV-1 transcription. D, Quantification of the mean fluorescence intensity (MFI) of GFP from (C). Data represent mean ± SEM from two independent experiments. ****P < .0001, ***P = .0002, **P = .003, *P < .0449, n = 4 per group. Scale bars, 20 µm.

Taking into consideration the coordinated responses of cav-1, ocln, and Alix to HIV-1 infection, we next hypothesized that they may form a structural and/or functional complex that plays a role in the HIV-1 life cycle. Our novel results indicate that cav-1, ocln, and Alix co-immunoprecipitate with each other and colocalize in human brain pericytes (Figure 2). These findings agree with the report that cav-1 can co-immunoprecipitate with ocln 45,46 in MDCK II cells and that these proteins also associate in T84 monolayers. 47 Studies have also indicated that cav-1-dependent endocytosis may play an important role in epithelial TJ dynamics, particularly in the regulation of ocln endocytosis. It was demonstrated that this process followed myosin light-chain kinase (MLCK) activation and appeared to be necessary for TNF-induced regulation of TJ composition and function. 36,48 In addition, cav-1 was shown to be involved in ocln recycling in brain endothelial cells. 49 However, the formation of the cav-1-ocln complex in non-barrier-forming cells, such as pericytes, as well as the presence of Alix in this complex, are novel observations. Our novel results indicated that cav-1, ocln, and Alix not only form a multi-protein complex, but also cross-regulate each other's expression (Figures 3-5). Specifically, the levels of cav-1 influenced ocln expression but not Alix levels. Ocln levels had a strong regulatory impact on cav-1 expression but did not affect Alix. On the contrary, Alix expression influenced cellular ocln levels. HIV-1 infection diminished the regulatory abilities of the individual components of this complex to influence each other's expression. Nevertheless, the cav-1-ocln-Alix complex was preserved even in infected pericytes. In order to further clarify the role of cav-1, ocln, and/or Alix in HIV-1 infection, we investigated if modification of the expression of these proteins can impact viral replication and production of the HIV-1 capsid protein p24, the indicator of active HIV-1 infection. Our compelling and novel data indicate that silencing cav-1 can attenuate the transcriptional efficiency and the active production of p24 by HIV-1-infected pericytes (Figure 6). These results support the notion that the multifaceted functions of cav-1 may be involved in the pathogenesis of HIV-1 infection. For example, it has been reported that HIV-1 infection upregulates the expression of cav-1 in macrophages. 50

FIGURE 7 HIV-1 infection and/or modifications of the cav-1/ocln/Alix complex affect the anti- and pro-inflammatory cytokine profile in pericytes. Pericytes were transfected with cav-1 siRNA (Cav-1-), ocln expression vector (Ocln+), or Alix siRNA (Alix-) as in Figure 5 and either mock-infected or HIV-1-infected for 48 or 72 h. Cytokine levels were analyzed in cell culture media by the Bio-Plex Pro Human Cytokine assay kit.
INF-γ levels decreased to non-detectable (#) concentrations after HIV-1 infection in control, cav-1-, and Alix- pericyte cultures. The data are mean ± SEM from two independent experiments. ****P < .0001, ***P = .0002, **P = .003, *P < .0449, n = 4 per group.

Cav-1 could serve as an early, critical modulator responsible for signaling pathways that result in the disruption of TJ proteins. Contrasting our results, selected reports have demonstrated that cav-1 overexpression inhibits HIV-1 replication in macrophages. 28 Other studies, however, have suggested that cav-1 may serve as an effective target for protecting against HIV-1-related disruption of the BBB. 51 Silencing the cav-1 gene has also been described as having a key role in protecting against HIV-1 by defending against HIV-1 Tat-induced downregulation of ocln. 52 Modification of the cav-1-ocln-Alix complex by overexpression of ocln also resulted in a prominent decrease in p24 levels in HIV-1-infected pericyte cultures (Figure 6). These findings are consistent with our reports 25 in which we characterized ocln as a novel NADH oxidase that inhibits HIV-1 transcription by controlling SIRT-1 expression and activation in human brain pericytes. The inverse relationship between ocln and HIV-1 transcription was further substantiated when it was shown that occludin silencing resulted in 75% and 250% increases in viral transcription in human primary macrophages and differentiated monocytic U937 cells, respectively. 25 Like cav-1, silencing Alix also resulted in a decrease in p24 release and HIV-1 egress by infected pericytes (Figure 6). Alix is a part of the ESCRT machinery, and its interaction with HIV-1 has been described. 53 However, the exact role and function of Alix in the HIV-1 life cycle remain elusive. Some studies suggested that Alix has only a minor effect on HIV-1 budding. [54][55][56] However, other reports described an integral role of Alix in HIV-1 release and increased virus production. [57][58][59] The results of the current study are consistent with the latter observations. Indeed, among the components of the cav-1-ocln-Alix complex, the most prominent impact on decreasing p24 levels was observed upon Alix silencing. Patients living with HIV-1 suffer from inflammatory conditions 41 ; therefore, we evaluated the expression level of cytokines produced by pericytes upon modification of the cav-1-ocln-Alix complex in mock-infected and HIV-1-infected pericytes. Our results indicated that the levels of IL-10, IL-15, INF-γ, and G-CSF were lower in HIV-1-infected pericytes in comparison to mock-infected pericytes. In contrast, IL-6, MCP-1/CCL-2, and RANTES exhibited elevated levels when pericytes were infected with HIV-1. These results concur with the well-established anti-inflammatory role of IL-10 and the pro-inflammatory impact of IL-6, MCP-1, and RANTES, 60,61 whose expression has been correlated with HIV-1 replication. 62 In the case of INF-γ, several studies suggested its anti-inflammatory role in the CNS, 63 and G-CSF has been described either as an anti- or pro-inflammatory cytokine, and/or an immunomodulatory factor that suppresses the production of pro-inflammatory cytokines. [64][65][66] In addition to changes in cytokine profiles due to HIV-1 infection, we also observed profound alterations of cytokine production upon modification of the cav-1-ocln-Alix complex. Specifically, ocln overexpression resulted in elevation of RANTES, INF-γ, G-CSF, MIP-1β, and IP-10, as well as a decrease in MCP-1/CCL-2.
These patterns of change reflect both strong anti- and pro-inflammatory responses. The role of RANTES, INF-γ, MCP-1/CCL2, and G-CSF in HIV-1 infection was addressed above. MIP-1 plays a role as an HIV-1-suppressive factor 67 ; contrarily, IP-10 is pro-inflammatory and induces chemotaxis, apoptosis, cell growth, and angiostasis by binding to the CXCR3 receptor. These results confirm the impact of both HIV-1 infection and the cav-1-ocln-Alix complex on the cytokine profile, which includes stimulation of both anti- and pro-inflammatory cytokines and, as such, is likely to modulate the overall neuroinflammatory responses in HIV-1-infected brains. In conclusion, we have described for the first time that cav-1, ocln, and Alix form a multi-protein complex in human brain pericytes. These proteins can regulate each other, and their interactions are affected upon HIV-1 infection. Additionally, we presented evidence that modulation of the cav-1-ocln-Alix complex results in diminished HIV-1 replication. These findings provide novel cellular mechanisms involved in HIV-1 infection of brain pericytes that contribute to a better understanding of the pathology of brain infection by HIV-1 and the associated neuroinflammatory responses.
MGRA: Motion Gesture Recognition via Accelerometer

Abstract
Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through time domain, frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.

Introduction
The Micro-electromechanical Systems (MEMS) based accelerometer is one of the most commonly-used sensors for capturing the posture, as well as the motion, of devices [1]. Extensive research has been carried out based on the accelerometer data of mobile devices, including phone placement recognition [2], knee joint angle measurement [3], indoor tracking [4] and physical activity recognition [5,6]. Research conducted so far still faces a challenging problem, which is not tackled effectively: signal drift and the intrinsic noise of MEMS-based accelerometers on commercial mobile devices. Moreover, the accelerometer enables a mobile device to "sense" how it is physically manipulated by the user. As a result, a new type of interaction based on motion gestures performed by the users has been proposed, making eyes-free interactions possible without stopping movement. The objective of a motion gesture recognition system is to find out which gesture is intended by the user, which is a spatio-temporal pattern recognition problem. Beyond the problem of accelerometer signal drift and intrinsic noise, motion gesture recognition systems confront three new challenges as follows:

• Intuitively, one cannot promise to perform the same gesture exactly the same way twice. Motion gestures usually vary strongly in execution time and amplitude. Therefore, the gesture recognition system should take into account the motion variances of the users.
• The gesture recognition system should provide on-the-move interaction under certain mobile scenarios, like driving a car or jogging. Non-gestural user movements have an effect on acceleration signals, making gesture recognition more difficult.
• Training and classification of the motion gestures are expected to be executed entirely on the mobile devices. Therefore, the computation and energy costs need to be limited for such self-contained recognition systems.

Previous work on motion gesture recognition can be categorized into two types: template-based and model-based. Template-based approaches store some reference gestures beforehand for each class and match the test gesture with some similarity measurement, such as Euclidean distance [7]. uWave [8] applies Dynamic Time Warping (DTW) to evaluate the best alignment between gesture traces in order to tackle execution time variation. Model-based methods are based on the probabilistic interpretation of observations.
Exploiting the Hidden Markov Model (HMM), 6DMG [9] is generally robust to time and amplitude variations. The recognition accuracy of previous research, however, is degraded by non-gestural user movements, like sitting in a running vehicle, which will be shown in the Evaluation section. Furthermore, most previous works carry out the calculation on a nearby server instead of on mobile devices, which may involve privacy issues. In order to solve all of the above issues, we try to answer one non-trivial question: what are the robust and unique features for gesture recognition hidden in the raw acceleration data? This paper is dedicated to extracting robust features from the raw acceleration data and exploits them to realize gesture recognition on mobile devices. The features should accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural motion (under certain mobile scenarios). In our solution, we first collected 11,110 motion gesture traces of 13 gestures performed by eight subjects across four weeks, among which 2108 traces were collected under mobile scenarios. We then enumerate the feature set based on time domain, frequency domain and Singular Value Decomposition (SVD) analysis. The best feature vector of 27 items is selected under the guidance of mRMR [10], taking both static and mobile scenarios into consideration. We then implement our Motion Gesture Recognition system using Accelerometer data (MGRA) with the best feature vector, exploiting SVM as the classifier. The implementation is on an LG Nexus 5 smartphone for the evaluations. MGRA is first evaluated through off-line analysis on 11,110 motion traces, comparing accuracy with uWave [8] and 6DMG [9]. The results demonstrate that MGRA achieves an average accuracy of 95.83% under static scenarios and 89.92% under mobile scenarios, both better than uWave and 6DMG. The computation and energy cost comparison on the LG Nexus 5 also confirms that MGRA outperforms uWave and 6DMG. The major contributions are as follows:

• A comprehensive gesture set of 11,110 motion traces was collected, containing 13 gestures performed by eight subjects across four weeks, among which 2108 traces were collected under mobile scenarios. Based on this dataset, 34 statistical features are enumerated through time domain, frequency domain and SVD analysis with visualization of their impact on gesture classification.
• We exploit mRMR to determine the feature impact order on gesture classification for static and mobile scenarios, respectively. The best feature vector of 27 items is empirically chosen as the intersection of these two orders.
• The MGRA prototype is implemented with the best feature vector on the LG Nexus 5. We compare MGRA on classification accuracy, computation and energy cost under both static and mobile scenarios to the previous approaches uWave and 6DMG. MGRA achieves the best performance on all metrics under both scenarios.

The rest of this paper is organized as follows. In Section 2, we introduce the technical background on motion gesture recognition. Section 3 illustrates our data collection process and our observations on execution time, amplitude and scenario variations based on our gesture sets. Details on feature enumeration are described in Section 4, and Section 5 presents the feature selection process. Section 6 gives a system overview of MGRA.
Section 7 presents the comparison of the accuracy of MGRA to uWave and 6DMG on two gesture sets, both under static and mobile scenarios. It also shows the time and energy cost of MGRA, uWave and 6DMG on Android smartphones. We conclude our work in Section 8.

Related Work
This section reviews the research efforts on gesture recognition systems based on the accelerometer for mobile devices. The objective of a gesture recognition system is to classify the test gesture (that the user just performed) to a certain class according to the training gesture set (that the user performed earlier). Previous research can be mainly categorized into two types: template-based and model-based. Intuitively, some basic methods measure the distance between the test gesture and the template gestures of each class and select the class with the minimum distance as the result. Rubine [11] made use of a geometric distance measure on single-stroke gestures. Wobbrock et al. [7] exploited the Euclidean distance measurement after uniformly resampling the test gesture to handle execution time variation. To cope with sampling time variations, several methods based on Dynamic Time Warping (DTW) have been presented. A similarity matrix is computed between the test gesture and the reference template, with the optimal path representing the best alignment between the two series (see the sketch after this section). Wilson et al. [12] applied DTW on the raw samples from the accelerometer and gyroscope for gesture recognition. uWave [8] first quantized the raw acceleration series into discrete values, then employed DTW for recognition. Akl and Valaee [13] exploited DTW after applying compressive sensing on raw accelerations. Nevertheless, amplitude variation still affects the recognition accuracy of the aforementioned DTW-based methods. Statistical methods, such as the widely-used Hidden Markov Model (HMM), are based on a probabilistic interpretation of gesture samples to model the gestural temporal trajectory. HMM-based methods are generally robust, as they rely on learning procedures over a large database, creating a model accommodating variations within a gesture class. Each underlying state of the HMM has a particular kinematic meaning and describes a subset of this pattern, i.e., a segment of the motion. Schlömer et al. [14] leveraged the filtered raw data from the acceleration sensor embedded in the Wii remote and evaluated 5, 8 and 10 states for motion model training. 6DMG [9] extracted 41 time domain features from the samples of the accelerometer and gyroscope of the Wii remote. It exploited eight hidden states for building an HMM model with 10 training traces for each class. Support Vector Machine (SVM) is also extensively applied for motion gesture recognition. SVM-based methods usually offer lower computational requirements at classification time, making them preferable for real-time applications on mobile devices. The gesture classification accuracy depends closely on the feature vector for SVM-based methods. Wu et al. [15] extracted the features of mean, energy and entropy in the frequency domain, and the standard deviation of the amplitude as well as the correlation among the three axes in the time domain. As the raw time series are divided into nine segments and each feature is repeatedly extracted from every segment, the total feature set contains 135 items in [15]. In [16], the Haar transform was adopted in the feature extraction phase to produce descriptors for modelling accelerometer data. The feature set contains 24 items.
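To make the alignment idea behind these template-based systems concrete, the following is a minimal sketch of a DTW distance and nearest-template classifier in Python. It is illustrative only: the function and variable names are ours, and it is not the implementation used by uWave or any other cited system.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two acceleration traces,
    each an array of shape (n_samples, 3). Runs in O(len(a) * len(b))."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-sample distance
            cost[i, j] = d + min(cost[i - 1, j],       # stretch trace a
                                 cost[i, j - 1],       # stretch trace b
                                 cost[i - 1, j - 1])   # advance both
    return cost[n, m]

def classify(test, templates):
    """Nearest-template classification. `templates` maps a class label to
    a list of reference traces; the test trace receives the label of the
    class holding its closest template under DTW."""
    return min(templates,
               key=lambda c: min(dtw_distance(test, t) for t in templates[c]))
```

Because the warping path may stretch or compress either trace, execution time variation is absorbed by the alignment; amplitude variation, however, directly inflates the per-sample distances, which is the weakness of DTW-based methods noted above.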
In this paper, we also exploit SVM as the core of MGRA owing to its low computation cost at classification time. Different from previous approaches, we focus on feature enumeration through not only the time domain and the frequency domain, but also SVD analysis. We then select the feature vector of 27 items based on mRMR, taking both static and mobile scenarios into consideration. We realize MGRA entirely on an Android smartphone, the LG Nexus 5.

Gesture Design and Data Collection
This section introduces our gesture collection phase and reveals some key observations on the raw sampling data of motion gestures.

Gesture Collection
We developed a motion gesture collection application on the LG Nexus 5 with a sampling rate of 80 Hz. To free the users from interactions with the touch screen, we rewrote the function of the short press on the power button. In our approach, the application starts recording the accelerometer readings on the first press of the power button and stops on the second press. During the interval between the two presses, the users perform motion gestures. Eight subjects, including undergraduates, graduates and faculty members, participated in data collection, with ages ranging from 21 to 37 (ethical approval for carrying out this experiment has been granted by the corresponding organization). Each subject was asked to perform gestures in his or her own convenient style. We did not constrain his or her gripping posture of the phone, or the scale and the speed of the action. The subjects only participated when available. We asked them to perform each gesture no less than 20 times per collection period. Since a necessary and sufficient number of single gestures is needed for phone command control, we define nine gestures as our classification target. These nine gestures are chosen as combinations of the uppercase letters "A", "B" and "C", as shown in Figure 1. The choice of these gestures is due to two reasons, as follows: (1) Each gesture shares one character with the other three gestures, ensuring the difficulty of recognition. If these gestures can be classified with high accuracy, gestures of other character combinations customized by future end users will keep high recognition precision. (2) The choice of spatial two-dimensional symbol gestures is consistent with a previous survey study [17]. Our survey on motion gesture design among 101 freshmen also indicates that 96.8% of the 218 gestures were created as combinations of English characters, Chinese characters and digits. The statistics of the designed gestures are shown in Table 1. We further collected two English words, "lumos" and "nox" (magic spells from the Harry Potter novel series), and two Chinese characters to build the second gesture set. The gestures in this set share no common parts with each other, as shown in Figure 2. We name the gesture set in Figure 1 Confusion Set and the second in Figure 2 Easy Set. We thereby construct MGRA targeted on Confusion Set and verify the recognition results with Easy Set to prove the gesture scalability of MGRA in the Evaluation section. Taking mobile situations into consideration, our subjects collected the gesture traces not only under static scenarios, but also while sitting in a running car. After four weeks, we collected 11,110 gesture traces, among which 2108 were performed under mobile scenarios. The trace counts for all of the aforementioned gestures are summarized in Table 2.

Observation on Gesture Traces
A motion gesture trace is described as a time series of the acceleration measurements.
a = (a_1, a_2, ..., a_n), where a_i = [a_x(i), a_y(i), a_z(i)]^T is the vector of the x, y and z acceleration components along the phone axes, and n is the number of samples within the whole gesture.

Figure 3 shows the raw acceleration series of three typical motion traces of gesture A performed by one subject. The first and second traces are chosen from static traces, while the third is selected from mobile traces. It is observable that the motion patterns usually vary strongly in execution time, amplitude and scenario type. For example, the difference in execution time between the two static gestures is about 0.5 s. The maximum value on the X-axis of one static trace is about three times that of the other in Figure 3a. Moreover, the values of the mobile trace are commonly larger than those of the two static traces on all three axes. The same phenomena occur for the other subjects. We refer to these phenomena as the time, amplitude and scenario varieties of motion gestures and discuss them in detail in the following subsections.

Time Variety

We count the execution time of all motion gesture traces in Confusion Set. Figure 4a shows the execution time distribution for one subject performing gesture A, which resembles a Gaussian distribution. This is reasonable because a person normally cannot control actions precisely at a sub-second resolution, while whole gestures are finished in no more than 2 s. Figure 4b presents the box plot of the gesture traces in Confusion Set performed by the same subject. It shows that execution time variety exists among all gestures. We divide these nine gestures into three groups according to execution time: gestures AA, AB and BB have a longer execution time, gesture C has a shorter execution time, and the remaining gestures form the third group. Therefore, applying just the time length can distinguish three gesture groups for this subject. This conclusion is consistent with the motion traces of the other subjects.

Amplitude Variety

We calculate the composite acceleration of the raw traces to estimate the overall strength with which a user performs gestures. The composite acceleration is the magnitude of the three-axis acceleration vector:

c(i) = sqrt(a_x(i)^2 + a_y(i)^2 + a_z(i)^2),   (1)

which captures the user behaviour as a whole. The mean composite acceleration of a gesture indicates the strength the subject used to perform it. Figure 5a shows the mean composite acceleration distribution of gesture AB performed by one subject under static scenarios. Similar to the time variation, this distribution also resembles a Gaussian. Figure 5b shows the box plot of the mean composite acceleration amplitude among all motion traces. There is amplitude variety for all gestures performed by this subject, even under static scenarios alone. According to Figure 5b, the nine gestures can be categorized into two groups: gestures AB, B, BB and BC are performed with relatively higher strength than the other gestures. The variety in amplitude also exists for the other subjects.

Scenario Variety

Mobile scenarios cover a wide variety of situations that have no common features except mobility itself. We collect the gesture traces in a running car as an example of a mobile scenario. We drove the car around the university campus to collect mobile traces; the trajectory is shown in Figure 6a. We then compare the composite accelerations under mobile scenarios to the values under static scenarios, shown in Figure 6b. The amplitude of the composite acceleration is higher under mobile scenarios than under static scenarios, mainly because the car contributes to the acceleration values when changing speed.
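As a minimal illustration of Equation (1) and the per-trace statistics used above, the following Python sketch (our own; the function names are not from the paper) computes the composite acceleration and simple time and amplitude statistics of one trace.

```python
import numpy as np

def composite_acceleration(trace):
    """Equation (1): per-sample magnitude of a trace of shape (n, 3)."""
    return np.linalg.norm(trace, axis=1)

def amplitude_features(trace, rate_hz=80):
    """Execution time and mean/std of composite acceleration for one trace."""
    c = composite_acceleration(trace)
    return {
        "duration_s": len(trace) / rate_hz,  # time variety statistic
        "mean_amp": c.mean(),                # strength used for the gesture
        "std_amp": c.std(),                  # how steadily strength is applied
    }
```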
Time, amplitude and scenario varieties have thus been observed in our gesture set, and they can have a direct impact on recognition accuracy. Hence, we aim to extract robust and unique features from the raw acceleration series for gesture recognition, features that can accommodate these three varieties.

Feature Enumeration

Feature extraction is a fundamental problem in the area of pattern recognition. However, few works have been reported that extract effective features and quantitatively compare their quality for gesture recognition. This section illustrates our feature enumeration from acceleration data through the time domain, the frequency domain and SVD analysis.

Time Domain Features

Motion gestures differ from each other in their temporal spatial trajectories, and the trajectory is reflected by time domain features to some degree. We extract the time domain features from the raw acceleration traces. As discussed in Section 3.2.1, the gestures can be classified into three groups based only on execution time length. Therefore, we take the time length as our first feature, labelled { f_1 }.

Due to the difference in spatial trajectories, the number of turns in an action may be used to distinguish different gestures. We use the zero-crossing rates on the three columns of the raw traces to estimate the change of acceleration, reflecting some clues about the spatial trajectories. Figure 7a shows the zero-crossing rate on the X-axis of the nine gestures performed by one subject. It shows that the nine gestures can be categorized into five groups according to the zero-crossing rate on the X-axis: {C}, {A, CC}, {AC, B}, {AA, AB, BC} and {BB}. We thereby treat the zero-crossing rates on the three axes as features { f_2 , f_3 , f_4 }.

The composite acceleration of the raw traces, calculated as in Equation (1), captures the user behaviour as a whole. Here, we calculate the mean and standard deviation (std) of the composite acceleration, as well as of each column. The mean shows how much strength the user applies when performing a certain gesture; the standard deviation reveals how the user controls his or her strength while performing it. Both can therefore contribute to gesture classification to some extent. Hence, we label the eight items of mean and standard deviation as { f_5 , f_6 , ..., f_12 }.

We further calculate the maximal and minimal values on the three axes, respectively, and on the composite acceleration. Figure 8a shows the maximal and minimal acceleration on the Z-axis for the nine gestures performed by one subject, with 40 trials per class. This provides some boundaries for gesture recognition; e.g., Figure 8a shows that the traces of AC, BC and C can be distinguished by these two parameters. Therefore, we take these eight values as features { f_13 , f_14 , ..., f_20 }. The time complexity of extracting the time domain features turns out to be O(n).
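A compact sketch of these time domain features { f_1 , ..., f_20 } follows (our own Python illustration of the computations just described, not the authors' code):

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs where the signal changes sign."""
    return np.mean(np.signbit(x[:-1]) != np.signbit(x[1:]))

def time_domain_features(trace, rate_hz=80):
    """f1-f20: duration, per-axis zero-crossing rates, mean/std and
    max/min of the three axes and of the composite acceleration."""
    c = np.linalg.norm(trace, axis=1)                 # Equation (1)
    cols = [trace[:, 0], trace[:, 1], trace[:, 2], c]
    feats = [len(trace) / rate_hz]                                  # f1
    feats += [zero_crossing_rate(col) for col in cols[:3]]          # f2-f4
    feats += [v for col in cols for v in (col.mean(), col.std())]   # f5-f12
    feats += [v for col in cols for v in (col.max(), col.min())]    # f13-f20
    return np.array(feats)
```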
Frequency Domain Features

We also extract features from the frequency domain by applying the Fast Fourier Transform (FFT) to the three columns of the raw traces. Previous research took all of the low-frequency components directly as features [18], but found the recognition results to be worse than simply calculating the correlation from the original time series. Unlike previous approaches, we assume that people exhibit an implicit frequency while performing certain gestures, and we try to locate it. We select the frequency with the largest energy, rather than the base and second frequencies, to represent the frequency domain features; the second frequency always carries significantly high energy for most gestures containing repeated symbols, e.g., AA. To align the different time lengths of the motion traces, we take the period rather than the frequency as the feature. The frequency feature therefore consists of the period and the energy of that dominant frequency. Figure 8b shows the frequency feature on the X-axis of 40 traces per gesture performed by one subject. Some pairs or groups of gestures can be distinguished by these two parameters; for example, the traces of gestures AA, BC and C have no intersection with one another in Figure 8b. Similar results practically occur on the other axes. Therefore, the period and energy features on the three axes are all adopted into our feature set as { f_21 , f_22 , ..., f_26 }. The time complexity of computing the frequency features is O(n log n).

SVD Features

During the data collection phase, we observed that subjects tend to hold the phone in different postures to perform different gestures. We therefore aim to represent such posture differences. Singular Value Decomposition (SVD) provides a unique factorization of the form A = UΣV*. For a motion trace A of n samples, U is an n × 3 unitary matrix, Σ is a 3 × 3 diagonal matrix with non-negative real numbers on the diagonal and V* denotes the conjugate transpose of a 3 × 3 unitary matrix. The diagonal entries σ_i are the singular values of A listed in descending order. The complexity of SVD on a motion trace is O(n).

V* is the rotation matrix from the actual motion frame to the phone frame, indicating the gripping posture. As the gestures we study are 2D gestures, the first and the second column vectors of V* can be critical for phone posture estimation, labelled { f_27 , f_28 , ..., f_32 }. Figure 9a shows V*_11 and V*_21 of one subject, in which each gesture class contains 40 traces. It shows that the nine gestures can first be divided into two groups on parameter V*_11: {A, AA, AB, AC} and {B, BB, BC, C, CC}. This indicates that this subject uses different phone gripping postures when performing gestures starting with different characters. There is further discriminative power inside each group; for example, gestures BB and CC can be separated on parameter V*_21.

We then dig into the singular values Σ, which represent the user's motion strength in three orthogonal directions when performing actions. Recall that even the same user cannot perform identical gestures with exactly the same strength, and our features should remain usable under mobile scenarios; so we leverage a relative value, called the σ-rate (σ_r), defined as

σ_r(i) = σ_i / (σ_1 + σ_2 + σ_3), for i = 1, 2.

The σ-rate represents how the user relatively allocates his or her strength among the orthogonal directions when performing a gesture. Figure 9b shows features σ_r(1) and σ_r(2) on 40 traces per gesture performed by one subject. In Figure 9b, gestures BC and C clearly do not intersect, showing that σ_r can provide some clues for classifying different gestures. Therefore, we add σ_r to the feature set, labelled { f_33 , f_34 }. As U contains the time series information, which has already been extracted by the time domain and frequency domain analysis, we leave U out of consideration.
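The following sketch illustrates the frequency and SVD features as described above. It is again our own Python illustration; in particular, the σ-rate normalization shown reflects our reading of the definition given in the text.

```python
import numpy as np

def frequency_features(trace, rate_hz=80):
    """f21-f26: period and energy of the largest-energy non-DC
    frequency on each of the three axes."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=1.0 / rate_hz)
    feats = []
    for axis in range(3):
        spectrum = np.abs(np.fft.rfft(trace[:, axis])) ** 2
        k = 1 + np.argmax(spectrum[1:])          # skip the DC component
        feats += [1.0 / freqs[k], spectrum[k]]   # period, energy
    return np.array(feats)

def svd_features(trace):
    """f27-f34: first two columns of V* (gripping posture) plus sigma-rates."""
    # numpy returns Vt = V*; its columns Vt[:, 0], Vt[:, 1] are the first two
    # column vectors of V*. For an (n x 3) trace this SVD costs O(n).
    _, s, Vt = np.linalg.svd(trace, full_matrices=False)
    posture = np.concatenate([Vt[:, 0], Vt[:, 1]])   # f27-f32
    sigma_rate = s[:2] / s.sum()                     # f33-f34: relative strength
    return np.concatenate([posture, sigma_rate])
```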
Altogether, the feature set is composed of 34 features, shown in Table 3. Though we only depict the classification impact of the features on one subject, similar results empirically hold for the other subjects.

Table 3. Feature set.

Feature Selection

Feature selection is another elementary problem for pattern classification systems. We select the best feature vector using the mRMR approach [10] together with validation on Confusion Set. mRMR determines the feature order that minimizes the redundancy among features and maximizes their relevance to the class, so as to minimize the classification error. We run mRMR separately to obtain the feature orders under static scenarios (S) and mobile scenarios (M), and merge the two orders into a single order F_in (Algorithm 1). When adding an item to F, we delete this item from S and M to speed up the membership test "∈"; to keep Algorithm 1 easy to understand, we leave out the details of this speed-up trick. The result F_in(x) is shown in Table 4.

When executing Algorithm 1, there can be some values of x for which no new feature is added to F. For example, when x = 4, F_in(4) = null, because F_in(1 : 4) = F_in(1 : 3) = { f_12 , f_1 , f_31 }. Meanwhile, F_in(x) may add two features together for some values of x, such as x = 22, as shown in Table 4. As illustrated in Algorithm 1, we add these two features according to their impact order under static scenarios: for x = 22, we first add feature f_34 and then f_8. After handling the intersection of the two orders, we delete the "null" entries, and all 34 features are sorted in F_in.

mRMR only provides the order F_in; we further verify the classification results of F_in(1 : x) for x = 1, 2, ..., 34 in Section 7.1. Here, we report that the best classification result comes from F_in(1 : 27), i.e., the best feature vector contains 27 items: the time length, the zero-crossing rates on the X-axis and the Y-axis, the mean and standard deviation of each column, the standard deviation of the composite acceleration, the maximal and minimal values of the accelerations on the three axes and the composite acceleration, the energy on the X-axis, the first and the second column vectors of V*, and σ_r. The time complexity of calculating F_in(1 : 27) turns out to be O(n log n).
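As an illustration of this selection step, the sketch below greedily orders features by an mRMR-style criterion (maximize relevance to the class label, minimize redundancy with already-chosen features), approximated here with mutual information from scikit-learn. It is a simplified stand-in for the exact mRMR procedure of [10] and for Algorithm 1's merging of the static and mobile orders, which are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_order(X, y):
    """Greedy mRMR-style feature ordering for X (n_traces x n_features)."""
    n_feat = X.shape[1]
    relevance = mutual_info_classif(X, y)          # I(feature; class)
    chosen, remaining = [], list(range(n_feat))
    while remaining:
        best, best_score = None, -np.inf
        for f in remaining:
            # Redundancy: mean MI between the candidate and chosen features.
            red = np.mean([mutual_info_regression(X[:, [g]], X[:, f])[0]
                           for g in chosen]) if chosen else 0.0
            score = relevance[f] - red             # relevance-minus-redundancy
            if score > best_score:
                best, best_score = f, score
        chosen.append(best)
        remaining.remove(best)
    return chosen  # feature indices in decreasing mRMR order
```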
Design of MGRA

Due to the constrained computing and storage resources of mobile devices, and concerned with time consumption, we use multi-class SVM as the core of gesture recognition. The design of MGRA is shown in Figure 10, including five major components:

• Sensing: recording the acceleration data while a user performs a motion gesture between two presses of the phone's power button.

Evaluation

This section presents the results of the off-line analysis of the collected trace set and the online evaluation on LG Nexus 5 smartphones. For the off-line analysis, we first determine the key parameters of MGRA, i.e., the feature vector and the training set size. We further compare two other SVM kernels and a different classifier, random forest, to confirm the suitability of SVM with the RBF kernel. Then, we compare the classification accuracy of MGRA with two state-of-the-art methods: uWave [8] and 6DMG [9]. For the online evaluation, we compare the energy and computation time of MGRA, uWave and 6DMG.

Parameter Customization

Feature vector selection has a great impact on classification with SVM, and Section 5 only provides the feature impact order F_in; the length of the feature vector still needs to be chosen. Meanwhile, a larger number of training traces yields better classification accuracy, but constructing a large training set places a heavy burden on end users. Hence, we conduct a grid search for optimal values of these two parameters on the static traces, with the feature number n_f varying from 1 to 34 according to the order F_in and the training set number n_t varying from 1 to 20. For each combination of these two parameters, we train the SVM model with n_t traces per gesture class, randomly selected from Confusion Set under static scenarios for each subject. We only use static traces for training, as end users may prefer to use MGRA under mobile scenarios but are unlikely to collect training traces while moving. After constructing the model, the remaining traces of each subject under static scenarios are used to test the subject's own classification model. This evaluation process is repeated five times for each combination.

Figure 11 shows the average recognition accuracy on static traces among the eight subjects for each parameter combination. From the point of view of the number of training traces, it confirms the tendency that a larger number means better classification accuracy; once the number exceeds 10, however, the improvement is small. The maximum recognition accuracy under static scenarios is 96.24%, with 20 training traces per gesture class and 27 feature items. For the combination of 10 training traces and 27 features, the average recognition accuracy is 95.83%, less than 0.5% below the maximum. Hence, we use 10 traces per gesture class for training in MGRA.

We further dig into the impact of the feature number on recognition while keeping n_t = 10. Table 5 shows the confusion matrices of one subject for feature numbers n_f = 1, 4, 11, 27. When only the standard deviation of the composite acceleration ( f_12 ) is used, the average classification accuracy is just 44.82%; for example, Table 5a shows that 55% of the traces of gesture AA are recognized as gesture AB, and the other 45% as gesture BB. After adding the time feature ( f_1 ), posture V*_22 ( f_31 ) and σ_r(1) ( f_33 ), only 7.5% and 5% of gesture AA are recognized incorrectly as gestures AB and BB, as described in Table 5b, and the average accuracy for four features rises to 83.31%. After adding seven more features, Table 5c shows that only 2.5% of gesture AA is classified as gesture AB, and none of gesture AA is recognized as gesture BB. This experiment shows that MGRA with 11 features correctly distinguishes between AA and BB. It also illustrates that gesture AA is more confusable with AB than with BB, confirming that two gestures are harder to classify when they share common parts. The average recognition accuracy is 92.52% for 11 features. For 27 features, the recognition error is small for each gesture class, as shown in Table 5d. Moreover, all error items lie on the intersection of two gestures sharing a common symbol, except that 2.4% of gesture A is recognized incorrectly as gesture B. The average recognition accuracy reaches 96.16% for this subject with the feature vector of 27 items.

Before fixing the feature number, we should also consider the traces collected under mobile scenarios, where the acceleration of the car is added. For each parameter combination, we test the models constructed from the static traces with the traces collected under mobile scenarios. One may ask: why not construct the SVM model from mobile traces separately? Because in practice it is inconvenient for users to collect a set of training gestures under mobile scenarios; even collecting the training traces in a car for our subjects required someone else to drive. However, it is easy for the user to just perform a gesture to invoke a command on the smartphone under mobile scenarios, compared to interacting with the touch screen.
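The parameter grid search described in this subsection can be sketched as follows with scikit-learn (our own illustration, assuming feature vectors have already been extracted per trace; `f_order` stands for the mRMR order F_in and is a hypothetical variable name):

```python
import numpy as np
from sklearn.svm import SVC

def grid_search(X, y, f_order, max_nt=20, repeats=5, rng=np.random):
    """Mean accuracy for every (n_f, n_t) pair; X rows are 34-item
    feature vectors, f_order is the mRMR impact order F_in."""
    classes = np.unique(y)
    acc = np.zeros((X.shape[1], max_nt))
    for nf in range(1, X.shape[1] + 1):
        Xf = X[:, f_order[:nf]]                    # keep the first n_f features
        for nt in range(1, max_nt + 1):
            scores = []
            for _ in range(repeats):
                train_idx = np.concatenate([
                    rng.choice(np.flatnonzero(y == c), nt, replace=False)
                    for c in classes])             # n_t random traces per class
                test_idx = np.setdiff1d(np.arange(len(y)), train_idx)
                # RBF kernel, as justified by the comparison in Section 7.2.
                clf = SVC(kernel="rbf").fit(Xf[train_idx], y[train_idx])
                scores.append(clf.score(Xf[test_idx], y[test_idx]))
            acc[nf - 1, nt - 1] = np.mean(scores)
    return acc
```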
Table 5. Confusion matrices of MGRA for one subject with n_t = 10 and different n_f (%).

Figure 12 depicts the classification results for the same subject as Figure 11, testing his static models with mobile traces. The highest accuracy, 91.34%, appears when the training set number is 20 and the feature number is 27. For the combination of n_t = 10 and n_f = 27, the recognition accuracy is 89.92%. The confusion matrix for n_t = 10 and n_f = 27 tested with mobile traces is shown in Table 6. Most error items again lie on the intersections of gestures sharing one common symbol, except that a small fraction of gesture B is classified as gestures A and CC, gesture BB as AA, and gestures C and CC as B. Comparing Table 6 to Table 5d, most error cells in Table 6 do not appear in Table 5d. This indicates that accuracy decreases when a classification model constructed from static traces is tested with mobile traces. However, the average recognition accuracy is 90.04%, still acceptable for this subject.

Taking the training sample number as 10, we further plot the average recognition accuracy among all subjects for all feature number values under the two scenarios, respectively, in Figure 13. It shows that the recognition accuracy reaches its maximum at n_f = 27 for the test traces under both static and mobile scenarios. Therefore, we choose F_in(1 : 27) as the feature vector of MGRA.

Comparison with SVM Kernels and Random Forest

We choose RBF as the SVM kernel for MGRA, an assumption based on the central limit theorem. To justify this choice, we further examine the Fisher and polynomial kernels under both static and mobile scenarios, with the feature vector fixed to F_in(1 : 27). Besides, some previous research has demonstrated the classification performance of random forest on activity and gesture recognition [19]; therefore, we also evaluate random forest as the classifier, applying all 34 features under both scenarios.

Table 7 shows the confusion matrices of the Fisher kernel, the polynomial kernel and random forest under static scenarios for the same subject as in Table 5d. It first shows that most error items lie on the intersections of gestures sharing one common symbol, consistent with the error distribution in Table 5d. Table 7a also demonstrates that the Fisher kernel achieves 100% accuracy in classifying the repetition gestures AA, BB and CC; however, its accuracy for all of the other gestures is lower than that in Table 5d. Comparing the polynomial to the RBF kernel, the classification accuracy for gestures A, AA, AB, BB, BC and C is higher than 95% in both Table 5d and Table 7b, but the misclassification between gestures AC and B exceeds 7% in Table 7b for the polynomial kernel, an error corrected in Table 5d by the RBF kernel. The average classification accuracy among all subjects for the three SVM kernels under static scenarios is shown in Table 8. The accuracy of the RBF kernel is the highest, and the other two are close to or above 90% on static traces.
However, the accuracy decreases clearly when the model trained under static scenarios is applied to traces under mobile scenarios for the Fisher and the polynomial kernels, as shown in Table 9a,b. For gestures BC and B, the classification accuracy is no more than 60% for both kernels. Gesture BC is misclassified as AC and B with a high percentage for both kernels, because these three gestures share common parts. For gesture B, 27.5% and 32.5% of traces are misclassified as gesture CC by the two kernels, respectively. These errors arise because this subject performs both gestures as two circles; the only difference is that gesture B comprises two vertical circles, while gesture CC comprises two horizontal circles. Referring back to Table 6, the RBF kernel misclassifies only 2.5% of gesture B as CC under mobile scenarios. The average classification accuracy among all subjects with the three kernels under mobile scenarios is also listed in Table 8. It shows that the Fisher and the polynomial kernels are not robust to scenario change; the RBF kernel, on the contrary, retains an accuracy very close to 90%.

Applying random forest as the classifier, we obtain the confusion matrices for the same subject under both scenarios, listed in Tables 7c and 9c, respectively. The results show that random forest is also not robust to scenario change. Its average accuracy is lower under both scenarios than SVM with the polynomial and RBF kernels, as shown in Table 8. Digging into the Variable Importance Measure (VIM) of random forest averaged over all subjects, Figure 14 shows that F_in(1 : 27) are still the most important features for random forest, except that F_in(28, 29) are slightly more important than F_in(27) under static scenarios, and F_in(28) slightly outperforms F_in(27) under mobile scenarios. We further evaluate random forest on the feature set F_in(1 : 27): the average classification accuracy is 86.17% and 68.32% under static and mobile scenarios, respectively, a decrease of no more than 3.5% compared to the accuracy with all 34 features. This confirms that the feature selection results of mRMR and Algorithm 1 are independent of the classification method and scenario. After this comparison among the three SVM kernels and the random forest classifier, we confirm that SVM with the RBF kernel is suitable as the classifier for MGRA.

Table 9. Confusion matrices of SVM kernels and of random forest, testing static models with mobile traces for one subject.

Accuracy Comparison with uWave and 6DMG

In this section we compare MGRA, uWave [8] and 6DMG [9] on classification accuracy with both Confusion Set and Easy Set. uWave exploits DTW as its core and originally records only one gesture trace as the template, so its recognition accuracy relies directly on the choice of template. For a fair comparison, we let uWave make use of 10 training traces per gesture in two ways. The first is to carry out template selection from the 10 training traces of each gesture class, which can be treated as the training process for uWave. The selection criterion is that the chosen trace has the maximum similarity to the other nine traces, i.e., the minimum average DTW distance to the other nine. We call this best-uWave. The second method is to compare the test gesture with all 10 traces per gesture class and to calculate the mean distance from the input gesture to each of the nine gesture classes. We call this method 10-uWave. 10-uWave has no training process, but it spends much more time on classification.
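Reusing the dtw_distance sketch from Section 2, the two uWave variants just described can be illustrated as follows (our own sketch of the stated criteria, not code from uWave):

```python
import numpy as np

def select_template(training_traces):
    """best-uWave: from one class's training traces, pick the trace with
    the minimum average DTW distance to the other traces."""
    def avg_distance(i):
        others = [t for j, t in enumerate(training_traces) if j != i]
        return np.mean([dtw_distance(training_traces[i], t) for t in others])
    best = min(range(len(training_traces)), key=avg_distance)
    return training_traces[best]

def classify_10_uwave(test, training_by_class):
    """10-uWave: mean DTW distance from the test trace to all training
    traces of each class; return the class with the smallest mean."""
    return min(training_by_class, key=lambda c: np.mean(
        [dtw_distance(test, t) for t in training_by_class[c]]))
```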
For 6DMG, we extract 41 time domain features from both the acceleration and gyroscope samples in the gesture traces. The number of hidden states is set to eight, experimentally chosen as the best from the range (2, 10). 6DMG uses 10 traces per gesture class for training.

We first compare on the test trace set from static scenarios. Table 10 shows the confusion matrices of best-uWave, 10-uWave and 6DMG for the same subject as Table 5d. Comparing Table 5d to Table 10, most classification errors of MGRA still exist under best-uWave, 10-uWave and 6DMG, except that best-uWave corrects MGRA's 2.4% error of recognizing gesture A as gesture B, 6DMG corrects MGRA's recognizing 2.5% of gesture AB as AA and 7.3% of gesture AC as BC, and both best-uWave and 10-uWave decrease MGRA's 7.3% error of recognizing gesture AC as BC to 4.9%. On the contrary, the majority of errors made by best-uWave, 10-uWave and 6DMG are corrected by MGRA. For example, the confusion cell of gesture CC recognized as BC is 7.5%, 7.5% and 5.0% for best-uWave, 10-uWave and 6DMG, respectively, in Table 10, errors which MGRA corrects completely. The average accuracy of best-uWave, 10-uWave and 6DMG for this subject under the static scenario is 89.79%, 90.35% and 91.44%, respectively, all lower than MGRA's 96.14%.

Then, we compare the performance on the test traces of the same subject under mobile scenarios, shown in Table 11. Comparing Table 6 to Table 11, most recognition errors of MGRA still exist for best-uWave, 10-uWave and 6DMG, with only a small fraction of the errors decreased. On the contrary, MGRA corrects or decreases most error items in Table 11. For example, the cell of gesture CC recognized as B is 23.1%, 23.1% and 5.1% for best-uWave, 10-uWave and 6DMG, respectively; referring back to Table 6, MGRA misclassifies only 2.6% of gesture CC as B. The reason gesture CC is misrecognized as B is the same as the reason gesture B is misrecognized as CC, discussed in Section 7.2. Therefore, MGRA clearly outperforms best-uWave, 10-uWave and 6DMG under mobile scenarios: the average recognition accuracy for this subject is 90.04% for MGRA versus 72.50%, 73.30% and 72.72% for best-uWave, 10-uWave and 6DMG, respectively.

We calculate the average accuracy among all subjects for the test traces under static and mobile scenarios separately and depict the results in Figure 15. It shows that MGRA not only achieves higher classification accuracy, but also performs more stably across gestures and scenarios. best-uWave, 10-uWave and 6DMG achieve high accuracy on static traces, but their accuracy decreases by about 10% when tested with mobile traces. Moreover, the recognition accuracy of uWave and 6DMG is gesture dependent, especially under mobile scenarios.

Table 11. Confusion matrices of three classification methods testing static models with mobile traces for one subject: (a) best-uWave; (b) 10-uWave; (c) 6DMG.

Considering the impact of the gesture set on recognition, we further compare the recognition accuracy on the other gesture set, Easy Set. The evaluation process is the same as on Confusion Set. Table 12 shows the confusion matrices for one subject on Easy Set. Table 12a shows that MGRA achieves higher accuracy on static traces from Easy Set than the result in Table 5d from Confusion Set: only 2.5% of gesture C is classified incorrectly as B, and 5% of gesture 美 is classified incorrectly as 游.
The average accuracy is 98.9%. Table 12b shows that MGRA also achieves a higher average accuracy, 95.7%, on Easy Set traces under mobile scenarios. Recall that the features of MGRA were enumerated and selected entirely on Confusion Set; the high classification accuracy on Easy Set therefore confirms the gesture scalability of MGRA.

Here, we report the average accuracy comparison among all subjects and all gestures in Table 13. It shows that all approaches improve their accuracy when moving from Confusion Set to Easy Set. For MGRA, the recognition accuracy decreases by only 5.48% and 2.87% from static to mobile test traces for the two gesture sets, respectively, whereas the other approaches drop by more than 10%. This confirms that MGRA adapts to mobile scenarios better than uWave and 6DMG. Comparing the recognition accuracy on Easy Set under static and mobile scenarios, MGRA holds an accuracy higher than 95% for both. This indicates that if the end user puts some effort into gesture design, MGRA can achieve high recognition accuracy under both static and mobile scenarios.

One question might be raised: why does 6DMG, which exploits the readings of both the accelerometer and the gyroscope, fall short of MGRA? There are basically two reasons. First, MGRA extracts features from the time domain, the frequency domain and SVD analysis, unlike 6DMG, which only extracts features from the time domain. Second, MGRA applies mRMR to determine the feature impact order under both static and mobile scenarios and finds the best intersection of the two orders; mRMR ensures classification accuracy by selecting the features of the highest relevance to the target class and with minimal redundancy.

Online Evaluation

Energy consumption is one of the major concerns for smartphone applications [20], and real-time response is important for user-friendly interaction with mobile devices. Therefore, we conduct a cost comparison of MGRA, best-uWave, 10-uWave and 6DMG on the LG Nexus 5. We measure the energy consumption through PowerTutor [21] and count the training and classification time of the four recognition methods.

Table 14 shows the cost comparison among the four recognition approaches. MGRA has the shortest time and the minimum energy cost for classification. Moreover, the training time of MGRA is less than 1 min, as it extracts altogether 27 features and trains a multi-class SVM model. The training and classification times are much higher for best-uWave and 10-uWave, because they run DTW on the raw time series: a raw series contains 200-400 real values, far more than the 27 items of MGRA, and DTW uses dynamic programming with time complexity O(n^2). 10-uWave has a much longer classification time than best-uWave, because the test gesture must be compared to all 10 templates of each gesture class, i.e., 90 gesture templates. The training time and energy of 6DMG are much greater than those of MGRA, since 6DMG extracts 41 features per gesture trace and trains an HMM model. Besides, the classification time and energy of 6DMG are also greater than those of MGRA, because 6DMG needs both the acceleration and gyroscope sensors.

Conclusions

In this paper, we implement a motion gesture recognition system based only on accelerometer data, called MGRA. We extract 27 features and verify them on 11,110 waving traces from eight subjects.
By applying these features, MGRA employs SVM as the classifier and is realized entirely on mobile devices. We conduct extensive experiments comparing MGRA to previous state-of-the-art works, uWave and 6DMG. The results confirm that MGRA outperforms uWave and 6DMG in recognition accuracy, time and energy cost. Moreover, the gesture set scalability evaluation shows that MGRA can be applied effectively under both static and mobile scenarios if the gestures are designed to be distinctive.
'Next-generation' sequencing becomes 'now-generation'

A report on the Advances in Genome Biology & Technology conference, Marco Island, USA, 2-5 February 2011.

Upstream: innovation in sequencing sample preparation

Many presenters highlighted innovative approaches to simplify sequencing library preparation, expand the range of samples eligible for sequencing, or limit sequencing to specific genomic regions. In describing the Wellcome Trust Sanger Centre's sequencing pipeline, Harold Swerdlow (Sanger Centre, Hinxton, UK) detailed the degree to which amplification-free Illumina library construction reduces the effect of GC composition on sequencing coverage. Andi Gnirke (Broad Institute, Cambridge, USA) described approaches by which base composition coverage bias can be minimized during the amplification phase of Illumina library construction, for instances where circumstances do not permit amplification-free libraries.

Hybrid selection has become a widely adopted means by which to selectively sequence just the exonic portion of the human genome, or other specific regions of interest. By designing oligonucleotides complementary to targeted regions and then hybridizing those oligonucleotides with genomic DNA on a chip or in solution, significant enrichment in sequencing coverage of the resulting captured DNA may be achieved. Many talks reported use of this technology, such as those from Obi Griffith (Lawrence Berkeley National Laboratory, Berkeley, USA) on breast cancer pharmacogenomics and Donna Muzny (Baylor College of Medicine, Houston, USA) on the use of exon or regional capture at Baylor to characterize mutations associated with tumors, autism, and the 1000 Genomes Project, using Illumina or SOLiD sequencing. Hybrid selection is also beginning to be used for sequencing disease genomes from clinical samples, for example, hepatitis C virus (Reinhold Pollner, Gen-Probe, San Diego, USA). Despite the falling cost of sequencing, selective sequencing of genomic regions of interest will probably remain a key application into the future.

Downstream: innovation in data handling and analysis

The utility of cheap, abundant sequencing data is amplified by the research community's growing ability to effectively use and analyze the data. The short-read nature of current-generation sequencing necessitates significantly higher sequencing coverage than long reads for most applications, and so analysis algorithms and hardware must be capable of dealing not only with the shortness of the reads, but also their extreme abundance. Just as early adopters of the Pacific Biosciences RS sequencer have begun to report increasingly long reads from that new sequencing platform, the community seems to have become adept at dealing with the challenges of short reads. Steven Salzberg (University of Maryland, College Park, USA) described a collection of software tools for short-read alignment and analysis (Bowtie, TopHat, and Cufflinks). Bowtie belongs to a new breed of alignment tools that use the Burrows-Wheeler transform, which can compact a human reference genome assembly into as little as 1.1 GB of memory such that it allows ultrafast mapping of short reads to the reference. Although mapping of short reads can benefit from reduced hardware requirements, the hardware needs for short-read assembly continue to grow, especially as short-read assembly algorithms begin to tackle larger genomes.
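To illustrate the idea behind Burrows-Wheeler-based aligners such as Bowtie, here is a minimal Python sketch of the transform itself, purely for illustration: real aligners build a compressed FM-index on top of the transform and never materialize the full rotation list, which would be infeasible for a whole genome.

```python
def bwt(text):
    """Burrows-Wheeler transform: last column of the sorted rotations.
    '$' is a sentinel assumed to be lexicographically smallest."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

# The transform tends to group identical characters into runs, which is
# what makes an index over a reference genome so compact.
print(bwt("GATTACA"))  # -> 'ACTGA$TA'
```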
David Jaffe (Broad Institute) described the algorithmic and computational challenges of generating 'good cheap genome assemblies' as implemented in the new ALLPATHS-LG assembly software. Through a combination of this new assembly software, a large-memory server (512 GB RAM), and a specialized laboratory recipe for genome sequencing involving Illumina paired fragments, Jaffe reported being able to generate draft assemblies for 15 vertebrate genomes with quality approaching that derived from capillary-based sequencing.

Not having access to high-performance computing resources can be a serious impediment to working with next-generation sequencing data, and for many, cloud computing is becoming an attractive solution. Toby Bloom (Broad Institute) described her experiences in migrating the Broad Institute's next-generation sequence analysis pipeline to the Amazon cloud. Cloud-based analysis presented certain difficulties, most notably a need to keep moving data around within the cloud to match disk storage and performance with targeted computing resources, but as cloud computing services continue to evolve and improve it will become a viable and effective analysis solution for small sequencing centers. Whether one computes locally or uses the cloud, data processing and analysis workflows can be complex to navigate and maintain. James Taylor (Emory University, Atlanta, USA) described how the Galaxy workflow system can help. The new development of the Galaxy Tool Shed, akin to an app store for Galaxy, is destined to further popularize the system among the growing community of users.

New sequencing-based discoveries and applications

Now that genome sequencing data are not only inexpensive but intelligible, the question many seemed to be wrestling with at this conference was how to make the data useful outside a research context. Sequencing instrument manufacturers are doing their part to expand the territory of their technology outside of the research laboratory. The Ion Torrent Personal Genome Machine from Life Technologies, which debuted at last year's conference, has now been deployed at a number of sites and boasts an extremely short run time (only 2 hours) for rapid data generation. Illumina debuted their MiSeq machine, which costs significantly less than their other sequencing instruments and is also capable of producing sequencing data in hours rather than days.

Several cancer-themed talks discussed the potential for sequencing to have an impact on disease treatment and prognosis. David Craig (TGEN, Phoenix, USA) discussed data from a clinical trial designed to discover clinically actionable features of breast cancer patients using sequencing, but noted that sequencing and analysis of a patient's genome and transcriptome data required 6 weeks of time. Richard Weinshilboum (Mayo Clinic, Rochester, USA) described functional validation of genome-wide association study (GWAS) signals, in this case for breast cancer. He highlighted the utility of such studies for identifying markers associated with treatment response or tumor sensitivity to drugs such as aromatase inhibitors and selective estrogen-receptor modulators, but noted that the research investment to discover such markers can be massive (30,000 subjects studied over many years) and might not be broadly replicable for diverse diseases and drugs. Eric Boerwinkle (University of Texas School of Public Health, Houston, USA) also described the challenges imposed by follow-up investigations of GWAS hits.
Boerwinkle and collaborators investigated a single nucleotide variant that conferred a relative risk of 1.3 for atherosclerosis. Although small, this relative risk would boost the 10-year risk of coronary heart disease from 15% to 21% in a typical patient, and could lead to a different treatment regimen for approximately 10% of atherosclerosis patients if physicians were to commonly have access to genotype data for this locus. Carlos Bustamante (Stanford University, Stanford, USA) pointed out that current GWASs are biased towards European populations, and the ancestry of a candidate disease-correlated marker in the genome must be taken into consideration before the data can be considered clinically relevant.

Several talks addressed the potential benefits and pitfalls of personalized genome data. James Lupski (Baylor College of Medicine) described his efforts to use whole-genome sequencing to successfully hunt down the genetic variants responsible for a rare neuropathy that has afflicted him and his family, and pointed out that genomic data from one's relatives can be much more useful in such applications than data from the population at large. Joe Beery (Life Technologies, Carlsbad, USA) described a frustrating 14-year journey of working with medical professionals to diagnose and treat a mysterious illness in his twin children. Whole-genome sequencing of the twins using SOLiD technology identified mutations responsible for dopa-responsive dystonia, a pharmacologically treatable disease. Though these examples plainly illustrate the potential value of personalized genome sequencing, clear policies on access to and use of this information by both patients and clinicians are essential. Ellen Wright Clayton (Vanderbilt University, Nashville, USA) advanced the notion that it is imperative to develop a policy framework and decide when genetic data are ready for prime time in medicine. As sequencing data become increasingly cheap, ubiquitous, and informative, clinicians and the public will need to be made aware that, in addition to the genome, a multitude of other factors, such as the physical environment, the microbiome, the epigenome, and pleiotropy, can have complex roles in sculpting phenotype.
Factors contributing to online game addiction in adolescents: a systematic review

INTRODUCTION

Excessive use of games is known to have detrimental effects on the physical and mental health of teenagers, including lack of physical exercise, lack of sleep, and decreased face-to-face social interaction, leading to low social skills [1], [2]. Low social skills in teenagers result in poor peer relationships, an inability to adapt to the surrounding environment, and decreased academic ability leading to low self-esteem, as well as a tendency to behave less normatively and more extremely, causing juvenile delinquency and even mental disorders [3]-[5]. Efforts to prevent game addiction have been made through prevention education, but the education provided focuses only on knowledge, so it remains suboptimal. In addition, the specific factors that contribute to online game addiction are not well established. It is therefore necessary to examine further which factors influence online game addiction in teenagers.

According to a global survey in 2021, more than 2 billion people play video games worldwide, a figure expected to exceed 3 billion in 2023, and 3-4% of gamers experience video game addiction [6]. In 2021, the global prevalence was found to be 3.05%, meaning that at least about 60 million people worldwide suffer from gaming disorder [6]. Teenagers are the age group that experiences the most problems with online gaming addiction [7], [8]. Online gaming addiction occurs in teenagers aged 14-19 years, most of them 16 years old [9], [10]. Male and female teenagers have equal opportunities to play games, although males have more experience and gaming skill than females [11], and gaming addiction is more common among male teenagers [12]. A CNN Indonesia report in October 2021 showed that 19.3% of teenagers in Indonesia are addicted to the internet [13].

Online games are internet-based games that are currently popular owing to the development of attractive, adrenaline-raising content [14]. Various factors are suspected to contribute to gaming addiction, including internal, social, situational, and external factors [8], [15]. The lack of emotional presence from parents is one factor that encourages teenagers to seek refuge or comfort [16].

Games become a coping mechanism for all the problems teenagers face. Like drugs, gaming is used to avoid stressful environments and unpleasant feelings, so that people forget their problems. With internet gaming used as a form of coping with real-world problems, individuals who are already addicted to online games can be understood as escaping from a depressed state [17]. Online gaming addiction can cause social anxiety, limiting teenagers' ability to communicate healthily with their peers, and peer relationships are crucial for teenagers [18]. To control online gaming addiction, it is necessary to know which factors contribute to it; this systematic review therefore discusses those factors.
RESEARCH METHOD

A synthesis of pertinent papers on the causes of teen addiction to online gaming was conducted as a systematic review. The Centre for Reviews and Dissemination and Joanna Briggs Institute criteria, as well as the PRISMA checklist, were utilized to assess the studies' quality. The population, intervention, comparison, outcome, study design (PICOS) approach was used to determine the inclusion and exclusion criteria.

Search strategy and inclusion criteria for systematic reviews

The following electronic databases were used to conduct a thorough literature search for published studies: SCOPUS, PROQUEST, SAGE, and SpringerLink. The reference lists of the identified papers were further searched to find additional articles. To meet the inclusion criteria, studies had to have been written in English within the previous five years (2018-2023), use quantitative or qualitative research designs, and recruit adolescents. The search terms utilized were "game addiction", "adolescent", "predictor", "reason", "factor", "variable", "determinant", "element", "component", "aspect", "belief", "attitude", "influence", and "effect". The inclusion and exclusion criteria are presented in Table 1 and were used to select the articles found through the search strategy.

Study selection, data extraction, and management

Full texts were examined with respect to publication year, the database searched, the population examined, and the factors influencing online game addiction identified. The full contents, abstracts, and titles of the articles were independently reviewed by the authors. Based on relevance to the subject, the calibre of the research, the strength of the evidence, and the other inclusion and exclusion criteria, the authors evaluated the full-text versions of candidate articles before deciding whether or not to include them in the review. Each retained article was evaluated, and the most important details were compiled into evidence tables summarizing the research methodology and conclusions of each piece. Additional tables provided a summary of the approach, findings, and suggestions. Each study's risk of bias was evaluated using the technique outlined in [19]. Disagreements were again resolved through discussion.

Outcome measures

The outcome measures focus on the field of online games: we selected articles that consider the factors influencing online gaming addiction, with adolescents as the target population.

Study characteristics

Based on the search results, the characteristics of the identified articles were obtained. Twenty-five articles met the inclusion criteria for review, as shown in Figure 1. The evidence supports the contribution of adolescent factors, parental factors, and environmental factors to online game addiction. The included articles are presented in the PRISMA flow diagram.

Adolescent characteristics

3.2.1. Gender

Playing online or offline games does not discriminate by gender. This is consistent with research explaining that both males and females have equal opportunities to play games, although males have more experience and gaming skill than female adolescents [11], [20]. The tendency to play games is found not only in male but also in female adolescents, although some studies state that game addiction is more commonly experienced by male adolescents [12], [21], [22].
Age

Based on age, among 14-19-year-old adolescents in junior high and high schools, the majority of respondents who played games were 16 years old. According to several studies, the prevalence of game addiction peaks at age 16, within the age ranges of 13-29 years and 16-21 years [10], [23].

Factors influencing online game addiction in adolescents

Several contributing factors can increase the risk of adolescents experiencing online game addiction: i) adolescent factors, consisting of cognitive factors, adolescent life satisfaction, and duration of smartphone use; ii) parental factors, consisting of parenting and communication, parental support, and parental income; and iii) environmental factors, consisting of friendship relationships and the school environment, which can cause academic stress at school.

Adolescent factors

Online game addiction is influenced by several factors, including adolescent factors, parental factors, and environmental factors. Factors within adolescents include cognitive factors, adolescent satisfaction with life, and the duration of smartphone use. The parental factors include parenting, communication, parental support, and socioeconomic conditions. Environmental factors relate to the conditions surrounding adolescents, such as teachers, friends, and the school environment. A fuller explanation of the factors contributing to online game addiction follows.

Cognitive factors

One form of maladaptive cognition is constantly thinking and worrying about one's excessive internet use, resulting in continuous preoccupation with the internet [24]. Maladaptive cognition is also related to self-concept: feeling like nobody in the real world, but like someone meaningful in the online world. The behavioural implications that arise include creating and controlling online profiles and entering various online games [25]. Internet gaming players feel more valued and successful in the online gaming world than in the real world, so they feel very disturbed when online games are not available [26].

The maladaptive beliefs within this cognitive factor are categorized as follows: excessive evaluation of game rewards and identity, inflexible rules and biases that arise in game situations, excessive dependence on games to meet self-esteem needs, and playing games as a method of gaining social acceptance [27], [28]. These studies show a positive linear relationship between maladaptive game cognitions, especially maladaptive rules about games and game-based self-esteem, and symptoms of game addiction [29], [30].
Adolescent life satisfaction

Adolescent life satisfaction relates to the happiness and well-being felt by adolescents; adolescents who do not feel life satisfaction experience sadness and depression [31]-[33]. Playing games is an entertaining, fantasy-like activity that allows interaction with other players unconstrained by time and place. For depressed individuals this is a new and very attractive experience, so they are drawn to it, which can then lead to addiction [12]. Several studies describe motivations for internet gaming, including entertainment and pleasure, emotional coping, seeking challenges, and escaping from reality [34]. Players who play to avoid dissatisfaction [35] may exhibit indications of withdrawal symptoms, which they try to overcome by playing compulsively.

The use of internet gaming as a coping mechanism for problems in the real world is in line with the explanation that individuals who already use online games excessively can be understood as escaping from depression [17]. The addiction can begin when someone increasingly prioritizes online gaming over other interests and greater commitments, regardless of the negative consequences [36], [37]. Online games typically contain and support particular features and components, such as comprehension of the plot, distinctive simulations, versatility, competition, and intriguing plots or stories, which lead to addiction.

Shyness is defined as a form of discomfort with oneself (feeling strange, overly self-conscious) and a tendency to become someone else to fit in with the desired social environment [38]. Shy individuals tend to feel anxious and insecure because they feel judged and fear rejection by others when establishing direct interpersonal relationships. In the virtual world, shy individuals are free to reinvent themselves (disinhibition) because anonymity prevents others from knowing their actual physical appearance and social relationships. Several studies show that shy individuals tend to prefer relationships formed through the virtual world, in addition to using it as entertainment media. In reality, entertainment and communication are the main components of online games; thus, shy individuals can fulfil their need to depend on others through internet games.
Smartphone usage duration

Teens are more at risk of smartphone addiction than adults, as they have weaker self-control over smartphone usage. Teens with working parents may be at particular risk, possibly because they are unsupervised after school and use their smartphones without rules or guidelines [29]. Smartphone addiction is one driver of game addiction in teens, so the two are interrelated. Online gaming has evolved into a significant life activity that affects how people think, behave, and react. The sensation of quiet and tranquillity that players experience while playing is what makes gaming feel like an escape. Individuals gradually play games more regularly and invest more energy in gaming; when gaming is suddenly reduced or stopped, serious feelings of depression and anger arise, along with distressing emotional effects such as shaking. This makes teenagers keep trying to meet their gaming needs, because gaming gives them a temporary sense of calmness (as a distraction) [37].

Parental factors

Parent-child relationships, communication, and family cohesiveness, as well as family support, are important in forming child-family bonds so that children do not feel lonely. A lack of cohesiveness in the family environment will cause children to seek comfort in other ways, such as playing online games. The parental factors that contribute to gaming addiction are parenting and communication, parental support, and parental income.

Parenting, parental communication, and parental support

The most relevant parenting and parental communication factors related to gaming addiction are harmful family relationships, difficulties with family cohesion [39], parental mental health problems, and the absence of rules for internet gaming use [40]. Three aspects of parental psychological control are linked to addictive behaviour [38], [41]. Parental psychological control, a subset of parental control, can be defined as parental behaviour that interferes with and manipulates a child's thoughts, feelings, and attachment to their parents. Such parental practices, including instilling guilt, withholding love, and asserting power, can be categorized as manipulative and psychologically repressive [42]. Guilt induction is parental behaviour that makes a child feel guilty for not complying with the parents' requests.

Parents who withdraw their affection and attention from their children until the children's performance matches their standards are said to exhibit love withdrawal. Assertion of authority is parental behaviour in which parents restrict their child's ability to express his or her feelings and opinions. Parental psychological control differs significantly from parental behavioural control [43]: the first disrupts a child's psychological development by affecting their thoughts and feelings, whereas the second aims to regulate or manage the child's behaviour.
Individual addictive behaviour is thought to be influenced by psychological parental control. Three theories have been widely presented to explain this association. The first theory (self-determination theory) asserts that self-determination enhances or weakens teenagers' motivation and intrinsic internalization, influencing their autonomy, identity, and competence and prompting dysfunctional smartphone and internet use to meet their psychological needs [43]. The second theory, known as social bond theory, holds that parental psychological control is associated with lower levels of adolescents' social bonds, which obstruct the development of interpersonal relationships and, thus, increase their degree of smartphone and internet addiction. According to the third theory (ecological systems theory), parental psychological control impacts children's personal needs. When children's needs do not match their parents' expectations, they exhibit mental instability. In this instance, the youngsters may be predisposed to addictive behaviour.

Positive family factors, such as a positive perception of the family climate, warmth within the family, or closeness between parents and children [39], additionally act as protective factors against video game addiction [35]. Game addiction is associated with low emotional and affectionate family relationships. Game addiction is also associated with higher motor impulsivity and lower family adaptability and cohesion. Remission is associated with decreased anxiety and hostility and increased adolescent emotional stability. These findings suggest that emotional well-being and family adjustment may be relevant to the effective management of game-playing behaviour [40], [44].

Socio-economic conditions

Newly emerging family types (multicultural/dual-income) affect online game addiction [45], delinquency, and adolescents' motives for engagement in online gaming [46]. Teenagers from dual-income homes scored much higher on all delinquency and addiction risk markers. Moreover, young people from multicultural families scored much higher on the addiction factor "mood modification". Finally, young people from dual-income families are driven to play online games to pass the time, whereas youths from multicultural families play internet games to socialize.

Environmental factors

A conducive classroom atmosphere, support from the teacher, and good relations between students are significantly negatively correlated with adolescent game addiction [37], [41], [47]. Internet accessibility and peer relationships are associated with factors that reinforce adolescent game addiction [48]. In other words, when adolescents feel positive support from teachers and other students, and do not have contact with deviant friends, the risk of online game addiction is reduced [49].
CONCLUSION

The factors influencing online game addiction in adolescents include adolescent factors, parental factors, and environmental factors. This study yields important information about the factors that lead adolescents to become addicted to online games. These findings are expected to serve as a basis for parents and adolescents to identify the sources of the problem and minimize its negative impacts. Nurses, as health care providers carrying out health promotion, are expected to be able to identify the factors that cause online game addiction in adolescents and to develop interventions that help adolescents escape online game addiction, achieve better outcomes, and build better social skills.
Non-invasive genetic study and population monitoring of the brown bear (Ursus arctos) (Mammalia: Ursidae) in Kastoria region – Greece

The brown bear (Ursus arctos) in Greece is considered endangered, but little is known about the genetic status and the exact size of local populations. Non-invasive genetic sampling was used in this study to investigate the genetic diversity and genetic structure of the brown bear population in the Kastoria region (northwest Macedonia, Greece) and to estimate its population size. Estimation of demographic parameters was based on innovative, well-evaluated methods that can provide estimates from a single sampling session. DNA was extracted from hair, scat and blood samples, and subsequent amplification of 10 microsatellite loci allowed the identification of a minimum number of 75 living bears in the study area, while the mark-recapture-based analysis resulted in a point estimate of 219 individuals. Relatively high diversity values, lack of heterozygosity deficiency, as well as the estimated effective population size, support the Kastoria bear population having good conservation status.

Introduction

The brown bear (Ursus arctos), the largest carnivore in Europe, has suffered severe population bottlenecks in the past, mainly because of human persecution and habitat degradation, resulting in shrinkage of both its geographical range and abundance. Despite these demographic and habitat pressures, brown bear populations in Greece have managed to survive and the population appears to be stable at around 190-400 individuals, split into two independent populations (Mertzanis et al. 2009a; Karamanlidis 2011). The most significant population, in the western part of the country, is part of the Dinaric-Pindos biological population (one of the largest in Europe, estimated at 2800 individuals), and the smaller one in the northeastern part of the country is part of the Eastern Balkan population (Linnell et al. 2007). Significant conservation actions and measures during the last two decades, mainly under the LIFE Programme, seem to have substantially contributed to bear population recovery in Greece. Recent re-colonization of areas of historical distribution clearly indicates positive population trends at a local scale (Mertzanis et al. 2009b). Nevertheless, the brown bear is still considered endangered in Greece, mainly because of human-related threats such as poaching, traffic fatalities, or habitat fragmentation due to large infrastructure development, which continue to affect its survival potential (Mertzanis et al. 2009a). Census population size (N_c), effective population size (N_e) and genetic diversity are significant parameters for assessing population status. It is well known that small populations are more vulnerable to demographic and environmental stochasticity, genetic drift and inbreeding, which increase the probability of extinction (Soulé 1987). Reliable estimations of population size and evaluation of the genetic status are crucial for conservation planning and the proper management of endangered species or populations. Despite the conservation status and the necessity for targeted management projects, very few systematic studies have been conducted so far at a local scale in Greece for bear populations (e.g. Mertzanis 1994; Karamanlidis et al. 2012). The bear population of the Kastoria region is considered to be a significant part of the larger and geographically more extended bear population of the Pindos mountain range in western Greece.
During recent years, increased bear-human interference incidents in this region are probably related to an increase in bear population density, and have multiplied bear-human conflict situations. Moreover, the recent construction of the Egnatia highway segment "Siatista-Krystallopigi" (KA45) added a new threat to the survival of the indigenous bear population because of numerous lethal bear-vehicle collisions. Lack of appropriate mitigation structures, as well as the existence of a non-bear-proof highway fence, are considered to be the main reasons for these accidents. These problems, combined with a lack of knowledge on basic bear population parameters in Kastoria, have made the systematic monitoring and evaluation of its conservation status a necessity. Bears are difficult to monitor because they are elusive, solitary and occur at low densities. Non-invasive genetic sampling (NGS) has been proposed as a reliable alternative sampling method not only for the genetic study of rare or cryptic animal species but for the estimation of their abundance as well (Kohn and Wayne 1997; Kohn et al. 1999; Bellemain et al. 2005; Waits and Paetkau 2005; Luikart et al. 2010). The main advantage of NGS is that no capturing or handling of animals is required. Although at first there were significant concerns about the genotyping reliability of NGS and the inflating effect that genotyping errors may have on DNA-based abundance estimates (Taberlet et al. 1996; Gagneux et al. 1997; Creel et al. 2003; McKelvey and Schwartz 2004), numerous studies have shown that NGS can provide reliable and accurate information (e.g. Lathuillière et al. 2001; Waits and Paetkau 2005; Sawaya et al. 2011). As a result, most of the recent publications concerning ecology, demography, population genetics or phylogeography of bears have incorporated NGS methods (Gervasi et al. 2008; Pérez et al. 2009; De Barba et al. 2010a; Sawaya et al. 2012; Schregel et al. 2012). In addition, Karamanlidis et al. (2010, 2012) tested the efficiency of implementation and reliability of these methods on brown bear populations in Greece and proposed specific protocols. The purpose of this study, which is part of an ongoing LIFE project (LIFE09 NAT/GR/000333-Action A3), was twofold: (1) to estimate the genetic diversity and describe the genetic structure of the bear population in the Kastoria region, and (2) to estimate the population size (census and effective) based on NGS and genetic mark-recapture data. The results of this study are expected to have direct implications in the management of bear populations in the study area, while this paper can assist conservation efforts for the species in Greece, providing methodological and practical guidelines for larger-scale monitoring programmes based on NGS in the future.

Study area

The study area is located in northwestern Greece and to a great extent overlaps with the prefectural unit of Kastoria (Figure 1). Large parts of the Gramos and Voio mountains, which belong to the northwestern Pindos range, and the valleys of the Sarantaporos and Aliakmon rivers, are included. The study area is delineated westwards by the Sarantaporos river, eastwards by the Vernon and Askio mountains, southwards by the Aliakmonas river in the Kozani prefectural unit and northwards by the border with Albania and the prefecture of Florina. It covers 1720 km² at altitudes that range between 400 and 2520 m above sea level. The study area is covered by dense forests, partially forested areas, grasslands and cultivations.
Forest vegetation is composed of black pine (Pinus nigra), oak (Quercus sp.) and beech (Fagus sp.), whereas cultivations comprise corn fields, wheat fields and orchards. A large part of the study area is characterized by medium- to high-density human settlements (villages of 50-100 and > 500 inhabitants). A high-density road network (1.5 km/km²) supporting human activities, the recent construction and operation of highway KA45, and a relatively high level of hunting pressure are among the human-related disturbance and mortality factors.

Sampling

Recent non-invasive genetic studies of small bear populations have shown that the optimal sampling strategy should combine systematic hair trapping and opportunistic sampling of faeces and hairs, because the pooled data allow increased bear identification possibilities and they are suitable for population size estimation (Gervasi et al. 2008; Pérez et al. 2009; De Barba et al. 2010b). Hairtraps on power poles are one of the sampling procedures that have been suggested for non-invasive genetic studies of brown bears and they have been effectively implemented in Greece (Karamanlidis 2008; Karamanlidis et al. 2010). A permanent sampling network of 110 hairtraps was installed in the study area and was revisited monthly from July to December 2011. The initial selection of the poles was made after inspection of the local power pole network for recent bear signs (claw marks, hairs or mud). All hairs found on one barb of the barbed wire on a given power pole were considered as one sample and stored dry with silica gel at room temperature in a paper envelope. In total, 129 hair samples were collected. In addition, 42 hair samples were collected with barbed wire attached on the fence of highway KA45. The hairtraps in that case were placed on both sides of the highway, focusing on the spots where bears could easily clear the fence, and revisited on a regular (every 2-3 days) basis for 2 months (October and November 2011). To further increase sampling size and coverage, we included 46 faecal samples opportunistically collected during regular field surveys from July to December 2011. For species producing large faeces (i.e. brown bear), it is generally not necessary to collect the whole scat and hence a subsample of it is often taken in the field. To maximize DNA quality, the outside portion of bear scats was preferably collected according to Stenglein et al. (2010). Faecal samples were stored in plastic tubes containing absolute ethanol and preserved at -20°C. Fifteen blood samples were also included in the analysis. These samples came either from bears that were victims of vehicle collisions (nine samples) or from live bears trapped for telemetry purposes (six samples). Blood samples were stored in collecting tubes at -20°C.

DNA extraction

Generally, 2-12 guard hairs per sample were used for DNA extraction. Under a stereoscope, hair roots were cut and transferred to a 1.5-ml tube. DNA was extracted from hair roots and blood samples using the QIAmp Mini Kit of Qiagen (Hilden, Germany) following the manufacturer's instructions. For scat samples, a QIAmp DNA Stool Mini Kit from Qiagen was used with slight modifications of the manufacturer's protocol (available on demand). A small quantity of faecal material (180-220 mg) was scraped off the outer part of each sample using sterile spoons and was left to dry under a fume hood. All extractions from faecal samples took place in a separate facility to avoid contamination. In addition, extraction-negative controls were used.
The low quantity of DNA obtained from hair and faecal samples does not allow for direct testing of successful extraction in agarose gel. For this reason, extracted samples were initially amplified with one pair of the selected microsatellite loci and then the polymerase chain reaction (PCR) product was visualized by agarose gel electrophoresis. The multi-tube approach was used in all samples to decrease the probability of genotyping errors. For this reason, two PCRs per locus were initially performed for each sample, following the method of Adams and Waits (2007). An allele was accepted only after it was observed twice; otherwise, a third PCR was performed and compared with the genotypes of the first two amplifications.

Sex identification

Sex identification was performed using the primers described by Pages et al. (2009), which co-amplify a bear-specific Y marker (SRY gene) and a bear-specific internal PCR control (ZF gene). PCR amplification was accomplished using the following conditions: initial denaturation at 94°C for 5 min; 40 cycles of strand denaturation at 94°C for 30 seconds, annealing at 55°C for 30 seconds and elongation at 72°C for 45 seconds. Final elongation was achieved at 72°C for 7 min. Amplifications were performed with a reaction volume of 20 μl, containing 2 μl of 10× Reaction Buffer, 0.2 μl of 10× bovine serum albumin, 0.25 mM dNTPs, 1 pmol/μl of each SRY primer, 6 pmol/μl of each ZF primer and approximately 50-100 ng of template DNA. Visualization and control of the PCR products were achieved by electrophoresis in 1.5% agarose gel.

Statistical analysis

Genotyping reliability and marker suitability

DROPOUT software (McKelvey and Schwartz 2005) was used to determine whether a sample contained genotyping errors, the relative magnitude of the problem, as well as the number of unique genotypes. For all the genotypes for which a discrepancy was detected between the first and the second PCR amplification, the program RELIOTYPE (Miller et al. 2002) was used to detect whether the reliability level of 95% (the minimum acceptable level) was reached after the third PCR amplification. In the final data set, we retained only genotypes with average reliability scores ≥ 95%. To evaluate the suitability of the marker set for identifying individuals, the probability of identity (P_ID; Paetkau and Strobeck 1994) and the more conservative probability of identity among siblings (P_ID-sib; Waits et al. 2001) were estimated using the software GIMLET v. 1.3.2 (Valiere 2002).

Genetic diversity and inbreeding

Observed (H_o) and expected (H_e) heterozygosity values for each locus were calculated using GENEPOP 4.0 (Rousset 2008). Deviation from Hardy-Weinberg equilibrium was tested using Fisher's exact tests (Rousset 2008) with unbiased p-values derived by a Markov chain method with the same software. The significance value for multiple significance tests was set using the sequential Bonferroni procedure (Rice 1989). CERVUS 3.0.3 (Kalinowski et al. 2007) was used to evaluate polymorphic information content, null allele probability and number of alleles for each locus.

Population structure and demographic history

The number of genetic clusters (K) was inferred using STRUCTURE 2.3.4 (Pritchard et al. 2000; Falush et al. 2007). The method uses a Bayesian clustering algorithm to partition individuals into a given number of populations (K) under the assumption of Hardy-Weinberg equilibrium and linkage equilibrium.
The admixture model was used, allele frequencies were assumed to be independent, and analyses were conducted with a burn-in period of 50,000 followed by 750,000 Markov chain Monte Carlo repetitions. We ran STRUCTURE setting the number of clusters (K) from 1 to 5 (with 10 runs for each K) to determine the most likely number of clusters representative of the data. The most probable value of K was inferred from the mean log-likelihood [LnP(D)] of the 10 runs, according to the criteria of Pritchard et al. (2000). The K with the highest likelihood and consistency between runs was chosen as the most appropriate. A factorial correspondence analysis implemented in the program GENETIX v. 4.05.2 (Belkhir et al. 1996, 2004) was performed to graphically visualize the genetic relationship between individuals and inferred groups. In addition, an exclusion test (Cornuet and Luikart 1996) for detecting potential immigrants in the population was performed using the software GENECLASS v. 2.0 (Piry et al. 2004), applying the frequency-based method (Paetkau et al. 1995) and the simulation algorithm of Paetkau et al. (2004). To test for recent genetic bottlenecks, deviations from expected heterozygosity were inferred under the assumption of mutation-drift equilibrium by either a stepwise mutation model or a two-phase model using the program BOTTLENECK 1.2.02 (Cornuet and Luikart 1996). The data were analysed with the recommended settings (Piry et al. 1999).

Estimation of census (N_c) and effective (N_e) population size

Census population size was estimated using the estimator implemented in the capture-mark-recapture-based programme for NGS, CAPWIRE (Miller et al. 2005). In traditional trap-based mark-recapture studies, an individual may be captured only once per session. Estimating population size has focused on estimating the probability of capture for each individual in each session. An important difference in the data arising from DNA-based mark-recapture studies is that sampling is approximately done with replacement. That is, since an individual is not physically confined at any time, it may leave multiple hair tufts or scats at multiple locations during a sampling session. CAPWIRE accommodates data with multiple observations of an individual within a single session and has been shown to work well with capture heterogeneity and small populations (< 100 individuals), such as the one expected in our study area (Miller et al. 2005). Possible capture heterogeneity in our data, due to the collection of genetic samples from power poles (Karamanlidis et al. 2007, 2010), necessitated the use of the two innate rates model for the calculation of population size. The effective population size N_e of the brown bear population was estimated using one-point estimate methodologies implemented in NeESTIMATOR 1.3 software (Peel et al. 2004). N_e was calculated using the linkage disequilibrium method option of NeESTIMATOR. Results were also compared with the one-point N_e estimate given by ONeSAMP 1.2 (Tallmon et al. 2008). This software uses summary statistics and approximate Bayesian computation to estimate N_e from a single sample.

Individuals and sex identification

The total number of samples collected in the field during the whole sampling season was 232 (171 hair samples, 46 scat samples and 15 blood samples). However, 18 hair samples had no hair root and they were not included in the analysis.
DNA was successfully extracted from 116 hair samples (76%), 22 faecal samples (48%) and 12 blood samples (80%). The quality of the hair root was very important for successful DNA extraction; in more than 10 cases the use of only two or three high-quality hair roots was sufficient to obtain amplification at all the selected microsatellite loci. The multi-tube PCR approach was used for hair root and faecal sample amplification to decrease genotyping error. Furthermore, multiplex PCR procedures, which co-amplify several loci in the same reaction and so decrease workload and cost, and enable a more efficient use of the template DNA (Skrbinsěk et al. 2010), were applied. Hence, from the successfully extracted DNA samples, 86 hair, 20 faecal and 12 blood samples were fully genotyped for the 10 loci and they were used for further analysis. From the 1060 genotypes obtained from non-invasive samples, only 28 (2.6%) differed in their alleles between the first and second PCR, and so a third PCR was performed. In the final data set, 16 genotypes were rejected because they retained average reliability scores < 95%. In total, 82 unique genotypes were identified, representing 82 different bears. No mismatches were recorded when analysing blood and hair or faecal samples from the same individual. Sex identification was achieved in 72 individuals, and the majority of them (48 individuals) were males. The male : female ratio was 2 : 1.

Genetic diversity

All loci in the study were polymorphic, with the number of alleles per locus ranging between 3 and 10 and a mean of 5.8 (Table 1). Mu59 was the most variable locus, with an observed heterozygosity value of 0.771, whereas G10X was the least polymorphic, with an observed heterozygosity value of 0.085. More than 70% of the selected markers had high polymorphic information content, showing how informative these markers were in evaluating genetic diversity (Table 1). The probability of identity among siblings (P_ID-sib) was < 0.01, indicating that the data can be used for population size estimation (Waits et al. 2001). Concerning Hardy-Weinberg tests per locus, only the G10P and Mu26 loci showed deviation from equilibrium at a nominal level of 5%, possibly due to null alleles (F = 0.146 and 0.124, respectively; Table 1). The population showed deviation from Hardy-Weinberg equilibrium, but when the G10P locus was excluded from the analysis, the population was in equilibrium (P = 0.089). The inbreeding coefficient value over all loci was very low (F_IS = 0.07), indicating a lack of heterozygosity deficiency. The mean observed heterozygosity was 0.584 and the expected was 0.548.

Genetic structure and demographic history

STRUCTURE analysis results (Figure 2a) provided the strongest support for the partitioning of the genetic variation into only one cluster, indicating that the bear population in the Kastoria region shows no sign of substructure and can be regarded as panmictic. This result was also supported by factorial correspondence analysis (Figure 2b), as only one cluster of individuals was identified. One individual most likely did not originate from the sampled population (exclusion test, P < 0.0001) and was probably an immigrant from a neighbouring population. Tests for bottleneck phenomena were not significant under either the stepwise or the two-phase mutation model for any sample (0.13 < P < 0.306 and 0.2 < P < 0.4, respectively), and allele frequency distributions showed the normal L-shape in the mode-shift test. Hence, the analysis showed no evidence of a recent genetic bottleneck.
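For readers less familiar with these statistics, the quantities reported above follow standard per-locus definitions; a brief recap (after Paetkau and Strobeck 1994 and Waits et al. 2001), with p_i denoting the frequency of allele i at a locus, is:

$$H_e = 1 - \sum_i p_i^2, \qquad P_{ID} = \sum_i p_i^4 + \sum_{i<j} \left(2 p_i p_j\right)^2,$$

$$P_{ID\text{-}sib} = 0.25 + 0.5\sum_i p_i^2 + 0.5\left(\sum_i p_i^2\right)^2 - 0.25\sum_i p_i^4.$$

Multi-locus identity probabilities are obtained by multiplying the per-locus values across independent loci, which is why a P_ID-sib below 0.01 over 10 loci implies less than a 1% chance that even two siblings would share an identical multilocus genotype. Note also that at a single locus the inbreeding coefficient relates to heterozygosity as F_IS = 1 - H_o/H_e, although multilocus estimates such as the one reported here are usually computed with Weir and Cockerham's weighting rather than this simple ratio.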
Census and effective population size

In total, 82 individuals (Ursus1-Ursus82) were identified according to their composite genotype. Sixty-nine individuals were detected from hair or faecal samples, while six individuals were detected from blood samples taken from living bears that were caught for radio-tracking or survived vehicle collisions. The remaining seven individuals were victims of collisions and so they were excluded from the living population. Therefore, according to our results, the minimum population size of bears in the Kastoria region during 2011 was 75 individuals. The sampling locations of all living bears identified in this study are presented in Figure 1. To estimate the effective (N_e) and the census (N) population size, only these 75 living individuals were used. The number of "captures" per individual ranged from one to six. Most of the individuals (58) were "captured" only once, but 17 bears were "recaptured" once or more. Details about the number, date, and maximum distance between recaptures are given in Table 2. Analysis with CAPWIRE under the two innate rates model gave a census point estimate of 219 individuals, with 17 individuals classified as easier to capture (type A) and the remaining 202 as harder to capture (type B). Point estimates of N_e using NeESTIMATOR gave a mean value of 48.7 (95% CI 37.1-65.1). When applying the ONeSAMP software, values of N_e were somewhat lower: 39.5 (95% CI 29.2-65.3).

Reliability of NGS and sampling procedure

Non-invasive sampling of hair and faeces (as performed in our study) proved to be an efficient method for obtaining adequate genetic data on the brown bear population in the Kastoria region, without handling or disturbing the animals. The choice of a combined sampling strategy (mainly systematic hair sampling but also opportunistic faecal sampling) allowed us to collect a sufficient number of samples, providing a larger number of identified bears. In fact, the collection of 46 faecal samples allowed the identification of 17 more bears, which otherwise would have been excluded from the analysis. From a total of 217 non-invasively collected samples, we managed to fully genotype 106, resulting in an almost 50% total rate of success. These rates are within the range of values reported in other non-invasive genetic studies of bears and can be considered acceptable in terms of cost effectiveness (Kohn et al. 1999; Solberg et al. 2006; Pérez et al. 2009; Sawaya et al. 2012). A major concern about NGS is the accuracy of results, because genotyping of samples collected in this way is prone to genotyping errors (allelic dropout, false alleles) (Waits and Paetkau 2005). Genotyping errors can inflate the number of identified individuals in a data set, so checking for genotyping accuracy is necessary to avoid misleading results. Implementation of the multi-tube approach was chosen as an appropriate method for controlling genotyping errors (Pompanon et al. 2005). Consistency of genotypes in both PCRs for the vast majority of samples and final rejection of very few genotypes (1.5%) with average reliability scores < 95% strongly support the reliability of bear identifications in our study. High success rates of DNA extraction and amplification, low rates of genotyping errors and lack of mismatches between blood and non-invasive samples from the same individual confirm the reliability of the NGS methodology, which can provide good-quality data.
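The capture-with-replacement logic underlying the CAPWIRE estimate can be illustrated with a minimal numerical sketch. The figures it uses come from the results above: 75 distinct genotypes, 58 of them seen once and 17 seen two to six times, which at the mean rate of 1.36 observations/individual discussed later corresponds to roughly 102 captures in total. The homogeneous "urn" expectation and the simple two-rate weighting are illustrative assumptions only; they are not the actual CAPWIRE likelihood of Miller et al. (2005).

```python
import random

def expected_unique(n_pop: int, n_captures: int) -> float:
    # Expected number of distinct individuals when sampling with
    # replacement from a closed population in which every individual
    # is equally catchable (the homogeneous "urn").
    return n_pop * (1.0 - (1.0 - 1.0 / n_pop) ** n_captures)

def simulate_unique(n_pop: int, n_captures: int,
                    n_easy: int = 17, easy_weight: float = 8.0,
                    seed: int = 42) -> int:
    # Two innate rates, loosely mimicking CAPWIRE's type A/type B split:
    # a few easily captured bears are easy_weight times more likely to be
    # sampled than the rest. The weight value here is arbitrary.
    rng = random.Random(seed)
    weights = [easy_weight] * n_easy + [1.0] * (n_pop - n_easy)
    draws = rng.choices(range(n_pop), weights=weights, k=n_captures)
    return len(set(draws))

CAPTURES = 102  # 58 bears seen once + 17 bears seen 2-6 times

print(expected_unique(75, CAPTURES))   # ~56: a closed population of only
                                       # 75 should have produced many repeats
print(expected_unique(219, CAPTURES))  # ~82: close to the 75 distinct
                                       # genotypes actually observed
print(simulate_unique(219, CAPTURES))  # one heterogeneous realization
```

Under a homogeneous population of 75, only about 56 distinct individuals would be expected among 102 captures, far fewer than the 75 actually identified; a population near the point estimate of 219 reproduces the observed number of distinct genotypes much more closely, and adding capture heterogeneity concentrates the repeat captures on a few "type A" individuals, as seen in the data.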
However, it is useful to report at this point specific considerations on the sampling procedure that could improve the quantity of the collected data in future surveys. The sampling period for hair trapping in our study lasted from midsummer till early winter. According to Karamanlidis et al. (2010), the best period for collecting hair from the power poles is late spring and early summer, because of the intense marking and rubbing behaviour of brown bears during the mating season. However, specific technical constraints hampered the beginning of our sampling at this period. A better description of bear distribution and possible identification of seasonal movements are expected in the forthcoming sampling sessions, by taking advantage of the installed permanent hairtrap network for collecting hair samples throughout the year. In addition, the collection of more samples will provide more solid statistical analysis and more accurate estimations of population size. One of the main drawbacks of using hairtraps on power poles for collecting hair samples is that the availability and geographical distribution of power poles determine the sampling scheme and capture probabilities, which may lead to uneven sampling intensity. In our study, most of the collected hair samples, and consequently most of the identified bears, were located in the central part of the Kastoria prefectural unit (see Figure 1), whereas very few samples were collected in the neighbouring mountainous regions. In remote forested areas a dense power pole network does not usually exist. This fact hampered the establishment of an evenly distributed sampling network in our study area. Opportunistic sampling of faeces was used to overcome the problem, although the number of suitable faecal samples collected was not large enough for a detailed description of bear distribution throughout the study area. Hence, extensive scat sampling, baited hairtraps (Sawaya et al. 2012) or hairtraps on rub trees (Woods et al. 1999) could be proper, albeit more laborious, alternatives for collecting more genetic samples in those areas where the power pole network is sparse.

Moderate levels of diversity and lack of genetic structure

The bear population of Kastoria exhibits medium to high levels of nuclear genetic diversity, lower than that reported for large populations of northern Europe, Romania or Russia, but higher in comparison with some small populations in central and southern Europe (see Swenson et al. 2011). Isolated, endangered bear populations in Spain and Italy, for instance, that have suffered severe historical demographic bottlenecks show low values of observed heterozygosity (H_o = 0.28-0.44) and very few alleles per locus (A = 1.7-3.3) (Lorenzini et al. 2004; Pérez et al. 2009). In contrast, the Kastoria bear population seems to retain significant levels of genetic diversity (H_o = 0.58, A = 5.8), and our analysis with BOTTLENECK software failed to detect a recent bottleneck event. Although there is no information about the demographic history of bears in Kastoria, the possibility of an older or less severe bottleneck (not detectable by the software) and subsequent recovery of the local population cannot be excluded. No evidence of substructure or heterozygosity deficiency was detected, indicating the existence of a panmictic population that faces a low risk of inbreeding.
The KA45 highway disrupts the bear habitat in Kastoria, but it is too early to test whether it acts as an effective barrier because it has only been in operation for 3 years. Preliminary telemetry data on bear spatial behaviour over a sample of six bears monitored in the study area in 2011 show that home ranges do not seem, at least at this stage, to be confined by the highway (Mertzanis and Iliopoulos 2011). Genetic recaptures of some individuals on both sides of the highway during this study confirm the occasional permeability of the KA45 highway. Especially for the part of the highway between Dryovouno and Vogatsiko villages, the frequency of crossings by bears seems to be significantly higher (Mertzanis and Iliopoulos 2011). Nine bears were "captured" at hairtraps attached to the highway fence of this section during a period of less than 2 months. Four of them were females, probably accompanied by cubs. The effective population size (N_e) estimated in our study (40-49 individuals, depending on the analysis) was close to the minimum threshold of 50 individuals, which is supposed to be adequate for avoiding inbreeding depression (Frankham et al. 2002). Hence, relatively high diversity values, lack of heterozygosity deficiency and the N_e values support the Kastoria bear population being in good conservation status; at least within the near future, it will not be at risk of genetic depletion. The Kastoria region is part of the bear distribution range of northwestern Greece, which encompasses the Pindos mountain range and the attached massifs. A bear population in central Pindos (Grevena region) has also been studied by Karamanlidis et al. (2012). Similar genetic diversity values (H_o = 0.65, A = 5.6) and a lack of substructuring were also reported. Significant levels of ongoing gene flow between contiguous populations of the Pindos range (e.g. Tymfi, Smolikas, Grammos) and the attached massifs like Askio or Vernon, within a meta-population framework (Hanski 1999), could be a possible explanation for the significant values of genetic diversity in the Kastoria and Grevena populations. In addition, close proximity to bear populations of southern Albania and FYROM probably allows a permanent connection between them and sufficient levels of gene flow. A genetic study of Greek bears at a greater geographic scale could delineate the population structure and reveal the connectivity patterns between the main regions of the current bear distribution.

Estimated population size and suggestions for improving accuracy

One of the main goals of this study was the estimation of the size of the brown bear population in Kastoria. The NGS and the subsequent amplification of 10 microsatellite loci allowed the identification of 75 living bears in the region, indicating the presence of a minimum population of this size. Most of the identified bears were males (twice as many as females), possibly because of heterogeneity in capture probability. Rubbing and marking behaviour on power poles has been reported to be more intense among male bears than females, leading to unequal hair sample collection between sexes (Green and Mattson 2003; Karamanlidis et al. 2010). More intensive scat sampling in future surveys could help to overcome a possible underestimation of female bears. Analysis with the CAPWIRE program gave a point estimate significantly higher (about three times) than the minimum population of identified bears. Non-invasive DNA-based N_c estimates with this method can be obtained from a single sampling session.
The ability to estimate N_c from a single sampling session is extremely helpful for species that are costly or time-consuming to sample. However, a major concern and a basic assumption in most DNA-based mark-recapture models, including the CAPWIRE "urn" model, is population closure (Boulanger and McLellan 2001; Miller et al. 2005). The population closure assumption rests on there being no immigration, emigration, birth or death. No births occurred during our sampling period because bears give birth in late winter. In addition, to avoid positive bias in census population size estimation, we excluded from the analysis those samples that were taken from dead bears (victims of vehicle collisions). On the other hand, we cannot exclude the possibility of bear deaths due to illegal hunting or poisoning in the study area during our sampling period, because in most cases such incidents happen in remote forested areas and usually remain unrevealed. The existence of at least one individual that seems not to belong to the local population, according to our results of exclusion tests and factorial correspondence analysis, indicates that movement of individuals into and out of the study area is another possibility that violated the closure assumption. Significant closure violation, resulting in a positive bias in population estimates, can also occur when animal home ranges are relatively large and exceed the sampling area (Boulanger and McLellan 2001). Movements of bears in and out of the sampling area can inflate the number of "captured" animals and negatively bias "recapture" probability. In these cases, the population size estimated with closed-model methods most probably describes the "superpopulation" of animals in the predefined sampling area as well as in the surrounding areas (Kendall 1999). According to telemetry data from six radio-collared bears in the study area during 2011, male bears have large home ranges (165-226 km²), whereas females' ranges are significantly smaller (10-91 km²) (Mertzanis and Iliopoulos 2011). Considering the continuous bear habitats in the surrounding areas, as well as the fact that the mean recapture rate of 1.36 observations/individual was significantly lower than the 2-3 observations/individual recommended by Miller et al. (2005), we should not exclude the possibility of positive bias in our estimation. Hence, the estimated census size should be treated with caution, especially for density estimates, because it rather corresponds to a "superpopulation" inhabiting an area larger than the Kastoria prefectural unit. Repetition of the survey in the forthcoming years using the proposed improvements is expected to give more accurate estimates. In addition, extensive and simultaneous telemetry is recommended to index movements of bears in and out of the study area (Powell et al. 2000). The bear population of the Kastoria region may be considered among the most robust bear populations in Greece. The identification of 75 bears in the study area during the present study is enough to highlight the significant abundance of this population. The minimum population size of 75 bears corresponds approximately to 30% of the minimum bear population size estimated for the whole of Greece by Mertzanis et al. (2009a). High abundance, in conjunction with significant levels of genetic diversity and good conservation status, makes the Kastoria bear population a significant stock for conservation of the species in Greece. Long-term monitoring and management actions to maintain sufficient levels of gene flow must be of high priority.
Targeting Adenosine Receptors: A Potential Pharmacological Avenue for Acute and Chronic Pain

Adenosine is a purine nucleoside, responsible for the regulation of multiple physiological and pathological cellular and tissue functions by activation of four G protein-coupled receptors (GPCR), namely A1, A2A, A2B, and A3 adenosine receptors (ARs). In recent years, extensive progress has been made to elucidate the role of adenosine in pain regulation. Most of the antinociceptive effects of adenosine are dependent upon A1AR activation located at peripheral, spinal, and supraspinal sites. The role of A2AAR and A2BAR is more controversial, since their activation has both pro- and anti-nociceptive effects. A3AR agonists are emerging as promising candidates for neuropathic pain. Although their therapeutic potential has been demonstrated in diverse preclinical studies, no AR ligands have so far reached the market. To date, novel pharmacological approaches such as adenosine regulating agents and allosteric modulators have been proposed to improve efficacy and limit side effects, enhancing the effect of endogenous adenosine. This review aims to provide an overview of the therapeutic potential of ligands interacting with ARs and the adenosinergic system for the treatment of acute and chronic pain.

Introduction

Today, although substantial progress has been made, many pathological pain conditions remain poorly understood and resist currently available treatments. There is, therefore, a need for novel molecular targets to develop new therapeutic agents with improved efficacy and tolerability. Many experimental reports have identified adenosine receptors (ARs) as potential targets for the management of acute and chronic pain. Adenosine is a ubiquitous endogenous autacoid that mediates its physiopathological effects by interacting with four G protein-coupled receptors (GPCR), namely A1, A2A, A2B, and A3 ARs [1]. A1 and A3 ARs are coupled with Gi and Go members of the G protein family, through which they have an inhibitory effect on adenylyl cyclase (AC) activity, while A2AARs and A2BARs stimulate it by coupling to Gs proteins. The consequent modulation of cyclic adenosine monophosphate (cAMP) levels activates or inhibits a large variety of signaling pathways, depending on the specific type of cell involved. Although there are instances in which adenosine exerts detrimental effects in various pathological conditions, it is generally considered a protective and homeostatic mediator against tissue damage and stress conditions [2,3]. In physiological and unstressed conditions, the extracellular concentrations of adenosine are maintained low as a result of rapid metabolism and uptake [4]. However, its levels rise considerably during conditions involving increased metabolic demand, hypoxia, inflammation, and tissue injury. In particular, increased levels of extracellular adenosine were observed in pathological conditions such as epilepsy [5,6], ischemia [7,8], cancer [9,10], inflammation [11], and ultimately pain [12,13]. Although adenosine can be produced intracellularly, the main source of adenosine in pathological states is adenosine triphosphate (ATP), released by cells under stressful conditions and dephosphorylated by the combined action of two hydrolyzing enzymes termed ectonucleoside triphosphate diphosphohydrolase (CD39) and ecto-5′-nucleotidase (CD73) [1].
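As a compact restatement of the coupling scheme just described, the receptor-to-effector relationships can be captured in a small lookup structure; the following sketch is purely illustrative of the mapping stated above and encodes no pharmacology beyond it:

```python
# AR coupling as described in the text: A1/A3 inhibit adenylyl cyclase (AC)
# via Gi/Go, lowering cAMP; A2A/A2B stimulate AC via Gs, raising cAMP.
AR_COUPLING = {
    "A1":  {"g_protein": "Gi/Go", "ac_effect": "inhibits",   "cAMP": "decreases"},
    "A2A": {"g_protein": "Gs",    "ac_effect": "stimulates", "cAMP": "increases"},
    "A2B": {"g_protein": "Gs",    "ac_effect": "stimulates", "cAMP": "increases"},
    "A3":  {"g_protein": "Gi/Go", "ac_effect": "inhibits",   "cAMP": "decreases"},
}

for subtype, c in AR_COUPLING.items():
    print(f"{subtype}AR couples to {c['g_protein']}: "
          f"{c['ac_effect']} AC, cAMP {c['cAMP']}")
```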
Regarding nociception, the elevated levels of endogenous adenosine observed in these pathological states can alter pain transmission by actions at spinal, supraspinal, and peripheral sites. The extracellular action of adenosine can then be terminated by its transformation to inosine through adenosine deaminase (ADA) and/or by intracellular uptake via nucleoside transporters [14]. Intracellularly, adenosine is phosphorylated to AMP by adenosine kinase or deaminated to inosine by ADA. Given these regulation mechanisms of adenosine concentration, potential pain management can be obtained not only with specific ligands interacting with ARs but also by manipulating endogenous tissue levels of adenosine through modulation of its metabolism or transport [13] (Figure 1).

Figure 1. The main source of adenosine is adenosine triphosphate (ATP) released from various cell types in response to different stimuli. ATP is dephosphorylated to adenosine diphosphate (ADP)/adenosine monophosphate (AMP) and then to adenosine by two ectonucleotidases (CD39, CD73). In nociception, the elevated levels of adenosine may alter pain signaling. Thus, the modulation of adenosine metabolism, increasing its levels, could represent an alternative strategy for pain management. Soluble CD73 provokes long-lasting thermal antihyperalgesic and mechanical antiallodynic effects through A1AR activation. Prostatic acid phosphatase (PAP), acting as an ectonucleotidase, induces A1AR-dependent antinociceptive effects in inflammatory and neuropathic pain models. Extracellular adenosine is rapidly metabolized to inosine by adenosine deaminase (ADA). Inosine is able to bind A1ARs, with an affinity similar to that of adenosine, inducing antinociceptive effects. Another strategy to promote the accumulation of inosine is represented by inhibitors of the enzyme xanthine oxidase such as allopurinol. In the extracellular space, adenosine can interact with its receptors. A1AR stimulation with adenosine, adenosine metabolites like inosine, or synthetic agonists presents analgesic effects in acute, neuropathic, visceral, postoperative, and inflammatory pain. Activation of A2AARs by endogenous adenosine or exogenous agonists results in antinociception in case of inflammatory pain, while A2AAR blockade shows analgesic effects in neuropathic pain. Regarding A2BARs, their stimulation has antinociceptive effects in neuropathic pain and their blockade is useful for acute pain treatment.
Finally, A3AR activation gives analgesic effects in different types of pain such as neuropathic, cancer, and visceral pain.

Although adenosine and its receptors represent a clear target for pharmacological treatment of various diseases and pathological states including pain, very few drugs acting on the adenosinergic system have so far reached the market. The reason behind this discrepancy may be partly due to the ubiquitous distribution of ARs in almost every cell and tissue, making it difficult to avoid unwanted side effects. In recent years, many efforts have been made to improve our understanding of the role of adenosine in nociception and to identify novel strategies to exploit the therapeutic potential of the adenosinergic system, such as selective ligands, partial agonists, allosteric modulators, or adenosine concentration modulating agents. The focus of the present review is to describe the recent advances in our understanding of the role of ARs in nociception. For each receptor subtype, we will briefly summarize and discuss the preclinical experimental studies that investigated their role and mechanism of action in the modulation of acute and chronic pain.

A1ARs

The antinociceptive effect of adenosine has been primarily attributed to the activation of A1ARs [15], and various A1AR agonists or positive allosteric modulators have been shown to be effective in several preclinical models of pain (Table 1). The signaling pathway underlying A1AR antinociception includes inhibition of cyclic AMP and consequently of protein kinase A (PKA) activation, inhibition of Ca2+ channels, activation of K+ currents, and interactions with phospholipase C (PLC), inositol triphosphate (IP3), diacylglycerol (DAG), extracellular signal-regulated kinases (ERK), and β-arrestin pathways [3]. The prominent role of this receptor subtype in analgesic responses is due to its peculiar expression in different sites relevant to pain transmission. A1ARs are indeed located on peripheral sensory nerve endings, in the spinal cord dorsal horn, and at supraspinal pain-processing structures [13,16]. Microglia represent another important localization for the antinociceptive action of A1ARs, especially for pain states involving glial activation [17]. The peripheral activation of A1ARs diminished inflammatory hypernociception caused by carrageenan intraplantar administration. Using specific inhibitors, the antinociceptive effect of the A1AR agonist CPA was shown to be dependent on the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP)/protein kinase G (PKG)/KATP signaling pathway [18]. The contribution of peripheral A1ARs to antinociception was further corroborated when the selective A1AR antagonist DPCPX reversed the antinociceptive effects of locally and systemically administered acetaminophen or tramadol in the formalin test [19]. A proof of the supraspinal antinociceptive action of A1ARs has been reported in a study where the A1AR agonist 2′-Me-CCPA injected into the periaqueductal grey (PAG) reduced pain behavior in the plantar and formalin tests. When microinjected into the PAG, 2′-Me-CCPA decreased the ongoing activity of the pronociceptive ON cells and increased the ongoing activity of the antinociceptive OFF cells in the rostral ventromedial medulla [20]. In neuropathic pain rats, the A1AR agonist CPA reduced thermal and mechanical sensitivity, while in naïve rats it decreased hypersensitivity to heat but not to mechanical stimuli.
In this study, electrophysiological experiments suggested that spinal application of CPA depressed long-term potentiation of A- and C-fiber evoked field potentials, while it depressed the baseline of the C-fiber but not the A-fiber response. To explain this different response, the authors hypothesized that A1ARs may be more highly expressed at C-fiber nerve endings than at A-fiber endings [21]. In resiniferatoxin-induced neuropathy, the downregulation of A1ARs was suggested to contribute to nociception, while the intrathecal injection of adenosine attenuated mechanical allodynia, an effect abrogated by A1AR antagonism [22]. A1ARs also seem to be involved in visceral antinociception. The centrally injected agonist CPA increased the threshold volume for the colonic distension-induced abdominal withdrawal reflex in conscious rats. Besides, the use of the A1 antagonist DPCPX suggested that adenosinergic signaling via A1ARs is also involved in the central orexin-induced antinociceptive action against colonic distension [23]. In a subsequent study, the authors suggested that serotonin 5-HT1A, 5-HT2A, dopamine D1 or cannabinoid CB1 receptors, and the opioid system might specifically mediate the CPA-induced visceral antinociception [24]. The potential role of A1ARs in postoperative pain was also investigated. Intrathecal administration of the A1AR agonist R-PIA decreased nonevoked spontaneous pain behavior and increased withdrawal thresholds after plantar incision. The opening of KATP channels contributed to this antinociceptive effect [25]. In a mouse model of acute postoperative pain, ankle joint mobilization decreased hyperalgesia through the involvement of peripheral and central A1ARs [26]. In another report, intrathecal adenosine injection inhibited hyperalgesia in two neuropathic pain models but not in a postoperative pain model represented by the plantar incision. However, in this model A1AR mRNA and protein expression were decreased, suggesting that the lack of antinociceptive effect of adenosine on postoperative pain was due to the decrease in A1ARs [27]. An intriguing connection has been uncovered between A1ARs and acupuncture, an invasive practice used worldwide to relieve pain. Many studies report that the antinociceptive effects of acupuncture are dependent upon A1AR activation. It was shown that extracellular adenosine concentration is increased during acupuncture in mice and that A1AR expression is required for the adenosine-mediated analgesic effect of acupuncture [28]. The involvement of A1ARs in the reduction in neuropathic pain exerted by electroacupuncture was demonstrated by the intrathecal injection of the A1AR antagonist DPCPX in a chronic constriction injury (CCI) model. In this report, the effect of A1ARs was related to the inhibition of astrocyte activation [29]. Similar results were obtained in a Complete Freund's adjuvant (CFA)-induced inflammatory pain mouse model, corroborating the involvement of A1ARs in electroacupuncture-mediated antinociception [30]. In another study, the analgesic effect of electroacupuncture was suggested to be mediated by overexpressed A1ARs in the spinal cord [31]. Different studies suggested that A1AR activation is required for the antinociceptive action of various natural compounds. Indeed, the A1AR antagonist DPCPX blocked the effect of norisoboldine, a benzylisoquinoline alkaloid isolated from Radix Linderae that diminishes pain response, in the formalin and writhing tests [32].
In addition, the A1AR is necessary for the analgesic effect of paeoniflorin, the major active component extracted from Paeonia lactiflora. In a study carried out in mice, paeoniflorin increased the mechanical threshold and prolonged the thermal latency after partial sciatic nerve ligation (SCNL), an effect abolished by the A1AR antagonist CPT or by the genetic deletion of A1ARs [33]. In the hot plate test, the antinociceptive effect of (-)-linalool, a naturally occurring enantiomer in essential oils, was blocked by both an A1 and an A2A AR antagonist [34]. D-Fructose-1,6-bisphosphate, an intermediate in the glycolytic pathway, inhibits hyperalgesia induced by intraplantar injection of carrageenin, and its mechanism of action seems dependent on adenosine accumulation, which in turn exerts antinociceptive effects by activating peripheral A1ARs [35]. Adenosine is rapidly metabolized to inosine by ADA. Interestingly, different studies have identified inosine as a putative endogenous ligand of A1ARs and demonstrated the A1-mediated antinociceptive effect of this more stable metabolite of adenosine. In particular, inosine binds to A1ARs with an affinity resembling that of adenosine and induces antinociceptive, antiallodynic, and antihyperalgesic effects. In rats, both the A1AR antagonist DPCPX and the A2AAR antagonist ZM241385 reversed the antiallodynic and antihyperalgesic effects of inosine in models of mechanical and heat hyperalgesia induced by bradykinin and phorbol 12-myristate 13-acetate [36]. In the formalin test, inosine did not induce antinociception in A1AR knockout (KO) mice, and the A1AR antagonist DPCPX inhibited its effects [37]. In a subsequent study, DPCPX, but not the A2AAR antagonist SCH58261, abrogated the antinociceptive effect of inosine in the intraplantar glutamate test [38]. A different strategy to promote the accumulation of purines like adenosine or inosine is the use of the xanthine oxidase inhibitor allopurinol. Indeed, it has been reported that intraperitoneal administration of allopurinol increased cerebrospinal fluid concentrations of adenosine and its metabolites, inducing antinociceptive effects in different pain models. The selective A1AR antagonist DPCPX, but not the selective A2AAR antagonist SCH58261, prevented allopurinol-induced antinociception [39,40]. Since extracellular adenosine is primarily derived from the hydrolysis of AMP, the antinociceptive effect of a soluble version of recombinant CD73, the enzyme that converts AMP to adenosine, has been tested in different pain models. The results of this study revealed long-lasting thermal antihyperalgesic and mechanical antiallodynic effects that were dependent on A1AR activation [41]. Prostatic acid phosphatase (PAP) acts as an ectonucleotidase, hydrolyzing extracellular AMP to adenosine in nociceptive dorsal root ganglia neurons [42,43]. Intrathecal injection of a secretory version of human PAP induced A1AR-dependent antinociceptive effects in inflammatory and neuropathic pain models [44,45]. Furthermore, the injection of PAP into the popliteal fossa (a common acupuncture point) reduced pain responses in mouse models, lasting up to six days after a single injection, an effect dependent upon A1AR activation [46]. Several papers in the literature have proposed a link between opioid-mediated antinociception and A1ARs.
In a rat model of spinal cord injury (SCI), a supra-additive interaction was demonstrated between the A1AR agonist R-PIA and morphine in the reduction of mechanical allodynia-like behavior [47]. In spinal cord neuronal nociceptive responses, the antinociceptive effects of the A1AR agonist CPA were associated with activation of κ-opioid receptors, since reversal of the CPA effect was observed with norbinaltorphimine (a selective κ-opioid receptor antagonist) but not with low doses of the µ-opioid antagonist naloxone [48]. While the opioid antagonist naltrexone did not affect the antinociception induced by CPA in the formalin test, the activation of A1 or A2A ARs counteracted the µ-opioid receptor increase induced by formalin in the spinal cord, confirming the interaction between the adenosinergic and opioid systems [49]. In a rat model of nerve ligation injury, the intrathecal administration of morphine synergistically enhanced the antiallodynic effect of the A1AR agonist R-PIA, suggesting an interaction between µ-opioid receptors and A1ARs at the spinal level [50]. In addition, other works reported that the antiallodynic/antihyperalgesic effect of morphine is reversed in the presence of the selective A1AR antagonist DPCPX [51] or in A1AR KO mice [52]. Beyond opioids, the involvement of A1ARs has been observed in the antinociceptive effect of non-opioid analgesics such as acetaminophen. In the formalin test, when acetaminophen was administered systemically or locally, its antinociceptive effects were reversed by the intraplantar injection of the A1AR antagonist DPCPX, suggesting a link between activation of peripheral A1ARs and acetaminophen effects [53]. The contribution of spinal A1ARs to the action of acetaminophen, secondarily to the involvement of descending serotonin pathways and the release of adenosine within the spinal cord, was also suggested [54]. The involvement of A1ARs was also demonstrated in the antinociceptive effects of amitriptyline [55,56], oxcarbazepine [57], levetiracetam [58], and neuropeptide S [59]. Collectively, these preclinical studies provide strong support for the therapeutic potential of A1AR agonists. However, limited clinical efficacy and relevant cardiovascular and central adverse effects have, to date, hampered the development of A1AR agonists as analgesic drugs. An alternative approach to increase selectivity and reduce the possibility of adverse effects, exploiting the physiological action of endogenous adenosine, is the development of A1AR positive allosteric modulators [60,61]. These agents enhance the function of receptors activated by endogenous agonists; they are expected to have a much lower side effect potential than an exogenous orthosteric ligand, a low propensity for receptor desensitization, and a high selectivity for a given receptor subtype [62]. T62 was the first A1AR positive allosteric modulator to be tested in animal models of pain. Intrathecal or systemic administration of T62 reduced mechanical hypersensitivity induced by spinal nerve ligation (SNL) [63,64], reversed thermal hypersensitivity in carrageenin-inflamed rats [65], and was effective against postoperative hypersensitivity following plantar incision [66]. More recently, TRR469 was characterized as one of the most potent A1AR positive allosteric modulators so far synthesized, able to increase adenosine affinity 33-fold [67-69].
TRR469 effectively inhibited nociceptive behaviors in the formalin and writhing tests, with effects comparable to morphine. Furthermore, it revealed an antiallodynic action in the streptozotocin (STZ)-induced diabetic neuropathic pain model without inducing the locomotor or cataleptic side effects that the orthosteric-acting CCPA did [69].
A 2A ARs and Pain
The presence of A 2A ARs on both neurons and glial cells underlies the implication of A 2A ARs in pain [70]. The relation between A 2A ARs and pain has been controversial, with evidence supporting either pronociceptive or antinociceptive activity depending on the receptors' localization and the kind of pain (Table 2) [13]. Studies supporting the pronociceptive role of A 2A ARs report that the selective blockade of this receptor subtype by systemic administration of SCH58261, a selective A 2A AR antagonist, is able to counteract nociception; administration at the spinal level produced an equivalent effect [13,54]. These results are supported by experimental models of acute and nerve injury pain in A 2A AR KO mice, which showed a decreased algesic reaction in pain tests and even a reduction in markers of neural activity [71]. Moreover, the administration of caffeine, a well-known non-selective AR antagonist, averted the postoperative hypersensitivity exacerbated by sleep deprivation. A 2A AR selective blockade with ZM241385 has been shown to decrease surgical pain levels and the thermal hyperalgesia caused by sleep deprivation in rats. These results support the hypothesis that A 2A ARs are implicated in the regulation of the interplay between sleep and pain [72]. The pronociceptive effect of A 2A AR stimulation was further corroborated in a study reporting that carrageenan-induced hyperalgesia was significantly reduced in A 2A AR KO mice compared to wild-type controls. Interestingly, the A 2A AR inverse agonist ZM241385 injected into the hindpaw reduced the nociceptive behavior following carrageenan in female wild-type mice, but not in males, suggesting a sex difference in the response to peripheral A 2A AR activation [73]. In addition, a series of inverse agonists showing two different affinity values for the A 2A ARs, with the high-affinity value in the picomolar/femtomolar range, was recently synthesized [74,75] and tested for antinociceptive properties. In particular, one of these potent inverse agonists, namely TP455, proved to be more potent than morphine in writhing and tail immersion tests in mice [74]. Furthermore, the blockade of A 2A ARs could provide protection in cases of neuropathic pain, one of the most common kinds of chronic pain, which is found in different disorders and can arise from nerve dysfunction [76]. Neuropathic pain pathophysiology is extremely intricate because it comprises central and peripheral mechanisms such as changes in ion channel expression, neurotransmitter release, and pain pathways [77]. Oxidative stress may also play an important role in the origin of neuropathic pain [78]. A body of evidence reveals that, after SCI, there are events that trigger reactive oxygen species (ROS) formation pathways, such as microglia activation and glutamate release [79,80]. Injury at the sensory nerve level also involves damage to nuclear and mitochondrial DNA and loss of antioxidant enzymes [81-83]. In fact, numerous studies report that the administration of antioxidants or ROS scavengers has analgesic effects in many in vivo models of neuropathic pain.
Furthermore, neuropathic pain is often a consequence of antitumoral treatments containing platinum, because these drugs can provoke peripheral neuropathy; chemotherapy-induced oxidative stress is one of the important pathogenic factors damaging peripheral sensory neurons [84]. Recently, it has been shown that novel A 2A AR antagonists featuring antioxidant moieties can reduce pain associated with oxaliplatin treatment in a mouse model of neuropathy by reducing ROS levels [85,86]. After peripheral nerve injury, A 2A AR stimulation induces both activation and proliferation of the microglia and astrocytes responsible for the inflammation occurring in neuropathic pain, while genetic deletion of the A 2A ARs decreases all the behavioral and histological signs of pain [77,87]. Several studies also showed that systemic and spinal administration of the selective A 2A AR antagonist SCH58261 has antinociceptive effects in different preclinical models [54,74]. Notwithstanding the coherence of the studies supporting a pronociceptive role of A 2A ARs, there is also evidence in the literature for an antinociceptive role. In particular, since A 2A ARs are expressed in immune cells, where they exert a potent anti-inflammatory action, their stimulation may be helpful in cases of inflammatory pain [3,13]. A 2A AR KO animals under prolonged inflammatory conditions show an up-regulation of markers of spinal cord neural activation. In these KO mice, the loss of the antinociceptive A 2A ARs on immune cells exceeds the decrease in pronociceptive A 2A ARs on nerve terminals, leading to enhanced pain signaling [88]. It is well known that the stimulation of A 2A ARs has anti-inflammatory effects, but less is known about A 2A AR agonist treatment in chronic inflammatory pain. Different studies report that A 2A AR expression is up-regulated in lymphocytes of rheumatoid arthritis patients; these data could represent a basis for further investigations in this field [89,90]. The selective A 2A AR agonist CGS21680 is able to slow down disease progression in an in vivo model of arthritis [91]. In a rat model, CGS21680 treatment was demonstrated to be highly effective in decreasing clinical features in comparison with standard antirheumatic drugs such as methotrexate and etanercept [92]. Treatment with the A 2A AR agonist CGS21680 was also able to inhibit the activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and to reduce the release of inflammatory cytokines such as tumor necrosis factor-α (TNF-α), IL-1β, and IL-6. In addition, A 2A AR stimulation leads to a decrease in metalloproteinases 1 and 3 [93]. Finally, in another mouse model of monoarthritis, a new A 2A AR agonist, named LASSBio-1359, showed a marked analgesic effect against inflammatory pain. This treatment was also able to reduce inflammation, decreasing TNF-α and inducible NO synthase (iNOS) expression as well as joint damage [94]. The results of the above-mentioned studies highlight a role for A 2A AR agonists as a potential therapeutic tool in the management of inflammatory pain [89,93]. Different reports demonstrated an antinociceptive role of A 2A AR activation in models of neuropathic pain. An acute administration of A 2A AR agonists, such as ATL313 and CGS21680, leads to an analgesic effect that lasts for many weeks and reverses the mechanical allodynia and thermal hyperalgesia while decreasing the markers of microglia and astrocyte activation [95].
Interestingly, the effect of A 2A AR activation was specific to the nerve injury or sensitized state, suggesting a potential role of A 2A AR agonists for neuropathic pain. Moreover, either the blockade of A 2A ARs with a receptor antagonist or the presence of an anti-IL-10 antibody reverted the effect of ATL313, suggesting that the observed effects were due to the activation of A 2A ARs and the consequent enhanced IL-10 production [95]. In a subsequent study, ATL313 induced long-lasting protection against allodynia caused by chronic constriction injury (CCI), SNL, and sciatic inflammatory neuropathy (SIN), through a mechanism involving PKA and protein kinase C (PKC) [96]. In a recent study, a single intrathecal injection of the A 2A AR agonist CGS21680 reversed mechanical allodynia for at least 6 weeks in a rat model of SCI termed spinal neuropathic avulsion pain [97]. In the follow-up work, the peri-sciatic injection of the agonist ATL313 also demonstrated the efficacy of A 2A AR activation at the site of nerve injury. These anti-allodynic effects were accompanied by a reduction in interleukin (IL)-1β and NO release, and by reduced expression of iNOS and sciatic markers of monocytes/macrophages [98]. These studies revealed that agonism toward A 2A ARs was able to reduce different kinds of neuropathic pain, such as inflammatory and traumatic neuropathic pain. In all these cases, A 2A AR stimulation prevented and reversed the amplification of nociceptive stimuli [98]. Additionally, the long time span of the analgesic effect after a single treatment suggests that A 2A AR agonists could be useful for central neuropathic pain therapy. It is worth noting that in these studies, the antiallodynic effects of A 2A AR agonists were associated with diminished reactive gliosis. Glial cells have a pivotal role in initiating and maintaining neuropathic pain, and for this reason, many studies are directed toward the discovery of new strategies to counteract the glia-driven expansion of pain. In recent years, A 2A AR agonists have emerged as possible candidates for glial inhibition thanks to their capability to suppress inflammation in immune cells; consequently, A 2A AR agonists represent a promising tool for the treatment of chronic pain of neuroinflammatory origin [99]. The activation of the A 2A AR subtype also seems to be involved in the analgesic effect of neuropeptide S observed in the formalin test. Intracerebroventricular administration of this eicosapeptide reduced formalin-induced nociception during both phase 1 and phase 2 of the test, an effect counteracted by the non-selective AR antagonist caffeine or the selective A 2A AR antagonist ZM241385 [59]. Moreover, an interaction between A 2A ARs and the opioid system was reported when the antinociceptive effect exerted by the intracerebroventricular injection of Adonis, an agonist-like monoclonal antibody with high specificity for the A 2A ARs, was counteracted by naloxone, a non-selective opioid antagonist [100].
A 2B ARs and Pain
A 2B ARs are expressed both at the central level and in the periphery: among pain-relevant sites, they are localized on immune-inflammatory cells, where they have pro-inflammatory functions, in the spinal cord, and on astrocytes [1,101,102]. Since adenosine presents a lower affinity for A 2B ARs in comparison to the other AR subtypes, A 2B ARs are more involved when adenosine concentration rises, for example, in pathological conditions such as hypoxia/ischemia and inflammation [2,103].
Nonetheless, the different functions of A 2B ARs in various tissues and their involvement in the pathogenesis of pain are poorly known. As a consequence, more studies are needed in order to clarify their pro- or antinociceptive actions in different types of pain conditions [11,101]. Unfortunately, studies on the relationship between pain and A 2B ARs are limited due to the lack of selective ligands (Table 3). One of the first studies using selective A 2B AR antagonists reported an antinociceptive activity of A 2B AR blockade in an acute pain model represented by the hot plate test. One of these ligands, PSB-1115, did not penetrate the blood-brain barrier due to its polar sulfonate group, suggesting that peripheral A 2B ARs were implicated in the analgesic activity [104]. Interestingly, the efficacy of morphine was enhanced by subeffective doses of these A 2B AR antagonists. In a follow-up study, the systemic administration of PSB-1115 decreased the algesic response and edema in both phases of the formalin test [105]. In the same test, the selective blockade of A 2B ARs by using alloxazine resulted in a dose-dependent reduction in nociceptive behavior [106]. Moreover, it has been reported that treatment with the A 2B AR antagonists MRS1754 and PSB-1115 was able to decrease pain in visceral hypersensitivity rat models [107,108]. PSB-1115 also reverted the antinociceptive effect of diphenyl diselenide, an organoselenium compound, in the hot plate test in mice [109]. A 2B ARs seem to be involved in chronic pain as well, with evidence highlighting that this receptor subtype stimulates the interactions between immune cells and neurons. It was reported that high extracellular adenosine levels activate A 2B ARs on myeloid cells, and that this leads to the activation of pain sensory neurons, giving rise to hypersensitivity and chronic pain [110]. Intriguingly, the authors demonstrated that A 2B AR stimulation caused nociceptor hyperexcitability and promoted chronic pain via soluble IL-6 receptor trans-signaling. From these results, it is possible to deduce that the blockade of A 2B ARs may repress nociceptive activity. All these findings point to a pronociceptive role of A 2B ARs. However, it has also been reported that the activation of A 2B ARs with a selective agonist (BAY606583) produced an analgesic effect in an established model of neuropathic pain, similarly to A 2A AR agonist treatment [96]. As is well known, both A 2A ARs and A 2B ARs lead to increased cAMP accumulation and activation of downstream pathways; they also probably share a similar spinal mechanism of action. Normally, A 2B AR stimulation activates PKA and the PLC/IP3/DAG pathway, leading to changes in gene transcription, while β-arrestins are responsible for the receptor internalization mechanism [13,111].
A 3 ARs and Pain
A 3 ARs are present at the peripheral level in many tissues, including inflammatory cells; they are less expressed in the central nervous system, but their activation nonetheless causes functional effects, in particular in glial cells [112]. The possibility of exploiting A 3 AR stimulation, using selective agonists, has been studied in different pathologies, including cancer and inflammation [112,113]. A 3 AR involvement has also been investigated in relation to pain; the first pieces of evidence reported a pronociceptive role [114].
Further studies, using more selective ligands, overturned the previous results by showing that A 3 AR agonists present antinociceptive activity; thus, they can be useful as analgesics, especially for neuropathic pain (Table 4) [113]. In fact, the systemic administration of selective A 3 AR agonists, such as IB-MECA, Cl-IB-MECA and MRS1898, reduced the mechanical allodynia in a model of neuropathic pain; notably, IB-MECA was as efficacious as morphine. The specificity of this effect was demonstrated by blocking A 3 ARs with the selective antagonist MRS1523, which abrogated the analgesic effect of A 3 AR agonists [115]. Interestingly, the A 3 AR agonists have no effects in acute pain models, for instance, the hot plate and tail flick tests [116]. Another selective A 3 AR agonist, named MRS5698, was demonstrated to be able to reduce mechanical allodynia in different models of neuropathic pain. MRS5698 had no analgesic effect in acute pain tests, but its antiallodynic activity persisted with repeated administrations [117]. The mechanism of action of this agonist involves GABA signaling: A 3 AR activation normalizes the changes in GABA concentration caused by nerve damage, thus restoring the GABA inhibitory effect on pain transmission [118]. Moreover, it has been noticed that A 3 AR stimulation inhibits N-type calcium channel opening in isolated rat dorsal root ganglion neurons, reducing neurotransmitter release and neuronal excitation [119]. In another model of nerve injury that produces tactile allodynia, the daily administration of IB-MECA averted the appearance of hypersensitivities, the activation of glial cells and the altered transmission of nociceptive stimuli, resulting in an attenuation of neuropathic pain [120]. In a recent study, MRS7476, a prodrug with increased aqueous solubility compared with the parent MRS5698, was found to be efficacious in reversing neuropathic pain induced by CCI [121]. Anticancer chemotherapeutic treatments often induce neuropathy as an adverse effect; the stimulation of A 3 ARs can help to decrease the pain in these cases. The A 3 AR agonist IB-MECA is able to reduce the allodynia and the hyperalgesia induced by different anticancer drugs such as paclitaxel, oxaliplatin and bortezomib without diminishing their antitumoral effectiveness; other A 3 AR agonists, Cl-IB-MECA and MRS1898, exert the same effects [115,116]. The pathway involved seems to imply inhibition of NF-κB, ERK and p38 and modulation of the production of inflammatory cytokines. In particular, treatment with A 3 AR agonists reduces the release of the pro-inflammatory cytokines TNF-α and IL-1β while increasing the anti-inflammatory IL-10 [122]. Other mechanisms have been proposed to explain the antinociceptive activity of A 3 ARs; among these are diminished astrocyte activation, inhibition of cAMP and PKA, interaction with the PLC/IP3/DAG and phosphoinositide 3-kinase (PI3K)/mitogen-activated protein kinase (MAPK)/ERK/cAMP response element-binding protein (CREB) pathways, and signaling via Gi [123]. For the A 3 AR subtype as well, receptor internalization is mediated by β-arrestins [111]. Recently, in a model of cancer chemotherapy-induced neuropathic pain, the A 3 AR agonist MRS5698 attenuated pro-inflammatory IL-1β production and promoted anti-inflammatory and neuroprotective IL-10 expression by regulating the nucleotide-binding oligomerization domain-like receptor protein 3 inflammasome [124].
Besides the antinociceptive effect of A 3 AR agonists in cancer pain and chemotherapy-related neuropathic pain, they have also been found to be potent antitumoral agents in many animal models of different forms of cancer (melanoma, prostate, colon, and hepatocellular carcinoma), where they are able to reduce tumor growth [113,125]. Their therapeutic potential has also been assessed in a model of bone cancer pain in which mammary gland tumoral cells were injected into the tibia [126]. In this model, the repeated administration of Cl-IB-MECA decreased tumor growth and mitigated the nociception and bone degradation associated with cancer. In addition, the A 3 AR agonist was also effective in delaying the onset and advancement of bone cancer, with greater efficacy when Cl-IB-MECA treatment was started before the injection of cancer cells [126]. The involvement of A 3 ARs in diabetic neuropathy was also investigated. It has been demonstrated that IB-MECA treatment ameliorates mechanical hyperalgesia and thermal hypoalgesia in STZ-treated mice. Moreover, reduced expression or functionality of A 3 ARs promoted diabetic neuropathy development [127]. It is well established that long-lasting treatments with opioids lead to hyperalgesia and drug tolerance, reducing the analgesic effect of opioids in chronic pain [128,129]. In a rodent model, it has been reported that these opioid adverse effects seem to be linked to reduced A 3 AR signaling. In fact, stimulation with A 3 AR agonists ameliorates pain hypersensitivity, suggesting that selective A 3 AR agonists may be used in addition to opioids for chronic pain management [130]. Importantly, it has been reported that the antinociceptive effects of A 3 AR agonists persist for at least up to 2 weeks of treatment, suggesting that stimulation of A 3 ARs does not induce tolerance [87]. A recent study reports the effect of a new A 3 AR agonist, AL170, in a rat model of colitis. AL170 mitigated the colonic damage and inflammation, reducing the release of TNF-α, IL-1β, and myeloperoxidase. AL170 was demonstrated to have an efficacy comparable to that of dexamethasone, one of the drugs most used in the treatment of colitis and other inflammatory bowel diseases [131]. The activation of A 3 ARs was able to decrease the infiltration of inflammatory cells and the production of inflammatory mediators, thus attenuating visceral pain [131]. A further study revealed that treatment with A 3 AR agonists is useful in another model of abdominal pain induced by colitis. In this model, Cl-IB-MECA and MRS5980 decreased visceral hypersensitivity in the post-inflammatory as well as in the persistence phase and showed effectiveness comparable to that of linaclotide, a drug used for the treatment of irritable bowel syndrome [132,133].
Conclusions
The modulation of ARs, especially their activation, induces potent antinociceptive effects in diverse preclinical models of acute and chronic pain. Nevertheless, the efficacy of AR ligands for the pharmacological treatment of pain in humans is still ambiguous, and it has yet to be determined whether AR modulation could be exploited to inhibit spontaneous pain. Many hopes were initially placed on A 1 AR agonists, but cardiovascular side effects prevented their progress in the clinic, at least with regard to their systemic administration. To get around these obstacles, alternative strategies have been proposed to continue exploiting the huge potential of adenosine and its receptors in pain management.
Partial agonists and allosteric modulators of ARs have been tested in preclinical settings, revealing potent antinociceptive effects with fewer side effects than conventional full agonists. Furthermore, localized activation of ARs has been proposed as a valid alternative to systemic delivery to maintain efficacy and reduce side effects, especially considering the ubiquitous localization of ARs in the human body. Exogenous ectonucleotidases as well as metabolizing enzyme inhibitors could increase the extracellular concentration of the short-lived mediator adenosine, enhancing its antinociceptive effects. As reviewed here, these alternative pharmacological approaches have shown promising results in preclinical models of pain and could offer a means to overcome the issues encountered so far by AR ligands in the clinic. Overall, the data summarized in this review highlight the therapeutic potential of ARs as pharmacological targets for the treatment of acute and chronic pain and the need to develop new and more effective strategies to exploit this potential. Author Contributions: Conceptualization, F.V. and S.P.; writing-original draft preparation, F.V. and S.P.; writing-review and editing, P.A.B. and K.V.; visualization, F.V. and S.P.; supervision, P.A.B. and K.V. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
A New Material-Oriented TES for Land Surface Temperature and SUHI Retrieval in Urban Areas: Case Study over Madrid in the Framework of the Future TRISHNA Mission
The monitoring of the Land Surface Temperature (LST) by remote sensing in urban areas is of great interest to study the Surface Urban Heat Island (SUHI) effect. Thus, it is one of the goals of the future spaceborne mission TRISHNA, which will carry onboard a thermal radiometer with four bands at a 60-m spatial resolution, acquiring during both daytime and nighttime. In this study, TRISHNA-like data are simulated from Airborne Hyperspectral Scanner (AHS) data over the Madrid urban area at 4-m resolution. To retrieve the LST, the Temperature and Emissivity Separation (TES) algorithm is applied with four spectral bands, considering two main original approaches compared with the classical TES algorithm. First, calibration and validation datasets with a large number of artificial materials are considered (called the urban-oriented database), contrary to most previous studies, which do not use a large number of artificial material spectra during the calibration step, thus impacting the LST retrieval over these materials. This approach produces one TES algorithm with one empirical relationship, called 1MMD TES. Second, two empirical relationships are used, one for the artificial materials and the other for the natural ones. These relationships are defined thanks to two calibration datasets (the artificial-surface-oriented and natural-surface-oriented databases, respectively).
Introduction
In total, 54% of the world's population lives in urban areas, and an increase to 66% is expected by 2050 [1]. Moreover, a recent study highlighted that the mean air temperature rise over urban areas could reach 4.4 K by 2100 [2]. As such, one-third of the world population could be subject to a higher risk of mortality due to heat waves, and this proportion could increase from 48% to 74% by 2100 [3]. This temperature rise is generally due to global warming and is accentuated in cities by the Urban Heat Island (UHI) effect, defined as the difference between the urban and rural (urban surroundings) mean air temperatures. The UHI effect has an impact on air pollution and can lead to sleep disorders or heat stress for inhabitants, and the air temperature can be used to derive their thermal comfort [4-6]. Remote sensing data in the Thermal InfraRed (TIR) spectral domain allow retrieving Land Surface Temperatures (LSTs), leading to Surface Urban Heat Island (SUHI) quantifications, the SUHI being considered as the difference between the mean LST of the central urban area and the mean LST of the surrounding rural area [7-9]. The SUHI and UHI effects are linked by different thermodynamic phenomena; the LST and the air temperature were found to be coupled during the night, although they are decoupled during the day [10,11]. Thus, UHIs can be analyzed with remote sensing data via the quantification of SUHIs, because variations of LST and air temperature are correlated [10,12-17]. The monitoring of SUHIs is also of primary interest in order to enhance the understanding of urban climate and of the impact of global warming and urbanization, and to help public policies supporting climate change mitigation and urban planning activities [18-20]. Satellite remote sensing data in the TIR domain provide spatial and temporal variations of the LST at different scales worldwide.
Furthermore, LST is also a key parameter for the understanding of physical processes other than the UHI effect, such as evapotranspiration, vegetation stress or water cycles [21-24]. For these purposes, new-generation sensors/satellites such as the Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment (TRISHNA), ESA-LSTM (Land Surface Temperature Monitoring), SBG (Surface Biology and Geology) from NASA or Gaofen-5 in China, all of them carrying onboard multispectral sensors with spatial resolutions under 100 m in the TIR domain, have been conceived. In particular, TRISHNA is an Indo-French joint mission between CNES and ISRO that will be launched in 2025, following other aborted missions such as Mistigri or Thirsty [25-28]. This mission will be mainly dedicated to the monitoring of agricultural areas, coastal waters and urban areas. A radiometer onboard will cover the Visible and Near InfraRed (VNIR), ShortWave InfraRed (SWIR) and TIR ranges, with 5 bands in the VNIR-SWIR spectral domain and 4 bands in the TIR one. Moreover, the TRISHNA mission will have daytime and nighttime overpasses with a 3-day repeat cycle and a 60-m spatial resolution, which is better suited for urban studies [29]. Therefore, TRISHNA and the aforementioned future missions with high spatial resolutions in the TIR domain require the development of adapted LST retrieval methods [30-35]. Currently, the Temperature and Emissivity Separation (TES) algorithm is among the most commonly used methods for LST retrieval from multispectral remote sensing data. It presents the advantage of estimating both LST and Land Surface Emissivity (LSE) [9,36-39]. It can be applied with a minimum of three thermal bands, so it is suited to processing TRISHNA images. However, TES needs a prior atmospheric correction, and inaccuracies in this correction artificially enlarge the retrieved spectral contrast, with important effects over gray-body surfaces, which have a weak spectral contrast [40-44]. Moreover, this algorithm has already been applied to retrieve the LST over urban areas [45-47]. Beyond the aforementioned limitations, LST retrieval over urban areas is not trivial because of several factors: (i) the strong 3D structure of the urban landscape leads to errors because the geometric effect is not taken into account [48,49], (ii) the mean size of urban objects is about ten meters, so the pixels observed from spaceborne sensors are not composed of only one material (mixed pixels), leading to some uncertainties in the retrieved LST [50-52], (iii) the adjacency effect [53], (iv) the anisotropy effect that can appear according to the solar position and the viewing angle [54], and (v) urban materials, especially artificial materials, exhibit a strong heterogeneity, spectrally, spatially and even temporally [55-58]. In order to be applied, the TES algorithm needs a prior calibration phase, achieved with a database of emissivity spectra. Most of the time, the amount of artificial materials in the database is very low, even for urban image processing [47,59,60]. Indeed, the first version of the TES algorithm considered 86 laboratory emissivity spectra of rocks, soils, vegetation, snow and water [36]. The authors also tested other spectra, finding similar results; thus, they concluded that the calibration method was valid.
Two main reasons explain why natural surfaces are more commonly used: (i) at the time of the development of the TES algorithm, urban studies were not as developed as nowadays, especially because high-spatial-resolution missions had not been launched yet, and (ii) artificial materials exhibit a strong spectral heterogeneity, with an LSE variability higher than that of natural materials, so the calibration phase can be challenging. Nevertheless, the TES algorithm has been applied over urban areas with knowledge of these limitations. For instance, a 7-band TES used to retrieve the LST over the Madrid urban area was calibrated with 108 natural materials [10]. Thus, changing the database could avoid inaccuracies of the TES algorithm when dealing with artificial materials. This article proposes two new versions of the TES algorithm based on two new approaches for the study of urban LST. The calibration of the TES algorithm is based on a non-linear regression of an empirical relationship. The first approach considers calibration with an urban-oriented database in order to retrieve the parameters of the regression. We call this database urban-oriented because it contains similar amounts of both artificial and natural materials, in contrast with classical databases based on natural materials only. A single empirical relationship is defined from this database, and we call this approach 1MMD TES. The second approach considers two empirical relationships, one for the artificial materials and the other for the natural ones. The urban-oriented database is split in two for calibration. We call this approach 2MMD TES. A priori information, in the form of a ground cover classification map, is used to associate each observed pixel with the right empirical relationship. This article uses airborne images over Madrid (Spain) obtained during the ESA-DESIREX (Dual-use European Security IR Experiment) campaign in 2008. Then, from these airborne acquisitions, TRISHNA images at a lower resolution are simulated. The ESA-DESIREX campaign took place in Madrid during the 2008 summer period, with daytime and nighttime radiance images acquired at a 4-m spatial resolution by the AHS (Airborne Hyperspectral Scanner) sensor, together with ground measurements. These images have been processed to study the SUHI effect over the city of Madrid using the TES algorithm with seven bands [10,47,61]. These previous works, the configuration of the AHS sensor (10 spectral bands in the TIR range) and the characteristics of the ESA-DESIREX campaign make this dataset a good candidate to prepare the future TRISHNA mission and to develop adapted LST retrieval methods over urban areas by simulating TRISHNA-like images from AHS data. This study is among the first to use TRISHNA-like data over urban areas and aims to assess the preliminary performance of this future mission for LST retrieval. It will help the development of algorithms for the future TRISHNA mission, for which a maximum of four thermal spectral bands is available at a 60-m spatial resolution. Thus, the performance of two new versions of the TES algorithm, the 1MMD TES calibrated with an urban-oriented database and the 2MMD TES, is studied in order to highlight their advantages, their limitations and their possible improvements to enhance the accuracy of the retrieved LST.
The following study is divided into six sections: Section 2 presents the materials: the ESA-DESIREX campaign with both airborne and ground data in Section 2.1, the simulation of TRISHNA-like data in Section 2.2 and the spectral databases to calibrate the TES algorithm in Section 2.3. Then, the TES algorithm is presented in Section 3, focusing on the classical approach in Section 3.1 and on our approach in Section 3.2. Section 4 deals with the results, and further discussion is provided in Section 5. Finally, conclusions and future works are highlighted in Section 6.
The ESA-DESIREX Campaign: Airborne, Ground and Classification Data
The ESA-DESIREX campaign was an urban experimental campaign that took place in Madrid during the summer period, from the 23 of June to the 6 of July 2008. During this campaign, airborne data were acquired with the AHS sensor operated by the Spanish Institute of Aeronautics (INTA, for Instituto Nacional de Técnica Aeroespacial). The AHS sensor is an 80-band radiometer covering the VNIR, SWIR, and TIR spectral ranges. This article focuses on the last ten bands, from 71 to 80, in the TIR range. The effective wavelengths in micrometers and the FWHM (Full Width at Half Maximum) for the ten bands of AHS in the TIR range are given in Table 1. In this work, daytime (around 11 A.M. UTC) and nighttime (around 10 P.M. UTC) radiance AHS images of the flight line going from north to south for the 28 of June, the 1 of July, and the 4 of July were processed and analyzed (i.e., 6 different images). The spatial resolution is 4 m and all the images were georeferenced [62]. Figure 1 illustrates an RGB image of the flight line going from north to south that was acquired on the 4 of July of 2008. Two areas were chosen for visual analysis: the Retiro Park in the center of Madrid and the Universidad Autónoma de Madrid (UAM) in the peri-urban northern area of the city. The numbers in Figure 1 indicate the locations of the ground LST measurements, which are described below. Radiosoundings were made through free or captive balloons that were launched twice a day (daytime and nighttime), recording physical parameters such as pressure, air temperature, relative humidity, wind velocity and wind direction up to 25 km in altitude [62]. The knowledge of these parameters allows retrieving the atmospheric transmittance as well as improving the atmospheric correction of the remote sensing images, which is of high relevance for the TES algorithm [61]. For the ground measurements, calibration and validation sites were selected because of: (i) the stable atmospheric conditions during the ESA-DESIREX campaign, (ii) the low water vapor content, (iii) the homogeneity of both LST and LSE, (iv) the absence of shadows and (v) flat grounds, avoiding the impact of the 3D structure on the measured radiance [62]. The sites over the processed flight line were, on the one hand, green grass as a cold target in a rugby field (1) and bare soil as a hot target in a soccer field (2), both located at the "Universidad Autonoma de Madrid" (UAM); on the other hand, water at the "Retiro lake" (Estanque Grande del Retiro) (3) was used as a cold target in the center of the city. Surface radiometric temperature, emissivity and downwelling atmospheric radiance were measured on these sites with radiometers such as Heitroniks or a 5-band CIMEL [62].
Ground radiances were measured with a 5-band CIMEL instrument, and during the campaign, the TES algorithm was applied to obtain LST and emissivity measurements for bare soil and green grass. For water, a 1-band radiometer (Heitroniks) was used with an emissivity value of 0.99 [62]. In addition, fixed masts were located at different sites: in a rural/sub-urban zone at UAM (rugby field) (1), in an urban-dense zone at "CSIC" (4) and "Printing" (5), and in an urban-medium zone at "Urbanism Building" (6). These fixed masts measured air temperature, relative humidity and ground radiometric temperature with 1-band radiometers. The radiometric temperature from the fixed masts was obtained with 1-band radiometers using the sky irradiance and an emissivity value of 0.9 for artificial surfaces to derive the LST. Table 2 gives the longitude and latitude coordinates of the calibration/validation sites and fixed masts. Four car transects were defined to drive throughout the city of Madrid and its surroundings to measure the LST, and they are all covered by the AHS flight lines. Car transect 1 runs along the north-south axis, car transect 2 covers the old center of Madrid, car transect 3 crosses open and vegetated areas, and car transect 4 follows wide streets with new buildings as well as the highway. The areas crossed by the car transects are not homogeneous, in order to better observe the thermal variations. The different car transects are useful for retrieving the SUHI values over different parts of the city. Figure 2 illustrates the two areas used for visual analysis, the Retiro Park and the UAM, with stars pointing out the locations used for calibration and validation. These areas were chosen because they include both natural and artificial materials and some validation points. The Retiro Park is an urban-dense zone, and the UAM is a rural sub-urban zone. Lastly, a supervised classification-based approach was applied during ESA-DESIREX on the daytime AHS images of the 4 of July, with ground measurements used to define the endmember spectra and Maximum Likelihood as the decision rule. Twelve classes were selected with a 73% kappa coefficient: water (lakes and swimming pools), trees, green grass, bright and dark bare soils, roads with asphalt, other roads and pavements, roofs with red bricks/tiles, roofs with asphalt, roofs with concrete and roofs with metal. The details of this classification can be found in [63]. Figure 3 shows the classification map obtained at a 4-m resolution over the Retiro Park and the UAM, with the legend for every classified type of material. Further information about the ESA-DESIREX campaign and its results over the city of Madrid can be found in the ESA-DESIREX 2008 final report [62].
Radiance Images
In the TIR range (8-14 µm), the radiance at the sensor level in a specific spectral band and close to nadir is expressed as:

L_{λ,sensor} = τ_{atm,sensor} · L_{λ,BOA} + L_{λatm↑,sensor}    (1)

where λ is the considered wavelength, τ_{atm,sensor} the atmospheric direct transmissivity between ground and sensor (the diffuse transmissivity is negligible in the thermal domain), L_{λatm↑,sensor} the upwelling path radiance and L_{λ,BOA} the radiance at the Bottom Of Atmosphere (BOA). The L_{λ,BOA} can be expressed as:

L_{λ,BOA} = L_{λ,surface} + (1 − ε_λ) · L_{λatm↓}    (2)

Furthermore,

L_{λ,surface} = ε_λ · L_{BB}(T_s)    (3)

where L_{λatm↓} is the downwelling atmospheric radiance, L_{λ,surface} is the radiance at the surface, ε_λ is the equivalent surface emissivity and L_{BB}(T_s) is the radiance defined by Planck's law for a black body at the temperature T_s.
Thus, the LST (T_s) is obtained by inverting Planck's law:

T_s = c_2 / ( λ · ln( 1 + c_1 / (λ^5 · L_{BB}(T_s)) ) )    (4)

with c_1 and c_2 being the constants in Planck's law, c_1 = 1.19104 × 10^8 W µm^4 m^-2 sr^-1 and c_2 = 14387.7 µm K. All the quantities in Equations (1)-(4) depending on the wavelength are mentioned as equivalent: they are integrated according to the spectral response of the considered sensor and normalized by it. As the number, positions and bandwidths of the four thermal spectral bands of TRISHNA had not yet been determined when the study was made, AHS bands were used to simulate the TRISHNA ones. Four AHS bands were chosen according to the closeness of their central wavelengths to the known central wavelengths of the TRISHNA sensor: AHS band 72 at 8.18 µm, 73 at 9.15 µm, 76 at 10.59 µm and 78 at 11.78 µm [26]. Figure 4 summarizes the simulation methodology.
• Step 1: From airborne level to TOA (Top of Atmosphere) level: To model the attenuation through the atmosphere, Equation (1) can be expressed for both the airborne level and the satellite level, leading to a linear relationship between the AHS radiance and the TOA radiance. Both the slope and the intercept of this relationship depend only on the atmospheric conditions and the observation angle (nadir is considered in this study). Details of the calculation of both the slope and the intercept can be found in [64]. Simulations are performed with the radiative transfer code COMANCHE using the ESA-DESIREX atmospheric profiles, 75 synthetic emissivity spectra and 6 temperatures for each spectrum [65,66]. The temperature range is 273-335 K and the emissivity range is 0-1, to have a dense representation of earth surface spectra. The slope and intercept parameters of the linear relationship are retrieved with least squares fitting and can then be applied to the AHS images to obtain the radiances at the TOA level.
• Step 2: Spatial aggregation (undersampling) and noise: The undersampling is made with a Function Transfer Model (FTM) and SNRs (Signal to Noise Ratios) that were defined according to the first TRISHNA sensor characteristics. The FTM is considered exponential in this spatial aggregation procedure.
• Step 3: Atmospheric correction to obtain the BOA radiance: The atmospheric coefficients integrated into the four selected spectral bands are kept in order to perform the atmospheric correction allowing to pass from TOA to BOA radiances before applying the TES algorithm (see Equation (1)). These coefficients are retrieved with the radiative transfer code COMANCHE thanks to the ESA-DESIREX atmospheric profiles.
At the end of this processing, daytime and nighttime BOA radiance maps at 4 and 60 m spatial resolutions are obtained. Figure 5 shows the AHS radiance and the TOA radiance at 4 m resolution, together with the TOA radiance at 60 m resolution, for band 72 of AHS (band 1 of TRISHNA), thus illustrating the different steps of the methodology. LST retrieval through the TES algorithm is described in Section 3.
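To make the chain of Equations (1)-(4) concrete, the following minimal Python sketch inverts the atmospheric and surface terms and then Planck's law. It is an illustration, not the COMANCHE-based processing used in this study: the function names are hypothetical, and the band-effective approximation (using a single effective wavelength per band instead of integrating over the spectral response) is an assumption made here for brevity.

```python
import numpy as np

# Band-effective Planck constants, as in Equation (4); wavelengths in micrometers.
C1 = 1.19104e8  # W um^4 m^-2 sr^-1
C2 = 14387.7    # um K

def invert_planck(wavelength_um, radiance):
    """Temperature from black-body radiance by inverting Equation (4)."""
    return C2 / (wavelength_um * np.log(1.0 + C1 / (wavelength_um**5 * radiance)))

def sensor_to_lst(l_sensor, tau, l_up, l_down, emissivity, wavelength_um):
    """Chain Equations (1)-(3) backwards, then apply Equation (4).

    l_sensor: at-sensor radiance; tau, l_up, l_down: atmospheric terms of
    Equations (1) and (2); emissivity: band-equivalent surface emissivity.
    """
    l_boa = (l_sensor - l_up) / tau                  # invert Equation (1)
    l_surface = l_boa - (1.0 - emissivity) * l_down  # invert Equation (2)
    l_bb = l_surface / emissivity                    # invert Equation (3)
    return invert_planck(wavelength_um, l_bb)        # Equation (4)
```

For AHS band 72, for instance, wavelength_um would be the 8.18 µm effective wavelength given above; in the actual TES processing, the emissivity is of course not known a priori but estimated jointly with the LST.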
Ground Cover Classification Map
The 4-m ground cover classification map was aggregated at 60 m by using the nearest neighbor approach with k = 15 neighbors and keeping the most prevalent class. Figure 6 shows the 60-m classification map obtained from the 4-m one. It is worth noting that this approach considers 60-m pixels as pure. This consideration is later discussed in Section 5.
Urban-Oriented Database for TES Calibration
This article focuses on the future TRISHNA sensor and proposes an urban-oriented database to calibrate the TES algorithm. Indeed, a database containing both natural and artificial surface emissivities is needed to perform a representative calibration for urban environments. In this study, emissivities were recovered from the following spectral databases: ECOSTRESS (formerly ASTER, [67,68]) for artificial surfaces and bare soils, CAPITOUL [69] (laboratory measurements of artificial surfaces taken in Toulouse, France), SLUM [70] (ground measurements of artificial surfaces in London, UK), and the MODIS [71] and IPGP [72] databases. The data processing differed slightly depending on whether the considered surfaces were artificial or natural. For natural surfaces, an approach similar to [73-75] was adopted:
1. All spectra that did not cover the spectrum from the VNIR to the TIR ranges were rejected.
2. To avoid any redundancies, the SAM (Spectral Angle Mapper) distance between each pair of spectra was computed [76] (a sketch of this filtering step is given at the end of this subsection). To define redundancy, the threshold was set to 1° according to the study made by [75].
3. The PRO-SAIL (Scattering by Arbitrarily Inclined Leaves combined with PROSPECT) model was used to simulate mixed spectra of soil and vegetation at the top of the canopy, with different values of Leaf Area Index (LAI) and Average Leaf Angle [77]. More than 60,000 spectra result from the process. This simulation of natural surfaces was chosen because: (i) it is more representative of the pixel that is observed at the sensor level, (ii) it takes into account the canopy effect and (iii) the TES algorithm was applied on this database with satisfactory experimental results [74]. Natural surfaces in our spectral databases are thus composed of pure and linearly mixed spectra of bare soil and vegetation. The PROSPECT model has been parametrized following [73-75].
4. The SAM distance with a threshold of 1° was computed again in order to avoid any redundancies.
5. All spectra with an equivalent emissivity lower than 0.7 between 10 and 12 µm were rejected.
For artificial surfaces, the following methodology was retained:
1. All spectra that did not cover the spectrum from the VNIR to the TIR ranges were rejected.
2. For the CAPITOUL database, the tiles were rejected, as the laboratory measurements showed some errors.
3. The SAM distance with a threshold of 1° was computed to avoid any high redundancies.
4. All spectra with an equivalent emissivity lower than 0.7 between 10 and 12 µm were rejected.
Note that no low-emissivity materials (lower than 0.7) were used, because of the known poor performance of the TES algorithm for these materials [40,78]. In the end, our spectral database consists of 266 emissivities of natural materials and 236 emissivities of artificial materials. In detail, artificial materials account for 47% and natural ones for 53%, which makes the spectral database representative of urban areas and their rural surroundings. Next, this database is split into calibration and validation datasets. For natural surfaces, a random sampling is made among the 266 emissivities, while for artificial surfaces, a random sampling is made among the different spectral databases of artificial materials in order to avoid any prevalence of one field instrument over another. Finally, two independent datasets with 251 emissivities each were used for calibration and validation, respectively. More precisely, each dataset contains 133 spectra of natural surfaces and 118 of artificial ones.
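As a sketch of the redundancy filtering used above, the following Python fragment computes the SAM distance between spectra and greedily discards near-duplicates under the 1° threshold. The greedy strategy and the function names are assumptions made for illustration; the exact filtering procedure is not detailed in the text.

```python
import numpy as np

def sam_deg(a, b):
    """Spectral Angle Mapper distance between two spectra, in degrees."""
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def filter_redundant(spectra, threshold_deg=1.0):
    """Keep only spectra whose SAM distance to every already kept spectrum
    exceeds the threshold (1 degree here), discarding near-duplicates."""
    kept = []
    for s in spectra:
        if all(sam_deg(s, k) > threshold_deg for k in kept):
            kept.append(s)
    return kept
```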
Instead of looking for a heterogeneous TES calibration spectral dataset able to represent all the different surfaces appearing in an urban satellite image, two sub-datasets characterizing, respectively, artificial and natural surfaces can be built. We call these databases artificial-surface-oriented and natural-surface-oriented, respectively. Thus, the above urban-oriented spectral database can be divided into the artificial-surface-oriented one, with 236 emissivities, and the natural-surface-oriented one, with 266 emissivities. In turn, these databases can be split into two independent calibration and validation datasets. This original approach leads to a material-specific calibration of TES.
LST Retrieval with the TES Algorithm
For N spectral bands, there are N observed radiances and N + 1 unknowns (N emissivities plus 1 LST); consequently, the system is underdetermined. Different methods use approximations or a priori information to derive the system unknowns. These methods can be divided into different categories depending on whether they use one single band, two bands or more, a combination of daytime/nighttime observations, or multi-angle observations. A review of the LST retrieval methodologies can be found in [79-82]. In this work, we focus on the TES algorithm, which uses an empirical relationship in order to solve the system, because this algorithm is commonly used and can be applied with three or more bands.
The Classical Approach: TES with One MMD Relationship or 1MMD TES
The TES algorithm was first introduced in [36] for ASTER data processing and is based on three modules: NEM, for Normalized Emissivity Method, RATIO, and MMD, for Maximum-Minimum Difference. Thus, TES jointly retrieves LSE and LST [83]. This algorithm can be applied to any sensor with three or more thermal bands. The first module, NEM, uses an initial emissivity value (here 0.99) and iteratively corrects it. The RATIO module normalizes the new emissivities by the average of all found emissivities in all the thermal bands. This preserves the shape of the spectrum and minimizes the sensitivity to errors in temperature estimation. The third module converts normalized emissivities into actual emissivities using the empirical relationship called the MMD relationship. The MMD relationship is expressed as:

ε_min = a + b · MMD^c    (5)

with ε_min the minimum equivalent emissivity and MMD the maximum difference between equivalent emissivities in the considered spectral thermal bands (in this study, bands 72, 73, 76 and 78 of AHS). The values of the coefficients a, b and c needed for the TES algorithm are retrieved with the spectral databases presented in Section 2.3. The system is non-linear and non-convex, so a Levenberg-Marquardt minimization was used to retrieve the coefficients. TES is applied directly on the BOA radiance L_{λ,BOA}, so an atmospheric correction is needed; see Section 2.2.
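The calibration of Equation (5) reduces to a non-linear least squares fit of (a, b, c) on (MMD, ε_min) pairs computed from the band-equivalent emissivities of the calibration dataset. Below is a minimal sketch using SciPy's Levenberg-Marquardt solver; the starting point uses the classical ASTER-TES coefficients (0.994, -0.687, 0.737) only as an assumed plausible initial guess, not as the coefficients obtained in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mmd_model(mmd, a, b, c):
    """Empirical MMD relationship of Equation (5): eps_min = a + b * MMD**c."""
    return a + b * np.power(mmd, c)

def calibrate_mmd(band_emissivities):
    """Fit (a, b, c) on a calibration dataset.

    band_emissivities: array of shape (n_spectra, n_bands) holding the
    band-equivalent emissivities in the four TES bands (AHS 72, 73, 76, 78).
    """
    eps_min = band_emissivities.min(axis=1)
    mmd = band_emissivities.max(axis=1) - eps_min
    # Levenberg-Marquardt least squares, as in the calibration described above.
    popt, _ = curve_fit(mmd_model, mmd, eps_min,
                        p0=(0.994, -0.687, 0.737), method="lm")
    return popt  # fitted (a, b, c)
```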
The TES Algorithm with 2MMD Relationships or 2MMD TES
Two independent TES algorithms are calibrated on material-specific sub-datasets, leading to an artificial-surface-oriented TES and a natural-surface-oriented TES; see Section 2.3. Each calibration is performed independently, following the scheme described in Section 3.1. Combining this approach with a ground classification map at the resolution of the TIR bands of the satellite, we can locally apply an adapted TES to each pixel. We call this approach the 2MMD relationship TES algorithm, i.e., 2MMD TES. The last step requires choosing how to associate the ground classification map with the appropriate MMD relationship. It can appear trivial to separate artificial and natural materials, but it is worth noting that, spectrally, both groups can overlap: some artificial materials can exhibit a spectrum close to natural ones and vice versa. Thus, each class of the ground classification map was analyzed regarding the ground LSEs that allowed the classification process, the results from the 7-band TES used for the AHS data and the results from the 1MMD TES. All these observations showed that the "bright bare soil" class had a higher LSE variability and a lower minimum emissivity than the other natural classes. Consequently, this peculiar class was considered as an outlier for the natural MMD relationship. The natural MMD relationship is then applied to classes 1, 2, 3, 4 and 6, and the artificial MMD relationship to the other classes; see Figures 3 and 6. A sketch of this per-class application is given below.
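The following minimal sketch illustrates the class-dependent application of Equation (5); the coefficient values are placeholders standing in for the fitted values (given in Table 3), and the class-to-relationship mapping follows the assignment just described.

```python
import numpy as np

# Placeholder (a, b, c) coefficients; NOT the values obtained in this study.
COEFFS_NATURAL = (0.99, -0.78, 0.80)
COEFFS_ARTIFICIAL = (0.97, -0.70, 0.75)
NATURAL_CLASSES = (1, 2, 3, 4, 6)  # classes assigned to the natural relationship

def eps_min_map(mmd_map, class_map):
    """Class-dependent MMD module of the 2MMD TES: Equation (5) applied pixel
    by pixel, with the relationship selected from the classification map."""
    a_n, b_n, c_n = COEFFS_NATURAL
    a_a, b_a, c_a = COEFFS_ARTIFICIAL
    natural = np.isin(class_map, NATURAL_CLASSES)
    return np.where(natural,
                    a_n + b_n * np.power(mmd_map, c_n),
                    a_a + b_a * np.power(mmd_map, c_a))
```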
Figure 7 shows the statistical fits of the MMD relationship (Equation (5)) for the calibration and validation datasets used for the 1MMD TES with the urban-oriented database. The fitted MMD relationship is illustrated with a red line. For the calibration dataset, the MMD relationship provides an RMSE of 0.014, a standard deviation of 0.014 and a correlation coefficient R of around 0.96. For the validation dataset, the RMSE is 0.013, the standard deviation is 0.013 and the correlation coefficient R is 0.96. These values lead to an error on the LST of about 1 K. Similar performances were found in the literature. The study from [61], with 299 natural materials, gives a standard deviation of 0.019 and a correlation coefficient R of 0.96. The study from [10] on AHS images over urban areas, with a database using 108 natural materials, gives a standard deviation of 0.005 and a higher correlation coefficient R of 0.997. The study of [78], using 54 artificial materials, gives an RMSE of 0.024, which is higher than our RMSE of 0.013. Therefore, mixing artificial and natural materials does not degrade the performance of the MMD relationship for the 1MMD TES compared with other studies that used only natural or only artificial materials in the calibration phase. It is worth noting as well that the aforementioned studies used more than four spectral bands. With these observations, the developed urban-oriented database, including both natural and artificial materials, thus appears suitable for retrieving LSTs over urban areas. Figure 8 shows the statistical fits of the two MMD relationships for (1) the artificial-surface-oriented and (2) the natural-surface-oriented databases, for their respective calibration and validation datasets. In both cases, the fitted MMD relationship is illustrated with a red line. For artificial materials, the calibration dataset gives an RMSE of 0.015, a standard deviation of 0.015 and a correlation coefficient R of almost 0.96, similarly to the results obtained on the urban-oriented database. For the validation dataset, the RMSE is 0.014, the standard deviation is 0.014 and the correlation coefficient R is around 0.97. For natural materials, the calibration phase gives an RMSE of 0.004, a standard deviation of nearly the same value and a correlation coefficient slightly higher than 0.99. The validation phase gives an RMSE of 0.005, a standard deviation of 0.005 and a correlation coefficient of 0.99, which is similar to the results found in [10] using seven bands. Thus, it can be highlighted that, with only four bands, the performances of our 2MMD relationships become similar to those of a single MMD relationship with seven bands. Moreover, comparing Figures 7 and 8, the minimum emissivity can be underestimated for natural materials when only one MMD relationship is used, which can lead to an overestimation of the retrieved LST. In addition, the artificial materials used in our urban-oriented database tend to have a lower minimum emissivity than the natural materials, as well as a larger spectral contrast, in accordance with their high spectral variability. Separating both kinds of materials allows providing a better suited MMD relationship to retrieve the LST. From now on, to highlight the number of spectral bands, the 2MMD TES is called 2MMD-4-band TES, the 1MMD TES is called 1MMD-4-band TES and the 7-band TES used for comparison is called 1MMD-7-band TES. Table 3 provides the coefficients of the MMD relationship according to the database and the number of bands. LST maps obtained at a 4-m resolution from the 1MMD-7-band TES by [10,59,78] were used as references to evaluate and compare the performance of both the 1MMD-4-band and 2MMD-4-band TES algorithms applied to AHS data at a 4-m spatial resolution. While [10,59,78] used a classical calibration-validation database, the two versions of the TES algorithm, 1MMD-4-band and 2MMD-4-band, use the aforementioned urban-oriented database; this comparison will thus allow a better understanding of the advantages and limitations of such a database. At the satellite level, a temperature upscaling based on the Stefan-Boltzmann law, applied to each 4-m LST map, is considered as reference. Consequently, the 4-m LST maps of this study are spatially aggregated to 60 m to be compared with the TRISHNA-like LST maps obtained at a 60-m spatial resolution. The comparison of the TRISHNA-like LST with the aggregated one for each method is used to better understand how the decrease in spatial resolution impacts each method's performance.
Calibration and Validation of the 1MMD TES and the 2MMD TES
To quantitatively compare the images, the Root Mean Square Error (RMSE), the Mean Bias Error (MBE) and the Structural Similarity Index (SSIM) were chosen (all formulas for these indexes can be found in [64]). First-order statistics of the LST maps, i.e., the mean and the standard deviation, were computed to help in the analysis. Lastly, the local, pixel-per-pixel difference was used to highlight the largest differences between the 1MMD-4-band and 2MMD-4-band TES algorithms. In addition, as ground measurements are available (Section 2.1), local comparisons can be made by computing the RMSE and MBE between ground measurements and the corresponding pixels in the 4-m LST images. A sketch of these metrics and of the Stefan-Boltzmann aggregation is given below.
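As an illustration, the following sketch implements the RMSE and MBE used here, together with the Stefan-Boltzmann-based upscaling of a 4-m LST map to 60 m (a 15 × 15 block average of T^4 followed by a fourth root). For the SSIM, an off-the-shelf implementation such as skimage.metrics.structural_similarity can be used. The function names and the MBE sign convention are assumptions, the latter chosen so that a negative MBE indicates an estimate warmer than the reference, consistently with the discussion of the results below.

```python
import numpy as np

def rmse(ref, est):
    """Root Mean Square Error between two LST maps."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def mbe(ref, est):
    """Mean Bias Error, computed as mean(ref - est): negative when the
    estimated LST is warmer than the reference (assumed sign convention)."""
    return float(np.mean(ref - est))

def aggregate_lst(lst_4m, factor=15):
    """Stefan-Boltzmann-based upscaling of a 4-m LST map to 60 m:
    T_60m = (mean(T_4m**4))**(1/4) over each factor x factor block."""
    h, w = lst_4m.shape
    blocks = lst_4m[:h - h % factor, :w - w % factor].astype(float)
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return np.power(np.mean(blocks ** 4, axis=(1, 3)), 0.25)
```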
Results

The 4th of July 2008 was chosen to show the LST maps provided in this section. Similar visual and quantitative results were found for the other acquisitions. However, the statistical analysis for the comparison of TES LSTs with ground measurements, as well as the SUHI values, covers all the acquisitions.

LST Map Reference

Figure 9 shows the daytime and nighttime LST reference maps in K at 4 m for the 4th of July over the two studied areas (the Retiro Park and the UAM). During the daytime, for both the Retiro Park and the UAM, spatial variations of LST are easily noticeable. For the Retiro Park, the Retiro lake presents the lowest temperature, around 300 K, the vegetated area is around 310 K, the left part of the Retiro Park is between 300 and 320 K and the right part is between 310 and 320 K. For the UAM, the rugby field is between 300 and 310 K, the soccer field around 315 K and some building roofs have high LSTs around 335 K. The surroundings of the UAM have an LST ranging from 300 to 325 K, with the highest LSTs over bare soil waste ground (classified as "dark bare soil") and the coolest LSTs over vegetated areas (classified as "trees" or "green grass"). The roads have an LST value of around 315 K.

During the nighttime, for the Retiro Park, the water lake does not present LST variations between day and night, with LST ≈ 300 K, in agreement with the heat capacity of water. The streets have the highest LSTs, around 305 K, and some other roads have an LST value of around 302 K. The vegetated area of the Retiro Park is around 298 K. However, for both daytime and nighttime, some unusual patterns can be seen with low LSTs, especially on one roof of the Atocha train station. For the UAM, unusual patterns can be seen in the center of the image within the campus, with low LSTs under 285 K. Otherwise, the streets have the highest LSTs, around 302 K, and the surroundings have an LST value ranging from 285 to 297 K. It is worth noting that the observed unusual patterns are seen during both daytime and nighttime. This observation is discussed in Section 5.1.

Table 4 gives the mean and the standard deviation of the LST. Between the Retiro Park and the UAM, the mean LST difference is 1.6 K and the std difference is 2.3 K for daytime. The mean LST difference is 4.4 K for nighttime and the std values are the same. Between daytime and nighttime, the mean LST difference is 17.3 K for the Retiro Park and 20.1 K for the UAM. The std difference is 5.5 K for the Retiro Park and 3.2 K for the UAM.

Figures 10 and 11 show the daytime and nighttime LST in K for both studied areas as retrieved with the 1MMD-4-band TES and the 2MMD-4-band TES, respectively. For a statistical analysis of performance, Table 5 shows the RMSE, MBE and SSIM between the 1MMD-7-band TES from [10,59,78] and the 1MMD-4-band TES and 2MMD-4-band TES of this study. This table also shows the mean and standard deviation of the LSTs obtained with the 1MMD-4-band TES and the 2MMD-4-band TES.

LST Retrieval with the 1MMD-4-Band TES and the 2MMD-4-Band TES with TRISHNA-like Spectral Configuration at 4 m

During both daytime and nighttime and for both the Retiro Park and the UAM, the obtained LST maps are similar to the LST map reference (see Figures 9-11), so the same observations can be made. Looking at the mean and standard deviation of the LST in Table 5, for the 1MMD-4-band TES, the mean LST difference between the Retiro Park and the UAM is 1.8 K and the std difference is 2.5 K for daytime. The mean LST difference is 4.5 K and the std difference is 0.4 K for nighttime. For the 2MMD-4-band TES, the mean LST difference between the Retiro Park and the UAM is also 1.8 K and the std difference is 2.4 K for daytime. The mean LST difference is 4.6 K for nighttime, and the std difference is 0.1 K. Between daytime and nighttime, for the 1MMD-4-band TES, the mean LST difference is 17.7 K for the Retiro Park and 20.5 K for the UAM. The std difference is 5 K for the Retiro Park and 2.9 K for the UAM.
For the 2MMD-4-band TES and between daytime and nighttime, the mean LST difference is 17.7 K for the Retiro Park and 20.5 K for the UAM, just like the 1MMD-4-band TES. The std difference is 5.1 K for the Retiro Park and 2.8 K for the UAM. These differences are similar for all TES algorithms, meaning that there is a physical coherence between the three versions.

Considering the comparison with the LST map reference, during the daytime, the RMSE values are lower for the 2MMD-4-band TES than for the 1MMD-4-band TES by 0.22 K for the Retiro Park and 0.06 K for the UAM, meaning that there are larger discrepancies between the 1MMD-4-band TES and the LST reference. However, the MBE values are lower for the 1MMD-4-band TES than for the 2MMD-4-band TES. Both methods tend to retrieve a higher LST than the 1MMD-7-band TES, because the MBE is negative, and the 2MMD-4-band TES provides higher mean LSTs over the studied areas than the 1MMD-4-band TES. It is important to remark that the MBE is a signed metric, so underestimations and overestimations of LST can compensate, leading to an MBE closer to zero; this may explain the better results for the 1MMD-4-band TES. The SSIM index is high, with a value of 0.98 for both areas and methods.

During the nighttime, the 2MMD-4-band TES provides an RMSE value higher by 0.27 K than the 1MMD-4-band TES for the UAM, but an RMSE value lower by 0.17 K for the Retiro Park. Looking at the MBE, values are lower for the 1MMD-4-band TES than for the 2MMD-4-band TES. The MBE values are lower than during the daytime, which can be explained by the absence of solar irradiance, i.e., the smaller variance in LST during the night. The SSIM values are very similar between daytime and nighttime, with little difference between the 1MMD-4-band TES and the 2MMD-4-band TES.

Lastly, Figure 12 shows the pixel per pixel difference between the 2MMD-4-band TES and the 1MMD-4-band TES to highlight the pixels where the difference is high. It can be observed that during the daytime and nighttime, the difference is larger for artificial surfaces, especially the streets and the dense area at the left of the Retiro Park, and the classes "other roads and pavements" and "roofs with metal" for the UAM, with a difference of around 1 K for daytime and between 0.5 and 1 K during the nighttime. For the natural surfaces, such as the vegetated area of the Retiro Park, the water lake and the bare soils or trees, the difference is around −0.5 K for both daytime and nighttime, in agreement with the observations in Figures 7 and 8. Indeed, the 1MMD-4-band TES can overestimate the LST for natural materials. Thus, the 2MMD-4-band TES tends to be the optimal method to retrieve the LST for both kinds of materials. To explore this observation further, the comparison with ground LSTs is useful; it allows assessing which TES algorithm performs best.
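The selection logic of the 2MMD variant can be summarized by the following sketch (a simplified illustration; the coefficients below are placeholders, the calibrated values for the 4-band configuration being those reported in Table 3):

```python
import numpy as np

# Placeholder (a, b, c) triplets; the actual calibrated coefficients of the
# eps_min = a - b * MMD**c relationships are those given in Table 3.
MMD_COEFFS = {
    "natural":    (0.99, 0.74, 0.80),
    "artificial": (0.95, 0.73, 0.85),
}

def two_mmd_emin(mmd_map, class_map):
    """Per-pixel minimum emissivity: the classical TES empirical relation
    eps_min = a - b * MMD**c, with the (a, b, c) triplet chosen from the
    ground cover classification (natural vs. artificial surfaces)."""
    emin = np.empty_like(mmd_map)
    for cls, (a, b, c) in MMD_COEFFS.items():
        mask = class_map == cls
        emin[mask] = a - b * mmd_map[mask] ** c
    return emin
```

With a single entry in MMD_COEFFS, the same function reduces to the 1MMD case.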
Comparison with LST Ground Measurements

In order to validate the retrieved LSTs at 4 m, a comparison with ground measurements is performed. Two cold targets and four hot targets are chosen: green grass, water, bare soil and three different roofs located in different zones (see Figure 1). Ground LST measurements are selected according to their closeness in time to the daytime and nighttime flights. This gives a total of 30 measurements to compare ground LSTs with retrieved ones over all the acquisitions.

Tables 6 and 7 show the comparison between ground LSTs and the three versions of the TES algorithm (1MMD-4-band TES, 2MMD-4-band TES and the 1MMD-7-band TES from [78]) for the 4th of July, daytime and nighttime, respectively. A star points out the retrieved LST closest to the ground LST. During the daytime, the 1MMD-7-band TES provides the closest LST for the cold target "green grass" and the hot target "bare soil". On the other hand, for the three artificial materials located on the roofs, as well as for the water lake at the Retiro Park, the 2MMD-4-band TES has the best performance. Thus, for artificial materials, the 2MMD-4-band TES is the optimal method in this study. During the nighttime, the 2MMD-4-band TES outperforms the other methods for four out of five targets (the cold target "water" was not measured that night). Again, for the hot target "bare soil", the 1MMD-7-band TES provides the closest LST.

For both daytime and nighttime, the 1MMD-4-band TES never performs best. However, the differences are not very large, and it can be seen that for artificial materials, the 1MMD-4-band TES is closer to the ground LSTs than the 1MMD-7-band TES, except for the "Urbanism" site during the nighttime, with only 0.1 K between the 1MMD-7-band TES and the 1MMD-4-band TES. It is worth noting that some significant errors remain over the artificial materials for the three different versions of the TES algorithm. This observation is discussed in Section 5.1.

In Table 8, we compare retrieved and ground LSTs by combining all six available remote sensing acquisitions and considering daytime, nighttime, artificial and natural surfaces separately. Thus, Table 8 gives the RMSE and MBE values in K for all the acquisitions, between ground LSTs and the LSTs retrieved by each method, separating daytime, nighttime, artificial and natural materials. The same observations can be made: the 2MMD-4-band TES provides the best performance except for the natural materials, where the 1MMD-7-band TES is better. However, for artificial materials and globally, the 2MMD-4-band TES provides better results, because the RMSE decreases by 1.6 K over artificial materials and 1 K globally compared with the 1MMD-7-band TES, and by 0.5 and 0.4 K compared with the 1MMD-4-band TES.

The same observations as at 4 m about the spatial patterns can be made for the aggregated maps. In the Retiro Park area, the water lake and the park are well discernible, as well as the roads during the nighttime. For the UAM, the university structures are less visible due to aggregation. However, natural landscapes and some hot points (such as the parking lot) are visible, as well as roads. Table 9 shows the mean LSTs and the standard deviations for each aggregated map. The first-order statistics are quite similar, with a difference of 0.4 K between the mean 1MMD-4-band TES LST and the mean 2MMD-4-band TES LST during the daytime and 0.3 K during the nighttime, both for the Retiro area. The differences of the averaged values over the UAM are, respectively, 0.2 and 0.1 K during the night and day. The standard deviation is slightly higher for the 2MMD-4-band TES than for the 1MMD-4-band TES, indicating the ability of the 2MMD-4-band TES to capture a higher variability of LSTs. During the nighttime, LST variations are less important, so the standard deviation is lower.
In addition, the mean LSTs are very similar to those at a 4-m spatial resolution, and the standard deviation values are lower due to the aggregation, which tends to smooth the LST variations.

Table 9. First-order statistics for the aggregated LST maps according to each method.

Figure 15 shows the LST retrieved over the Retiro and the UAM with the 1MMD-4-band TES at the satellite level, and Table 10 shows their mean LSTs and standard deviations, as well as the RMSE and MBE values between this LST and the aggregated LST from the same TES. This comparison allows highlighting the error due to the spatial resolution. Visually, the Retiro Park and its lake are discernible, as well as the denser historic neighborhood at the west of the park and the newer one at the north-east. In addition, during the nighttime, the main roads/streets are also distinguishable. For the UAM, the same visual results as in Figure 13 are obtained: natural landscapes and roads can be observed. However, it is worth noting that the LST values over roads or in the UAM campus cannot be resolved at this spatial resolution.

For the 1MMD-4-band TES, the RMSE values are 1.1 and 2.3 K for both areas during the daytime and nighttime, respectively. The MBE values are low during the daytime, with a value of −0.62 K for the Retiro Park and −0.34 K for the UAM. During the nighttime, the MBE values are higher, with −1.3 K for both study areas. The SSIM values are not high: 0.58 for the Retiro Park and 0.56 for the UAM during the daytime, and −0.03 and −0.76 for the Retiro Park and the UAM, respectively, during the nighttime. The MBE values are negative, so the 60-m LST is lower than the aggregated one. In addition, the SSIM values are lower for the Retiro Park, due to the aggregation and the very dense spatial structure of the area. The first-order statistics are lower for the 1MMD-4-band TES than for the aggregated LST map: the mean LST decreases by 0.5 and 1.3 K for the Retiro Park for daytime and nighttime, respectively. For the UAM, the mean decreases by 0.3 and 1.3 K for daytime and nighttime, respectively. The spatial variability of the LST is lower for the 1MMD-4-band TES than for the aggregated map, meaning that the impact of the spatial resolution is noticeable on spatially averaged values and that the LST is smoothed.

Table 10. RMSE, MBE and SSIM between the 60-m LST from the 1MMD-4-band TES and the aggregated 4-m to 60-m LST from the 1MMD-4-band TES, and first-order statistics of the former.

Figure 16 shows the LST retrieved from the 2MMD-4-band TES at the satellite level for both the Retiro and UAM areas, and Table 11 provides their means and standard deviations, together with the RMSE, MBE and SSIM. Visually, the same observations as for the 1MMD-4-band TES can be made. The RMSE values are between 1.25 and 2.5 K for both areas during the daytime and nighttime. The MBE values are −0.62 K for the Retiro Park and −0.42 K for the UAM for daytime. During the nighttime, the MBE values are higher, with −1.51 K for the Retiro Park and −1.61 K for the UAM. The SSIM values are not high: 0.59 for the Retiro Park and 0.56 for the UAM during the daytime, and −0.01 and 0.795 for the Retiro Park and the UAM, respectively, during the nighttime. The first-order statistics are lower for the 2MMD-4-band TES than for the aggregated LST map. The mean LST decreases by 0.6 and 1.6 K for the Retiro Park for daytime and nighttime, respectively. For the UAM, the mean decreases by 0.4 and 1.4 K for daytime and nighttime, respectively.
The LST spatial variability is not as high as in the aggregated map, as shown by the lower standard deviation values. The impact of the spatial resolution is noticeable: the LST is smoothed (Table 11).

Table 11. RMSE, MBE and SSIM between the 60-m LST from the 2MMD-4-band TES and the 60-m aggregated LST from the 2MMD-4-band TES, and first-order statistics of the former.

Lastly, Figure 17 shows the pixel per pixel difference between the 2MMD-4-band TES and the 1MMD-4-band TES. The same observations as in Figure 12 can be made. Indeed, during the daytime over the Retiro Park area, the pixel per pixel difference is positive over artificial surfaces, between 0.5 and 1 K for daytime and almost 0.5 K for nighttime. The difference is negative over natural surfaces, between −0.5 and −1 K for daytime and nighttime, in agreement with the comparison at four meters for both airborne images and ground measurements. The difference is lower at nighttime than at daytime for artificial materials, which is in agreement with their thermal inertia.

The SUHI Effect at 4 and 60 m

The SUHI is usually computed with mean LSTs at night [8,47]. Two areas are therefore defined to compute the SUHI of Madrid from TRISHNA-like images, just like in [84]: an area around the Retiro Park as the central urban zone and an area above the UAM as the rural surroundings (see also Figure 2). Table 12 shows the SUHI values obtained at both 60-m and 4-m spatial resolutions, for the three dates (28th of June, 1st and 4th of July) and with the three TES methods studied in this work. The SUHI values of the 1MMD-4-band TES and the 2MMD-4-band TES are very similar. The 1MMD-4-band TES and the 2MMD-4-band TES provide higher SUHI values than the 1MMD-7-band TES, and at a 60-m spatial resolution, SUHI values are slightly higher than at a 4-m resolution, except for the 2MMD-4-band TES for two dates out of three.

Moreover, some ground LST measurements made with four car transects at nighttime (22h, just like the flight lines) give the SUHI values shown in Table 12 [62]. The LST difference between urban and rural areas was measured at 22h00 for the four transects until the 3rd of July; Table 12 therefore only shows the results of the LST difference for the 28th of June and the 1st of July. These values were collected in Appendix 1 of the ESA-DESIREX 2008 final report. It is worth noting that the areas of the four transects are not exactly the same as the ones used to compute the SUHI from remote sensing data. However, the ground values are in good agreement with the remotely sensed SUHI values. The absolute value of the difference between the 4-m and 60-m values is around 0.2 K for both the 1MMD-4-band TES and the 2MMD-4-band TES.

Figure 18 shows the SUHI map for the Retiro Park on the 4th of July at 4-m and 60-m spatial resolutions. For both spatial resolutions, the roads have the highest SUHI values. The average SUHI value for both spatial resolutions is between 6 and 7 K, in agreement with the results from Table 12. Both maps are very similar, but it can be observed at 60 m that the 2MMD-4-band TES has larger SUHI values than the 1MMD-4-band TES, especially for roads and the vegetated area of the Retiro Park, because the 2MMD-4-band TES is able to describe the high variability of LST, which strongly depends on the nature of the surfaces.
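For reference, the SUHI computation described above amounts to a difference of zonal means, as in this minimal sketch (the boolean masks delimiting the urban and rural zones are assumptions of the example):

```python
import numpy as np

def suhi(lst_night, urban_mask, rural_mask):
    """SUHI intensity (K): mean nighttime LST of the central urban zone
    (here, around the Retiro Park) minus the mean nighttime LST of the
    rural surroundings (here, above the UAM area)."""
    return np.nanmean(lst_night[urban_mask]) - np.nanmean(lst_night[rural_mask])
```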
Comparison between the 1MMD-4-Band TES and the 2MMD-4-Band TES for LST and SUHI Retrieval

For LST retrieval at four meters, the LST maps show that, visually, there is a good agreement between the three versions of the TES algorithm and that the observed patterns can be physically explained. For instance, the left area of the Retiro Park is denser than the right one, leading to higher LSTs. This can also be due to the building materials used, as the left area is a historic neighborhood and the right one was built more recently. In addition, high LSTs are mainly found in streets, bare soil waste grounds and some buildings, with cooler areas prevailing in the surrounding vegetated areas and some roofs. The same conclusions are obtained for the UAM area, where buildings and bare soils present higher LSTs. During the day, the LST spatial variability is higher for the Retiro Park, which is explained by the larger variety of materials in this area, whereas the UAM is covered by a large proportion of similar natural surfaces (Figures 2 and 3). Lastly, the spatial variability of the LST is lower at nighttime than at daytime, explained by the absence of solar irradiance, which leads to a homogenization of the LST.

Interestingly, some unusual physical patterns can be seen, especially over the Atocha train station, where high and low LSTs are observed. This is mostly related to metals. Metallic materials are known to be poorly processed by the TES algorithm, as their emissivity is very low and is considered an outlier for the MMD relationship. Thus, the Atocha train station roof (south of the Retiro Park, see Figure 2) has a very low LST for both daytime and nighttime. This roof is classified as "roofs with metal" and "roofs with concrete". It is possible to check visually on Google Earth images that this roof is an open car park with square metal roofs. Thus, there is a strong cavity effect coupled with metal materials, which explains the unusual physical pattern. In addition, the center of the UAM area presents both high and low LSTs. These patterns are not physical and can be due to errors in the LSE retrieval, the cavity effect or the non-exact-nadir view of the AHS sensor. Here again, the roofs of the university are classified as "roofs with metal"; these roofs had a very low assigned emissivity in [63].

When comparing with ground LSTs (Tables 6-8), the 2MMD-4-band TES outperforms the 1MMD-4-band TES over both natural and artificial materials. These results show the capacity of the double MMD relationship to recover a large variability of LST values, which becomes very important in urban environments where both natural and artificial materials are present. As expected, the largest discrepancies are seen for the artificial materials during the daytime. The ground LSTs are significantly higher than the retrieved LSTs, especially for the "CSIC" and "Urbanism" sites, with a difference that can range from 14 to 17 K on the 4th of July (see Table 6). In addition, the RMSE value for daytime measurements exceeds 7 K and reaches 9 K for artificial materials. Even if the new 2MMD-4-band TES proposed in this study recovers a higher LST variability, it still needs to be studied further in order to provide better results for very hot targets. Other than the intrinsic limitations of TES in accounting for extreme LST values, these differences in LST when observing very hot targets can be due to several factors. First, at a very fine scale, the LST can be influenced by turbulence.
Thus, if the measurements are not perfectly synchronized, temperature differences can appear, and discrepancies increase as the ground sample distance becomes finer [85]. Looking at the ground measurements, the LST around the flight hour (10 min before and 10 min after) can vary from 1 to 3 K, and even 10 K for the "CSIC" site, during the daytime and from 0.5 to 1 K during the nighttime. Second, the pixels at the 4-m spatial resolution can be mixed: the retrieved LST is integrated over the pixel, which is of very different size than the punctual ground measurements. However, in this study, ground measurements were performed on surfaces large and homogeneous enough to neglect this effect.

Regarding the LST maps at a 4-m spatial resolution, the comparison with the 1MMD-7-band TES (used as a reference) gives a better RMSE value for the 2MMD-4-band TES, but the MBE values are lower with the 1MMD-4-band TES, which can be explained by the fact that both the 1MMD-4-band TES and the 1MMD-7-band TES account for a lower LST variability than the 2MMD-4-band TES. In addition, the RMSE difference between the 1MMD-4-band TES and the 1MMD-7-band TES can be due to the new urban-oriented MMD relationship, which is better adapted to estimate the LST of artificial materials but can overestimate the LST of natural materials, whereas the 1MMD-7-band TES tends to be optimal for the latter. Furthermore, the MBE is a signed metric, so negative and positive errors can compensate, leading to apparently better values that do not necessarily reflect better local estimations. Figure 12 shows that the larger differences are seen over the artificial materials, which is in agreement with the ground LSTs.

At a 60-m spatial resolution, the pixel per pixel difference in Figure 17 shows that the larger differences are positive over the artificial materials and negative over natural ones. Actually, the 60-m pixels are most of the time mixed pixels containing a large number of materials. This reduces the performance of TES (independently of the number of MMD relationships) when recovering the LST. However, the mixed nature of 60-m pixels does not strongly impact the classification step of the 2MMD-4-band TES, since this classification only distinguishes natural and artificial pixels. Nowadays, in the Madrid city center, the amount of natural surfaces in the urbanized area (outside parks) is negligible, so considering that pixels in the Retiro Park are natural and pixels in urbanized neighborhoods are artificial is close to reality. Moreover, the size of the Retiro Park is greater than the pixel size, so many pure natural-surface pixels are considered. The 1MMD-4-band TES tends to overestimate the LST of natural materials, whereas the 2MMD-4-band TES is better adapted. Furthermore, the emissivity of a mixed pixel is not trivial to estimate, so both methods tend to be less accurate there.

For SUHI retrieval (Figure 18), both methods provide similar patterns, with the roads showing the largest values. Compared with ground SUHI values, the 2MMD-4-band TES provides higher values than the 1MMD-4-band TES, but both methods provide close values. However, the 2MMD-4-band TES tends to provide SUHI values closer to the 4-m maps than the 1MMD-4-band TES, because the spatial LST variability is better retrieved, as confirmed by the higher std values at 60 m for the 2MMD-4-band TES than for the 1MMD-4-band TES. Given these observations, the 2MMD-4-band TES can be considered the more suitable method to retrieve the SUHI value.
Indeed, the use of other urban campaigns with airborne images and ground measurements will help to highlight the contribution of the 2MMD-4-band TES.

About the Use of a Ground Cover Classification Map

Considering the prevalence of artificial materials in urban areas and the comparison with ground LSTs showing better retrieval for these materials, the 2MMD-4-band TES is the most suitable method in this study, as it uses an a priori on the land cover to help the TES algorithm better process artificial and natural surfaces. However, the 2MMD-4-band TES requires a ground classification map of satisfactory quality. Land cover retrieval still needs investigation, such maps are not always available, and the computational cost is higher, which can be prohibitive for real-time processing. TRISHNA will have concomitant reflective and thermal data, so a near-similar classification process will be possible. However, TRISHNA will have only five multispectral bands in the reflective domain, so the accuracy of the derived ground cover map may be lower than that of the ground cover map used in this study, which is based on the 80 bands of AHS. This can be considered a limitation, but it also shows the possibility of using sensor-independent ground cover maps. It is worth noting that no universally optimal classification method exists; performance depends on the available data. The ground classification map used in this study was also built with the help of ground measurements, and such observations are not always available.

In addition, the comparison with ground LSTs showed that the 2MMD-4-band TES was not the optimal method for the bare soil site, where the 1MMD-7-band TES provides better results. This can be due to the higher number of bands used in the 1MMD-7-band TES. However, during the daytime, the difference is only 1 K between the 2MMD-4-band TES and the 1MMD-7-band TES, and during the nighttime, the difference is only 0.5 K. Moreover, this class contains only 3% of the pixels, so it does not significantly impact the global results. Lastly, a ground classification map can be less reliable over mixed pixels, depending on the spatial structure and the spatial resolution, because it treats pixels as pure. The results of this study show that mixed pixels can be poorly processed by the TES algorithm independently of the number of MMD relationships. Other land-cover-related products, such as imperviousness, could be analyzed to replace the ground cover map.

TRISHNA Framework: Impact of the Spatial Resolution

The comparison of TRISHNA-like LST data with aggregated LST maps shows that the physical patterns are similar but that the spatial resolution impacts the performance of the TES for spatially averaged values. Some artificial structures are still discernible at a 60-m spatial resolution, mostly roofs but also roads. The mean statistics of the LST show a satisfactory agreement between the 4-m and 60-m spatial resolutions, with a maximum difference of 2 K due to the spatial smoothing. In addition, the observations show that the LST spatial variability decreases as the spatial resolution becomes coarser. For the SUHI retrieval, the estimated values at the 60-m spatial resolution are in good agreement with the values at the 4-m spatial resolution, with a difference of around 0.2 K. Figure 18 shows that for pure pixels, such as the vegetated area of the Retiro Park and its water lake, the spatial resolution does not strongly impact the SUHI values.
On the contrary, the mixed pixels at 60 m have lower SUHI values, such as the roads, with a difference of around 2 K between both spatial resolutions. Furthermore, the artificial materials provide higher SUHI values than natural ones, which is in agreement with the thermal inertia of the materials, so a reliable analysis can be made.

Conclusions and Future Works

A new material-oriented TES algorithm has been developed through two new approaches: (i) the use of a more representative spectral database we called the urban-oriented database, which contains similar amounts of artificial and natural materials from laboratory or field spectra; (ii) the use of two MMD relationships instead of one, by differentiating an MMD relationship for artificial materials (artificial-surface-oriented) from an MMD relationship for natural materials (natural-surface-oriented). An a priori, in the form of a ground cover classification map, is provided to the TES algorithm in order to choose the appropriate MMD relationship according to the land cover type. The observations show that: (1) the urban-oriented database is representative of urban areas and allows artificial materials to be taken into account, contrary to former classical databases; using two databases instead of one prevents the overestimation of the LST over natural materials. (2) The 2MMD-4-band TES outperforms the two other versions of the TES used for comparison and validation when compared with ground LST measurements. (3) At a 4-m spatial resolution, in agreement with the ground LST measurements, the 2MMD-4-band TES outperforms the other TES algorithms over artificial materials. (4) At a 60-m spatial resolution within the TRISHNA framework, the impact of the spatial resolution manifests as a smoothing of the LST, thus decreasing the LST spatial variability, especially for mixed pixels; due to the spatial resolution, pure natural-surface pixels are more precisely processed by the 2MMD-4-band TES. (5) For SUHI retrieval, the 2MMD-4-band TES is better suited to retrieve the LST variability. In conclusion, the 2MMD-4-band TES is the best algorithm in this study while using only four bands instead of seven, which is a valuable result for multispectral sensors. The future TRISHNA sensor will provide observations allowing the monitoring of the LST and the SUHI effect.

Several directions for enhancement are identified: (1) Some studies about coupling TES and Split-Window (SW) algorithms have been conducted [86][87][88]. Such coupling can avoid the need for a priori emissivity knowledge in the SW algorithm and mitigate the poor performance of the TES algorithm for low-spectral-contrast or metallic materials. Better knowledge of the impact of the 3D structure can lead to better LST retrievals, especially for urban canyons [49], as can better knowledge of the adjacency effect [87]. Future works include the development of a hybridized TES algorithm to correct for both the spectral variability and the adjacency effect. (2) A ground cover classification map is not always available, so other land-cover-related products should be investigated, such as imperviousness. (3) Regarding the LSE retrieval to constrain the TES algorithm, multi-temporal acquisitions close in time or the link between visible indices and thermal bands could improve the accuracy of this parameter [89].
(4) As the mean size of urban objects is less than the spatial resolution of thermal satellite sensors, sharpening and unmixing procedures are necessary to study the LST and the SUHI at finer scales [51,64,84,[90][91][92]. (5) The use of other airborne campaigns, such as ESA-THERMOPOLIS 2009 over Athens, Greece or AI4GEO/CAMCATT 2021 over Toulouse, France, with ground measurements is of great interest to pursue the study of urban areas [93]. Furthermore, the present processing only concerns a Mediterranean city in the south of Europe with low humidity profiles. Applications to other cities with different climates, including tropical zones, would be of great interest, as TRISHNA will also be dedicated to tropical regions.

Conflicts of Interest: The authors declare no conflict of interest.
PepWise: Peptide Identification Algorithms for Tandem Mass Spectrometry Based on the Weight of Pair Amino Acid Fracture

Tandem mass spectrometry is the core of high-throughput techniques for protein identification. Abundant MS/MS data are generated and need to be interpreted. Although numerous peptide identification algorithms have been proposed, most well-known algorithms, such as X!Tandem, OMSSA and Sequest, rely mainly on predicted fragment m/z values to assign peptide sequences to spectra, whereas algorithms such as SQID and ProVerB also incorporate intensity information to assist peptide identification. Hence, different algorithms use different information from the same MS data sets. Here we describe a novel peptide identification algorithm based on the weight of pair amino acid fracture, named PepWise; comparison with Mascot and Sequest at 1% False Discovery Rate (FDR) verified its improved accuracy, robustness and compatibility.

Introduction

High-throughput proteomics involves the analysis of large numbers of peptide spectra. The most common method is the database search, in which each mass spectrum is scored against a corresponding database of all candidate peptides to detect effective matches [1,4,[6][7][8][9]. Generally, peptide identification proceeds as follows: first, the digested peptides are dissociated and ionized; second, the intact mass of each peptide is measured; finally, each peptide is mass-selected and fragmented to produce MS/MS spectra, which are processed by peptide identification algorithms [2,3,10,11]. Accordingly, deriving peptide sequences and designing reasonable identification algorithms is critical. Although identification algorithms for deriving peptide sequences have improved, a robust scoring function and proper consideration of MS/MS feature information remain the heart of identification algorithms [5,11,13].

The database search aims to evaluate the similarity between experimental and theoretical MS/MS spectra [12]. Among the many peptide identification algorithms built on various concepts, the m/z value has been the main information integrated into the similarity analysis [2,8,17], e.g., Mascot [7], Sequest [3] and X!Tandem [6]. Although these algorithms are commonly and widely used in protein identification, their reliance on a single type of feature information and the resulting number of peptide identifications reflect their incompleteness. Hence, MassWiz, Dispec [9], ProVerB [10] and SQID [2] integrated diversified feature information into their scoring models to improve confidence and generate better identifications [11][12][13][14][15].

To integrate more abundant and complete feature information and maximize universality, we first compiled matching statistics for various ion types based on a subset of the S. pneumoniae D39 data set with verified reliable MS/MS spectra; second, we defined and quantified the weight of pair amino acid fracture; finally, we integrated this feature information into the scoring function and propose a novel protein identification algorithm, PepWise, based on the weight of pair amino acid fracture. To verify the effectiveness and robustness of PepWise, we used multiple MS data sets to compare it with Mascot and Sequest at 1% FDR (False Discovery Rate); PepWise identified significantly and consistently more peptides than Mascot and Sequest.
MS/MS Datasets

The data sets of 18-protein standard mixtures can be downloaded from a public data set web site (http://regis-web.systemsbiology.net/PublicDatasets/); they cover four instrument platforms: Thermo Finnigan LCQ DECA, Thermo Finnigan LTQ-FT, Thermo Finnigan LTQ and Micromass/Waters QTOF Ultima. For convenience, the instrument names are abbreviated as LCQ, FT, LTQ and QTOF, respectively. The data sets of S. pneumoniae D39 and E. coli were obtained with an LTQ-Orbitrap and can be downloaded from http://bioinformatics.jnu.edu.cn/software/proverb/ and http://marcottelab.org/MSdata/Data03/, respectively. The subset of the S. pneumoniae D39 proteome that has been verified as valid by Mascot, Sequest and ProVerB served as the training data set for the feature information of the algorithm model.

Peak Selection

Noise peaks are inevitable in each MS/MS spectrum, so reliable peaks must be selected to improve the SNR (Signal-to-Noise Ratio). Peptide identification algorithms use different methods to select peaks: Sequest [3] selects the 200 most intense peaks from the mass spectrum, Mascot [7] selects one peak per 14 Da, X!Tandem [6] selects a maximum of 50 peaks from the whole fragment spectrum and OMSSA [4] selects the top 5 peaks in each 100 Da window. Here, to improve the SNR, we first remove isotopic peaks (peaks spaced by approximately 1 ± 0.25 Da from a neighboring peak), then divide the m/z range into ten parts and select the 20 most intense peaks in each part.

Generation of the Theoretical Spectra

The core of the database search approach is to evaluate the similarity between the experimental and theoretical spectra. Therefore, generating the theoretical spectrum is critical for the peptide identification algorithm: the theoretical fragment m/z values are computed from the residue masses of each candidate peptide.

False Discovery Rate (FDR)

All the highest-ranking candidate peptides are exported to calculate the FDR threshold [19], estimated with the standard target-decoy approach, i.e., the ratio of decoy matches to target matches above a given score threshold.

Scoring Function

The scoring function is the heart of MS/MS peptide identification algorithms. In this paper, we first define the weight of pair amino acid fracture, then construct the scoring function from three aspects: fragment ion matches, consecutive fragment ion matches and b/y fragment ion matches.

Definition of the Weight of Pair Amino Acid Fracture

The data set of S. pneumoniae D39 served as the training data set for the parameters of the algorithm model. We consider six types of fragment ions, including b and y ions; for each pair of amino acids, the weight Ti is higher when the fracture probability is greater.

Scoring Function for MS/MS Spectra

How to evaluate the similarity between experimental and theoretical spectra is crucial for peptide identification algorithms. For matching a spectrum against a candidate peptide Pep, the peptide-match score is calculated from a primary score function composed of several terms. S0 is the score for fragment ion matches: an experimental peak and a theoretical peak are matched when their m/z distance is less than or equal to the fragment error tolerance.
Wq is the weight of the q-th peak matched against the theoretical spectrum; here, peak m and peak q form a consecutive match. The constant 0.0279 is the random-match parameter reported in ref. [9], calculated as the sum of the random-peptide consecutive matching numbers divided by the sum of the random-peptide theoretical consecutive matching numbers. S3 is the score for b/y ion matches: in a typical CID experiment, the peptide bond fractures easily, generating C-terminal y ions and N-terminal b ions, and the content of y ions and b ions reflects the degree of similarity between the experimental and theoretical spectra.

Results

All peptide identification algorithms need to be compared after FDR calculation. In this paper, we compare PepWise with two widely used MS identification algorithms, Mascot and Sequest, on six different data sets from the ISB standard mixture of 18 proteins; the comparison results are shown in the following figures and table. The S. pneumoniae D39 data set was searched in parallel by Mascot, Sequest and PepWise; each of the three algorithms identified more than 3000 peptides at 1% FDR (Fig. 1), with a higher overlap between Mascot and PepWise. In addition, PepWise identified the highest number of spectra (Fig. 2) and also showed a higher overlap with the other algorithms. For the publicly available standard 18-protein data set and the E. coli data sets (E.Coli1, E.Coli2 and E.Coli3), tested at a PSM-level FDR ≤ 0.01, histograms of the numbers of identified peptides and spectra are shown in Fig. 3 and Fig. 4. PepWise identified more peptides than Mascot on almost all MS/MS data sets, demonstrating its robustness and high stability.

Conclusion

In this paper, we propose a new algorithm called PepWise based on the weight of pair amino acid fracture model; the detailed weight values can be obtained from the supporting information table. According to the analysis above, we validated its accuracy, robustness and compatibility. Although PepWise has not been tested on data generated by HCD, the model reflects physicochemical attributes of peptides, so we expect the algorithm to also support HCD.

Conflict of Interest

One supplementary table can be found in the supporting information, listing the weights of all pair amino acid fractures.
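To make the preprocessing and filtering steps concrete, the following sketch (our illustration of the procedure described above, not the authors' code; function and variable names are hypothetical) implements the top-20-per-decile peak selection and the standard target-decoy FDR estimate used for the 1% threshold:

```python
def select_peaks(peaks, n_parts=10, top_k=20):
    """Split the m/z range into n_parts equal windows and keep the top_k
    most intense peaks in each; `peaks` is a list of (mz, intensity) tuples
    with isotopic peaks already removed."""
    lo = min(mz for mz, _ in peaks)
    hi = max(mz for mz, _ in peaks)
    width = (hi - lo) / n_parts or 1.0
    windows = [[] for _ in range(n_parts)]
    for mz, intensity in peaks:
        idx = min(int((mz - lo) / width), n_parts - 1)
        windows[idx].append((mz, intensity))
    selected = []
    for window in windows:
        window.sort(key=lambda p: p[1], reverse=True)  # most intense first
        selected.extend(window[:top_k])
    return sorted(selected)

def target_decoy_fdr(n_target, n_decoy):
    """Standard target-decoy FDR estimate at a given score threshold."""
    return n_decoy / n_target if n_target else 0.0
```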
Comparison of GLUT1, GLUT2, GLUT4 and SGLT1 mRNA Expression in the Salivary Glands and Six Other Organs of Control, Streptozotocin-Induced and Goto-Kakizaki Diabetic Rats

Background/Aims: The expression and localization of several distinct glucose transporters (GLUT1, GLUT2, GLUT4, and SGLT1) were recently characterized in the parotid gland of normal rats by quantitative real-time PCR analysis, immunohistochemistry and Western blotting. The major aim of the present study was to compare the mRNA expression of these glucose transporters in both the parotid gland and submaxillary gland of control rats, streptozotocin-induced diabetic rats and hereditarily diabetic Goto-Kakizaki rats. Methods: Quantitative real-time PCR analysis was performed in the parotid and submaxillary salivary glands and, for purposes of comparison, also in the heart, kidney, liver, lung, muscle and pancreas from control animals and either streptozotocin-treated or Goto-Kakizaki rats. Results: The expression of GLUT4, but not GLUT1 or SGLT1, mRNA was decreased in the diabetic rats. The results also allow comparing both the mRNA expression levels of the four glucose transporters in salivary glands and six other organs, and the diabetes-induced changes in such expression in distinct locations. Conclusion: The mRNA expression of the insulin-dependent GLUT4 transporter was the sole one to be significantly decreased in the salivary glands of diabetic animals. The possible consequence of such a decrease in terms of the control of salivary glucose concentration requires further investigation.

Introduction

It is generally accepted that the salivary glucose concentration is higher in diabetic patients than in control subjects [1]. Nevertheless, several factors may affect the relationship between blood and salivary glucose concentrations, such as the retention of alimentary carbohydrates [2,3] and hexose utilization by oral bacteria [4], to mention only two examples. The increase of salivary glucose concentration in diabetic patients may contribute to the deterioration of periodontal health often prevailing in these patients [5,6]. In a recent study, we investigated the expression and localization of several distinct glucose transporters in acinar cells of rat parotid glands obtained from normal rats [7]. The major aim of the present investigation was to compare the mRNA expression of GLUT1, GLUT2, GLUT4 and SGLT1 in the parotid and submaxillary salivary glands, as well as in six other organs, in samples obtained from control rats, streptozotocin-induced diabetic rats and hereditarily diabetic Goto-Kakizaki rats.

Materials and Methods

Four Wistar rats, four streptozotocin-induced diabetic rats (STZ rats) and four Goto-Kakizaki rats (GK rats), all of comparable age (about 11 weeks), had free access to food and water up to the time of euthanasia, exsanguination and decapitation [7]. Diabetic STZ rats were obtained as described elsewhere [8]; GK rats were obtained from the Paris colony, initiated at the end of the 1980s [9] from the original Japanese colony [10] and maintained from that time at the University Paris-Diderot animal core [11]. Parotid and submaxillary glands, heart, kidney, liver, lung, soleus muscle and pancreas were removed and processed for quantitative real-time PCR analysis as previously described, according to the delta Ct method, with the gene expression level of each mRNA normalized to GAPDH (glyceraldehyde 3-phosphate dehydrogenase) mRNA [7].
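A minimal sketch of the delta Ct quantification used here (the Ct values in the example are illustrative; the relation 2^(-ΔCt) with GAPDH as the reference is the standard form of the method):

```python
def relative_expression(ct_target, ct_gapdh):
    """Delta Ct method: expression of the target mRNA relative to the
    GAPDH reference gene, computed as 2 ** -(Ct_target - Ct_GAPDH)."""
    return 2.0 ** -(ct_target - ct_gapdh)

# Example with illustrative cycle thresholds: a target detected 10 cycles
# after GAPDH corresponds to a relative expression of about 9.8e-4.
print(relative_expression(ct_target=31.0, ct_gapdh=21.0))
```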
Plasma glucose concentration in the nonfasted state was measured using the method recommended by Bergmeyer and Bernt [12]. All animal experiments were conducted in accordance with accepted standards of animal care as established by the French National Centre for Scientific Research Guidelines, the Brussels local ethics committee rules and the European Communities Council Directive (86/609/EEC). All results are presented as mean values (± SEM). The statistical significance of differences between mean values found in control versus either STZ or GK rats, as well as STZ versus GK rats, was assessed by the use of Student's t-test. As a rule, comparable information was obtained by ANOVA and Bonferroni's multiple comparison test.

Body weight and plasma glucose

The body weight was significantly lower (p < 0.005) in GK rats than in control animals, whilst the plasma glucose concentration was much higher (p < 0.005) in STZ rats than in control animals (Table 1). The plasma glucose concentration was also somewhat higher (p < 0.025) in GK rats than in control animals.

The expression of GLUT2 mRNA followed a liver > pancreas > kidney hierarchy, with much lower to negligible values in lung, muscle, heart and the two salivary glands (Table 3).

Table 1. Body weight and plasma glucose concentration.
Table 2. GLUT1 mRNA expression. a: p < 0.1; b: p < 0.05; c: p < 0.02; d: p < 0.01; e: p < 0.005; f: p < 0.001 versus control.
Table 3. GLUT2 mRNA expression. a: p < 0.1; b: p < 0.05; c: p < 0.02; d: p < 0.01; e: p < 0.005; f: p < 0.001 versus control.
Table 4. GLUT4 mRNA expression. a: p < 0.1; b: p < 0.05; c: p < 0.02; d: p < 0.01; e: p < 0.005; f: p < 0.001 versus control.

The data listed in Tables 2 to 5 also document that, in the salivary glands of control rats, the expression of the distinct transporter genes yielded the following hierarchy: GLUT1 > SGLT1 > GLUT4 > GLUT2, with a difference of about one order of magnitude or more between successive transporters in this hierarchy. A different situation prevailed in liver and pancreas, with the following hierarchy: GLUT2 > GLUT1 > SGLT1 > GLUT4. In kidney, the mRNA expression was highest for GLUT1 and lowest for GLUT4, with in-between values for GLUT2 and SGLT1. Last, in muscle, the mRNA expression of GLUT4 largely exceeded that of GLUT1, with negligible values for both GLUT2 and SGLT1.

STZ rats

No significant difference in GLUT1 data was found in either the parotid or submaxillary gland when comparing STZ rats to control animals (Table 2). In most other organs, i.e. in heart, kidney, lung and pancreas, the mean values for GLUT1 were lower in STZ rats than in control animals. Thus, in these organs of STZ rats, the recorded values averaged 45.3 ± 6.2% (n = 15; p < 0.001) of the mean corresponding values found in control animals (100.0 ± 9.9%; n = 16). In this respect, the two sole exceptions were observed in liver, in which the GLUT1 measurements were twice as high in STZ rats as in control animals (p < 0.001), and in muscle, in which GLUT1 measurements were at least one order of magnitude lower than in other organs. Two individual values for GLUT2, found respectively in the pancreas and parotid gland of the same STZ rat, exceeded the upper limit of the 95% individual confidence interval derived from the values recorded in the 3 other STZ rats and, hence, were discarded when computing the mean values listed in Table 3.
Even when the GLUT2 value found in the parotid gland of this STZ rat (257 × 10⁻⁵) was taken into account, no significant difference was reached between the control animals and STZ rats. Incidentally, when taking all individual values into account, a significant positive correlation (r = +0.95; p = 0.05) was found in the 4 STZ rats between the individual data collected for the expression of GLUT2 in the pancreas, on the one hand, and the parotid gland, on the other hand (n = 4 in both cases). No significant difference between control and STZ rats was observed in the case of GLUT2 in the parotid and submaxillary glands, kidney, heart and muscle. In the other organs, the salient findings consisted of an apparent decrease of GLUT2 in the pancreas of STZ rats (p < 0.07) and an apparent increase of GLUT2 in liver and lung (p < 0.05 or less).

In the STZ rats, the expression of GLUT4 mRNA was decreased in the parotid and submaxillary glands, heart, kidney and lung, in which organs it averaged 52.9 ± 4.4% (n = 20; p < 0.001) of the corresponding mean values recorded in the control animals (100.0 ± 5.0%; n = 20). It was not significantly different between STZ rats and control animals in liver or muscle, but appeared increased in the pancreas of the STZ rats (Table 4). Last, as far as SGLT1 is concerned, a significant decrease in mRNA expression was observed in the kidney and pancreas. Such was not the case in the other organs. For instance, in the salivary glands, the expression of SGLT1 mRNA averaged 1313 ± 170 × 10⁻⁴ (n = 8) in control animals, as compared (p > 0.3) to 1091 ± 186 × 10⁻⁴ (n = 8) in STZ rats.

GK rats

In one GK rat, the GLUT1 value in muscle (596 × 10⁻⁵) and the GLUT2 value in lung (682 × 10⁻⁵) largely exceeded the upper limit of the 95% individual confidence interval for the readings recorded in the other 3 GK rats and, hence, were discarded from the further analysis of the data. The other results recorded in the GK rats were often comparable to those found in STZ rats. As a matter of fact, only the following significant differences were found between these two groups of rats. First, in the case of GLUT1, the values found in kidney and pancreas were higher (p < 0.005) in GK rats than in STZ rats. Thus, in these two organs, as well as in heart, the mean values for GLUT1 yielded a control > GK > STZ hierarchy. Second, in the case of GLUT2, no significant difference was found between STZ and GK rats in any of the 8 locations under consideration. It should be stressed, however, that the mean values found in these diabetic rats were higher than those recorded in control rats in the parotid gland, liver and lung, where they averaged 314 ± 48% (n = 21; p < 0.005) of the mean corresponding control values (100 ± 10%; n = 12), whilst a mirror image prevailed in the pancreas, in which case the values found in the diabetic STZ and GK rats represented no more than 19 ± 2% (n = 7; p < 0.005) of the mean corresponding control values (100 ± 30%; n = 4). Third, in the case of GLUT4, once again no significant difference was found between STZ and GK rats in any of the 8 locations under consideration. The salient findings were the lower values found in the parotid and submaxillary glands, heart and kidney of diabetic rats, as compared to control animals, and the higher mean values found in the pancreas of diabetic rats, as distinct from control animals. Last, in the case of SGLT1, the sole statistically significant difference (p < 0.05) between STZ and GK rats concerned the pancreatic gland.
Even so, however, the overall value found in the pancreas of diabetic rats did not differ significantly (p > 0.4) from that recorded in the control animals. The sole salient finding consisted of the lower SGLT1 values recorded in the parotid gland and kidney of diabetic rats, as compared to control animals, the former values averaging 47.0 ± 2.6% (n = 16; p < 0.001) of the corresponding mean control values (100.0 ± 12.9%; n = 8).

Discussion

The present study affords three major new pieces of information. First, it extends the comparison in control rats between the levels of mRNA expression for the four glucose transporters under consideration in salivary glands and six other locations. In this respect, the present data are in close agreement with recent findings restricted to the comparison between the parotid gland and one selected positive and one selected negative control [7]. The results indicate that GLUT1 mRNA expression is one order of magnitude higher in salivary glands than in kidney or pancreas. Likewise, SGLT1 mRNA expression is also much higher in the salivary glands than in kidney or pancreas. In the salivary glands, GLUT2 mRNA expression is negligible, in sharp contrast to the situation found in liver or pancreas. Last, GLUT4 mRNA expression is higher in the submaxillary gland, heart and muscle than in the parotid gland.

Second, the present results allow comparing the levels of mRNA expression of distinct glucose transporters in the same tissue. For instance, the measurements recorded in salivary glands document a GLUT1 > SGLT1 > GLUT4 > GLUT2 hierarchy, with a close-to-one order of magnitude difference or more between two successive transporters in this hierarchy.

Third, the present study reveals that GLUT4 mRNA expression was significantly decreased in the salivary glands of diabetic rats, as compared to control animals. Such was also the case for this insulin-dependent transporter in heart and kidney, as well as in lung, at least in the STZ rats.

Another diabetes-induced change in mRNA expression in the salivary glands deserves to be underlined. In the parotid gland, but not in the submaxillary gland, the mean mRNA level of SGLT1 was lower (p < 0.05) in the diabetic rats (699 ± 51 × 10⁻⁴; n = 8) than in the control rats (1327 ± 361 × 10⁻⁴; n = 4), at variance with a recent observation [13]. It should be stressed, however, that, in the latter observation, the sex of the diabetic rats (male rats), the diabetogenic agent (alloxan) and the reference gene (β-actin) differed from those used in the present study. Moreover, in the recent investigation reported by Sabino-Silva et al. [13], the increase of SGLT1 mRNA in the parotid and submandibular glands observed in the diabetic rats contrasted with a lower corresponding protein content of SGLT1 in the same salivary glands of the same diabetic animals. Conversely, the mean GLUT2 mRNA was higher in the parotid gland of diabetic rats (41 ± 12 × 10⁻⁵; n = 7) than in the parotid gland of control animals (12 ± 1 × 10⁻⁵; n = 4), but such a difference did not achieve statistical significance (p > 0.5). Such remained the case (p > 0.2) even after inclusion of an abnormally high value (217 × 10⁻⁵) recorded in one STZ rat. As a rule, the results recorded in the present study in organs other than the salivary glands were in fair agreement with current knowledge, whenever available.
For instance, the high level of GLUT2 mRNA expression in the liver and (presumably endocrine) pancreas of control rats and the opposite effects of diabetes, which increases GLUT2 mRNA expression in liver and decreases it in the (presumably endocrine) pancreas, are indeed consistent with current knowledge [14]. Although the diabetes-induced decrease of GLUT4 mRNA expression in salivary glands should not be ignored, the present results suggest that an alteration of glucose transporter expression in salivary glands may only represent a limited determinant of the increase in salivary glucose concentration usually prevailing in diabetic patients [1]. Nevertheless, further functional investigations are obviously desirable to assess whether changes in selected aspects of glucose transport may coincide with the present findings.
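The outlier screening applied in the Results above (discarding a value that exceeds the 95% individual confidence interval derived from the remaining replicates) can be sketched as follows; the paper states the criterion but not its exact formula, so the prediction-interval form below is our assumption:

```python
import math

# Two-sided 97.5% Student t quantiles for small degrees of freedom
# (n = 3 remaining replicates -> df = 2).
T_975 = {1: 12.706, 2: 4.303, 3: 3.182}

def outside_95_individual_ci(candidate, others):
    """True if `candidate` falls outside the 95% individual (prediction)
    interval computed from the remaining replicate values `others`."""
    n = len(others)
    mean = sum(others) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in others) / (n - 1))
    half_width = T_975[n - 1] * s * math.sqrt(1.0 + 1.0 / n)
    return abs(candidate - mean) > half_width
```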
Policies and regulations for promoting manure management for sustainable livestock production in China: a review Livestock numbers in China have more than tripled between 1980 and 2017. The increase in the number of intensive livestock production systems has created the challenges of decoupled crop and livestock systems, low utilization of manures in croplands, and subsequent environmental pollution. Correspondingly, the government has enacted a series of policies and regulations to increase the sustainability of livestock production. This paper reviews the objectives of these policies and regulations and their impacts on manure management. Since 2017 there have been two policy guides to speed up the appropriate use of manures, three action plans for increasing manure recycling, and one technical guide to calculate nutrient balances. Requirements of manure pollution control and recycling for improved environmental performance of livestock production systems were included in three revised environmental laws. The most recent survey data indicate that the utilization of livestock manures was 70% in 2017, including that used as fertilizer and/or for the production of energy. The targets for manure utilization are 75% in 2020 and 90% in 2035. To achieve these targets and promote 'green livestock production', additional changes are needed, including the use of third-party enterprises that facilitate manure exchange between farms and a more integrated manure nutrient management approach. Introduction Livestock products contribute 17% to the total food energy consumption and 33% to the total protein consumption by humans globally, but there are large differences between developed and developing countries [1]. In developed countries the per capita rate of consumption of livestock products is plateauing, but in developing countries the consumption of livestock products (and hence livestock production) is increasing [1][2][3]. The livestock industry in China has experienced rapid growth and a vast transition driven by economic incentives over the last four decades [3,4]. The number of livestock (livestock units) has increased threefold between 1980 and 2017. Livestock production has increased even more; total meat, egg and milk production increased 6.0, 10.7 and 12.0 times between 1980 and 2017 [5,6]. The spatial distribution of manure nitrogen (N) and phosphorus (P) excretion by livestock (calculated from the total number of animals per category per province [7] and animal category-specific N and P production coefficients [8]) indicates that 'hotspots' of manure production are concentrated on the North China Plain and in central, south and east China. Increased specialization of livestock systems has resulted in the decoupling of crop and livestock production systems [11,[18][19][20]] and in a reduction in manure use efficiency. Therefore, a greater integration of crop and livestock production systems is one of the key strategies for increasing the resource use efficiency of livestock manures [21]. To address the environmental challenges of livestock production, many countries, including the United States of America (USA) and EU countries, have introduced agri-environment policies and action plans to promote efficient and low-emission livestock production systems since the 1990s [22][23][24]. This paper reviews the objectives and targets of agri-environment policies and regulations in China and assesses their impact on manure management.
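As context for the hotspot mapping mentioned above: the underlying arithmetic is a sum, over animal categories, of head counts multiplied by category-specific excretion coefficients. A minimal sketch follows; the coefficient values are illustrative placeholders, not the ones tabulated in [7,8].

```python
# Estimate provincial manure N and P excretion as
#   sum over animal categories of (head count) x (per-head annual excretion).
# The coefficients below are illustrative placeholders, not the values in [7,8].
EXCRETION_KG_PER_HEAD_YR = {
    #                 (N,    P)
    "finishing_pigs": (7.0,  1.2),
    "dairy_cattle":   (70.0, 10.0),
    "laying_hens":    (0.5,  0.1),
}

def provincial_excretion(inventory: dict) -> tuple:
    """Return total (N, P) excretion in tonnes per year for one province."""
    n = sum(head * EXCRETION_KG_PER_HEAD_YR[a][0] for a, head in inventory.items())
    p = sum(head * EXCRETION_KG_PER_HEAD_YR[a][1] for a, head in inventory.items())
    return n / 1000.0, p / 1000.0  # kg -> tonnes

# Example with a hypothetical provincial inventory (head counts).
n_t, p_t = provincial_excretion(
    {"finishing_pigs": 2_000_000, "dairy_cattle": 150_000, "laying_hens": 8_000_000}
)
print(f"N: {n_t:,.0f} t/yr, P: {p_t:,.0f} t/yr")
```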
Based on the review, we propose a number of suggestions to further improve manure utilization. These suggestions provide a reference for the sustainable development of livestock production in developing countries including China. Challenges of livestock manure utilization in China In recent years the Ministry of Agriculture and Rural Affairs (MARA), Ministry of Finance (MF), Ministry of Science and Technology (MST) and other ministries have actively promoted the utilization of livestock manure [25][26][27]. However, over 70% of farms including crop production and/or livestock production in China were specialized crop production systems or specialized livestock production systems in 2017 [28]. This specialization has created barriers to effective manure utilization in croplands, as indicated below. Small crop farms separated from large livestock farms Nutrients from organic manures supplied over 90% of the total nutrients applied to croplands in China in the 1950s [29]. The increase in demand for livestock products caused by the increased population and consumption per capita has led to a rapid increase in the proportion of large-scale livestock farms without cropland [4]. This has contributed to the separation of crop and livestock farms both in space and in scale [30]. The proportion of pigs reared on farms with > 500 finishing pigs per farm increased from 8% in 1998 to 47% in 2017. In contrast, crop production is still dominated by small-scale farmers who often work part-time in the cities [31]. This situation is likely to continue for some time [31]. The area cultivated on farms with > 50 mu (3.3 ha) accounted for about 18% of the total cultivated area in 2016 [31]. According to the Technical Guideline for the Calculation of the Land's Capacity to Receive Livestock Manure, an area of 50.0 mu (3.3 ha) can receive the manure N produced by 500 finishing pigs [32]. The gap between the proportion of livestock farms with > 500 finishing pigs and the proportion of crop farms with over 50 mu (3.3 ha) results in a low utilization of livestock manure in croplands. Currently, only 30% of total excreted N and 48% of excreted P are returned to the land via manure application [11]. Variable and low nutrient contents in manures Manure is an excellent source of the major plant nutrients, including N, P and K [33], and it provides many of the micronutrients that plants require [34,35], together with organic matter for soil amendment, all of which contribute to boosting crop yields and nitrogen use efficiency (NUE) [36][37][38][39][40][41]. However, manure nutrient content and availability are often highly variable and unknown. Labels for composted manure may only provide a sum of all nutrients [19], and this is not sufficiently informative for precision fertilization. Furthermore, the cost of applying manures is high, about twice that of mineral fertilizers [42], due to the large volume, high water and low nutrient contents, and especially the high transport costs, compared with mineral fertilizers. There is also a requirement for liquid manures to be treated before they can be used on croplands [43], and this increases the costs and may also reduce the nutrient value (especially N) if the manures are composted. Therefore, the market competitiveness of manures is much lower than that of mineral fertilizers. Risks of heavy metals and antibiotics in manures Another issue is the possible contamination of manures with trace metal(loid)s and antibiotics [44,45].
The content of heavy metals (notably copper and zinc) of some pig and chicken manures exceeds the national requirements of the Control Standard for Pollutants in Sludge from Agricultural Use in some regions due to the use of feed additives [43,46]. Frequent applications of manure will result in the accumulation of heavy metals and antibiotics in soils and crops [44], with potential adverse effects on human health. Lack of enterprises and services for manure redistribution In the Netherlands, private companies overseen by the government redistribute manures between livestock farms and crop farms. Livestock farmers have to pay 10 to 25 Euro per tonne of manure to these trading companies, and crop farmers receive 1-14 Euro per tonne, depending on manure quality and the distance between farms [47]. Each truck load of manure (about 30 tonnes) is weighed and the N and P contents are determined. This type of trading/management scheme is not yet available in China. There are many challenges in establishing a professional manure redistribution system, such as (i) who should pay for the prolonged manure storage and application infrastructure, (ii) who should pay for the manure transportation, (iii) who should receive government subsidies, and (iv) who should monitor and control the flow of manure and money. The lack of appropriate manure application equipment and manure storage facilities and the high costs of manure transportation [48,49] are barriers to the use of manures by crop farmers. All of these factors have contributed to a fractured manure management chain and to low manure nutrient utilization in Chinese croplands. Experiences in the United States and the European Union Many developed countries, including the USA and some EU countries, experienced a period of serious pollution during the rapid development of intensive livestock farming in the second half of the twentieth century. In response, governments and livestock industries implemented a series of policies, action plans and guidance to reduce the environmental burden from livestock production. Livestock manures are now considered valuable sources of nutrients [33] and organic matter [38], which have to be stored in contained manure stores and have to be applied to croplands during the growing season according to crop nutrient demand. Major improvements have also occurred through increased animal productivity, animal breeding, and improved animal feeding and housing. However, the spatial separation and concentration of specialized livestock farms and crop production systems remains a bottleneck to improving manure management in these countries (see other papers in this special issue). In the USA, farmers or their advisers have to make comprehensive nutrient management plans for addressing potential water quality and public health impacts associated with animal feeding operations [50,51]. When the nutrient supply via livestock manure exceeds the nutrient demand of crops, farms have to choose other methods to treat and use the manure. However, only small proportions of manures are processed in practice, for example by composting or anaerobic digestion [52]. Livestock farmers in the EU also have to make a nutrient management plan and have to comply with manure application limits and additional regulations related to manure storage and manure application timing and method, according to action plans to comply with the Nitrates Directive [24].
A uniform manure application limit of 170 kg N ha⁻¹ yr⁻¹ has been set across the EU, although a derogation is possible for N-demanding crops with a long growing season (permanent crops) [24]. Application of fertilizers and manures is restricted to the growing season only. Further, farmers are encouraged to use low-protein animal feeds and to reduce ammonia emissions through low-emission animal housing and manure storage systems, and through low-emission manure application (injection, trailing hose or rapid incorporation into the soil) [53]. Government policies, laws and regulations in China Until 2015 the main objective of government policies, laws and regulations related to manure management in China was simply pollution control. Since 2015 the objectives of the policies, laws and regulations have expanded to include pollution control and enhanced resource use efficiency. The environmental problems caused by the rapid development of livestock production have emerged rapidly (Table 1). NH3 emissions from livestock production increased from 2.9 Tg in 1980 to 4.8 Tg in 1997 [17]. Non-CO2 GHG emissions from livestock production increased from 149 Tg in 1980 to 212 Tg in 1997 [6,54]. The nutrients from organic manures as a percentage of the total amounts of nutrients from fertilizers applied to croplands decreased from 47.1% in 1980 [55] to 23.6% in 2008 [56]. In consequence, N use efficiency in the food system has decreased steadily [4,57]. As a result, increasing quantities of livestock manures, sewage sludge and other organic resources have been neglected and not effectively re-utilized in crop production, but have instead been lost to soils, waters and air [9][10][11][12][13][14][15][16]. (In Table 1, the percentage of manure comprehensive utilization refers to the ratio of manure utilized as a resource (e.g., compost, biogas and land application) to the amount of fresh manure produced; the tabulated values are according to Zhang, 2001 [55], Niu and Ju, 2017 [56], EPA, 2002 [72] and MARA, 2019 [73].) In response, the government has gradually implemented policies and regulations related to manure management since 2000 (Table 2a) [59], and the adoption of the sustainable development goals (SDGs) by all United Nations member states in 2015 has boosted thinking about sustainable livestock production in China [60]. Gradually, manure pollution control measures have been replaced with policy and advice to promote manure resource recycling and use (Table 2b). The government has also provided a technical guide to calculate the maximum number of animals per farm based on the land area per farm and the balance of manure nutrient supply and crop nutrient demand. Further, nitrogen vulnerable zones have been proposed [30]. In 2017 a policy guidance document was issued to accelerate the utilization of livestock manures in croplands. This involved the establishment of a livestock manure use system and the development of mechanisms to improve the integration of crop and livestock systems by 2020 [27]. It also set the target for manure utilization to be at least 75% by 2020. Hence, a minimum of 75% of produced livestock excreta needs to be applied to cropland by 2020. Further, at least 95% of large-scale livestock farms must have infrastructure for manure treatment [27]. Another guide to promote manure application to cropland and to strengthen pollution control was issued in 2019 [25].
This guide sets further targets for the comprehensive utilization of livestock manures. The Environmental Protection Law, revised in 2014, introduced the regulatory regime for discharge permits [62]. Enterprises and institutions discharging pollutants in excess of the prescribed national or local discharge standards are required to pay levies that are used for the treatment of polluted surface waters. The Law on the Prevention and Control of Atmospheric Pollution was revised in 2018. The revised version stipulates the development of circular agriculture and provides support for manure treatment [63]. This law requires all livestock farms and communities to collect, store, transport and use sewage sludges, manures and livestock carcasses in a timely manner, and to use safe treatment practices to prevent the emissions of odor, nitrogen oxides (NOx) and GHGs. The Law on the Prevention and Control of Water Pollution (2018) supports the construction of facilities for the safe treatment of manures and wastewaters on livestock farms and in communities [64]. It stipulates that livestock farms and communities must ensure the normal operation of their facilities for the safe treatment of manures and ensure that wastewater discharge meets the required standards (total N, 40 mg L⁻¹; NH3-N, 25 mg L⁻¹) [65] or meets the quality standard for cropland irrigation water in China [43]. If livestock wastewater is applied through irrigation channels, the water quality at the nearest irrigation water intake downstream must be guaranteed to meet the water quality standard for irrigation. Local governments are responsible for organizing the collection, centralized treatment and subsequent use of livestock manures in counties with small scattered livestock farms. Additional actions for improving manure management In addition to the governmental policies, laws and regulations discussed in section 3.2, a series of additional actions and financial incentives have been implemented to improve the effectiveness of the government policies, laws and regulations. The Action Plan for Water Pollution Control (2015) aims at closing or relocating livestock farms in designated (prohibited) areas (Table 3). This plan has led to the transfer of numerous livestock farms from areas close to rivers and lakes to areas with dryland, and to the relocation of pigs from the south to the north of China to protect watercourses [66]. An action plan for the recycling (land application) of livestock manures (2017-2020) introduced seven technological options: (1) full manure collection and land application, (2) specialized biogas plants, (3) composting of solid manures, (4) treatment of manures with high-rise fermentation beds, (5) litter recycling, (6) wastewater recycling, and (7) treatment of manures to meet discharge standards [26] (Figure 1). Livestock farms must choose one or more of these seven options [67,68]. In addition, subsidies have been provided to poultry farms to reduce the spillage of drinking water, and to pig farms to reduce the amounts of flushing water used [58]. Subsidies have also been provided for odor control and for closed storage and treatment of liquid manures from 2020 [69]. Another action plan has stimulated the replacement of mineral fertilizers by organic fertilizers in the production of fruit, vegetables and tea in 100 counties through subsidies worth one billion RMB [70]. The goal is to reduce mineral fertilizer consumption by 20 to 50% in fruit, vegetable and tea production by 2020.
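A small illustration of the discharge-standard check implied by option (7) and the water pollution law cited above, using the two limits quoted in the text; the function name and structure are my own, not from any official toolkit.

```python
# Effluent limits for treated livestock wastewater as quoted in the text:
# total N <= 40 mg/L and NH3-N <= 25 mg/L [65].
LIMITS_MG_L = {"total_N": 40.0, "NH3_N": 25.0}

def effluent_compliant(sample_mg_l: dict) -> bool:
    """True if a wastewater sample respects every quoted discharge limit."""
    return all(sample_mg_l[param] <= limit for param, limit in LIMITS_MG_L.items())

print(effluent_compliant({"total_N": 32.5, "NH3_N": 18.0}))  # True: within limits
print(effluent_compliant({"total_N": 55.0, "NH3_N": 18.0}))  # False: total N too high
```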
A Technical Guideline for the Calculation of the Land's Capacity to Receive Livestock Manure was released in 2018 and provides a standard calculation method for recycling manures [32]. It is based on the nutrient input-output balance. The nutrient requirements of crops are determined according to crop type and yield, soil fertility level, and the proportion of nutrients supplied by manures. The nutrient supply from livestock manures is determined according to the number of livestock, livestock-specific excretion factors, and the manure collection and treatment. Main effects Total manure recycling was < 60% in 2015 [71] and had reached 70% by the end of 2017. Manure treatment facilities for large-scale farms were present on < 20% of the total number of large farms in 2000 [72] and on 80% in 2019 [73] (Table 1). Water use and spillage have been greatly reduced on pig and poultry farms. As a consequence, the total volume of manure has also decreased dramatically. This has facilitated the transport of manure to crop farms [58]. The amount of mineral fertilizer used in tea, vegetable and fruit production in the aforementioned 100 counties has dropped by 18%, organic fertilizer use has increased by 50%, and resource use efficiency has increased [70]. Organic fertilizers have also contributed to increases in soil organic matter content [36][37][38][39][40][41] and to an improved quality of agricultural products (higher contents of micronutrients) [36][37][38][39][40][41]. However, farmers often perceive that the subsidies are too small, and the pilot areas that have benefited from the subsidies are often limited in scale and duration. There is still a lack of suitable manure application machinery because of high purchase costs. Critical appraisal Although a series of policies for enhanced manure recycling have been implemented, most of these have targeted large intensive farms only. While this is understandable with an increasing proportion of livestock being reared in these intensive systems, there are still numerous less intensive livestock farms in China [42]. Another complicating factor is the lack of knowledge about typical manure composition and best manure management practices among farmers, advisors and government officers. This also leads to imprecise definitions that are not operational in practice. For example, in China manure recycling has been defined as the fraction of manure produced by livestock that is applied to croplands, and a target of 90% manure recycling has been set for 2035. In practice, manures will be stored and treated before application, and significant fractions of the nitrogen, organic carbon and water will be lost via volatilization, leaching, decomposition and evaporation. The question is then how to verify that 75% of all livestock excreta will be applied to croplands by 2020, 80% by 2025 and 90% by 2035. It is unclear whether 'manure recycling' has been defined in terms of manure volume, mass, or nitrogen or phosphorus content. A rethink is also needed regarding the government incentives for improving manure management. Subsidies are now provided to large livestock farms to build manure stores and manure treatment facilities that produce commercial organic fertilizers. However, there are no specific measures to ensure the continued operation of treatment facilities once built, and there are too few incentives to promote the effective utilization of the manure products generated.
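The balance method of the Technical Guideline described at the start of this section reduces to a short calculation: the land's N-receiving capacity (crop demand scaled by the share to be met from manure) divided by the net N each animal delivers to the field. The sketch below follows that logic with placeholder coefficients; it is not the official calculation from [32], whose crop- and livestock-specific values differ.

```python
def max_animals(area_ha: float,
                crop_n_demand: float,          # kg N/ha/yr required by the crop
                manure_share: float,           # fraction of demand met from manure
                n_excreted_per_head: float,    # kg N/head/yr excreted
                n_recovery: float) -> int:     # fraction of excreted N reaching the field
    """Maximum herd size a land area can carry under an N input-output balance.

    All coefficients are illustrative placeholders; the guideline [32]
    tabulates its own crop- and livestock-specific values.
    """
    land_capacity_n = area_ha * crop_n_demand * manure_share  # kg N/yr the land can absorb
    n_delivered_per_head = n_excreted_per_head * n_recovery   # kg N/yr per animal, post-losses
    return int(land_capacity_n // n_delivered_per_head)

# Example: 50 ha of a cereal crop, half its N demand met by manure,
# finishing pigs excreting ~7 kg N/head/yr with 60% of N surviving storage.
print(max_animals(50.0, 220.0, 0.5, 7.0, 0.6))  # -> 1309
```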
The main barriers against crop farmers using manures are the high costs of transportation, the lack of manure application equipment, and the lack of qualified labor (trained/educated farmers able to make the best use of manure nutrients) for manure application [19]. As a result, treatment facilities on livestock farms are often not in operation due to technical failures, high operational costs, lack of product marketing, and lack of control of product quality. And although government reports indicate that manure recycling increased from < 60% in 2015 to 70% by the end of 2017 [71], it remains unclear how manure recycling has been defined. Conclusions and outlook The livestock sector has developed very rapidly in China during the last two decades in response to the increasing demand for animal-sourced food. This development has created a manure nutrient surplus at the farm and regional levels, especially as a result of concentrations of large landless livestock farms which import feed from elsewhere. The government has strongly facilitated the modernization and intensification of the livestock sector and has also implemented a series of policies and incentives for the appropriate handling and treatment of livestock manures. However, manure management remains a major challenge. The main bottlenecks to effective manure utilization are (i) the spatial separation of large intensive landless livestock farms from the many small-holder crop farms, (ii) the lack of third-party organizations, governmental institutions and appropriate technology for transporting manure from livestock farms to crop farms, (iii) the lack of small-scale manure spreading machinery and of a manure nutrient recommendation system and associated training to guide farmers and advisers on manure nutrient application to crops, and (iv) the lack of governmental incentives for the end-users to adopt the use of animal manure products. Improved integration of crop and livestock production systems is fundamental to increasing the effective use of animal manure resources. This integration is also an important means of reducing agricultural non-point source pollution and delivering national agricultural 'green' development goals [74][75][76][77][78]. Improvement of manure recycling and effective utilization in crop production has the potential to greatly reduce the use of synthetic fertilizers, to improve soil quality and crop nutrition and maintain high yields (through the supply of organic matter and secondary nutrients and micronutrients), and to decrease the eutrophication of surface waters and ammonia and GHG emissions to the atmosphere. This, however, requires great improvements in manure management practices, and hence in government policy measures. A rethink of current policies is needed. It is imperative to develop policies and support programs to enhance the sustainability of livestock production and manure management practices.
This will require: (1) the establishment of comprehensive region-specific and farm type-specific nutrient management plans, based on accurate accounts of nutrient input-output balances; (2) a partial redirection of financial support from manure producers and manure treatment industries to manure users, and from investments in treatment facilities to end-users or third-party contractors (support for operational costs); (3) an institutional framework for the effective control and transportation of manures from producers to users, involving intermediate third-party enterprises, with governmental coordination using different specialized service models; and (4) the strengthening of third-party service organizations (contractors) to promote manure application to croplands in an economical and environmentally sound manner. The diversity among livestock farms is very large, from smallholders with a few pigs to large farms with thousands of pigs, and very large industrial enterprises with millions of pigs. These different enterprises have their own challenges in manure management and will need to adopt different specialized service models [42] (Table 4). Government support for the livestock sector has to be embedded in policies aimed at improving manure management based on nutrient accounting through the entire manure management chain [19,48], to ensure that all manure nutrients from intensive livestock production are properly collected, stored, and applied to arable land at appropriate rates and times and with appropriate methods. A recording system is needed, supported by government and research institutes, to assist accurate manure nutrient accounting. Improving the performance of livestock production and manure management is an important target of the Agricultural Green Development program. With sustained attention and strong support for manure resource recycling from policy, research and farming communities, vast improvements can be made in the sustainability of livestock production and manure management, with benefits for cropping farms and soil health, and reduced negative impacts on water and air quality. Table 4. Linking crop and livestock production, as a function of livestock farm size [42]. Farm sizes distinguished: households and small farms; medium and large farms; huge livestock companies. Organization modes for linking crop and livestock production include farmer cooperatives and specialized services (e.g., contractors, manure treatment centers).
2021-05-11T00:06:52.805Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "2322aa41691af9a09531ad04d47d08d545856b25", "oa_license": "CCBY", "oa_url": "https://doi.org/10.15302/j-fase-2020369", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "68b70ff14a7b04004bcb6731232896c5f810ca34", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
219557960
pes2o/s2orc
v3-fos-license
Global existence of classical solutions for two-dimensional isentropic compressible Navier–Stokes equations with small initial mass In this paper, we consider the initial-boundary value problem of the two-dimensional isentropic compressible Navier–Stokes equations with vacuum on the square domain. Based on time-weighted uniform estimates, we prove that the classical solution exists globally in time if the initial mass $\|\rho_0\|_{L^1}$ of the fluid is small. Here, we do not require the initial energy or the upper bound of the initial density to be small. Introduction In this paper, we consider the following two-dimensional isentropic compressible Navier–Stokes equations in the Eulerian coordinates:
$$
\begin{cases}
\rho_t + \operatorname{div}(\rho u) = 0,\\
(\rho u)_t + \operatorname{div}(\rho u \otimes u) - \mu \Delta u - (\mu + \lambda)\nabla \operatorname{div} u + \nabla P(\rho) = 0,
\end{cases}
\tag{1}
$$
where t ≥ 0 is the time, x ∈ Ω = [0, 1] × [0, 1] is the spatial coordinate, and ρ = ρ(x, t), u = (u₁, u₂)(x, t) and P(ρ) denote the fluid density, velocity and pressure, respectively. For the multi-dimensional case, Nash [1] and Itaya [2] established the local existence and uniqueness of classical solutions in the absence of vacuum in 1962 and 1977, respectively. In 1995, Hoff [3,4] proved the global existence of weak solutions when the initial density is close to a constant in the L² and L^∞ norms, and the initial velocity is small in the L² norm and bounded in the L^{2n} norm (n is the space dimension). In 1998, Lions [5] obtained the global existence of weak solutions when the adiabatic exponent γ is suitably large; the main restriction on the initial data is that the initial total energy is finite. Similar results were given by Feireisl [6]. A few years later, Hoff [7][8][9] obtained a new type of global weak solutions with small energy, which have more regularity information than the works in [5,6]. On the other hand, when vacuum is allowed, Cho and Kim [10,11] proved the existence of unique local strong solutions in bounded and unbounded domains in 2003. In 2012, Huang, Li and Xin [12] established the global classical solutions with small energy but possibly large oscillations. In the same year, Duan [13] generalized the result in [7] and proved the global existence of classical solutions to the half-space problem with the boundary condition proposed by Navier, provided the initial energy is small. In 2016, Yu and Zhang [14] studied the nonhomogeneous equations with density-dependent viscosity in a smooth bounded domain, where vacuum is allowed. The global well-posedness of strong solutions is established for the case when the bound of the density is suitably small, or when the total mass is small with large oscillations. Later, in 2017, under the same condition as in [12], Yu and Zhao [15] studied the global existence in a cuboid domain, with some new ideas applied to establishing a time-uniform upper bound for the density. Recently, Si, Zhang and Zhao [16] established the global existence of classical solutions with a small initial density but possibly large energy in the cases of ρ₀ ∈ L^γ, γ ∈ (1, 6) and ρ₀ ∈ L¹, γ > 1, respectively, which extends the results in [12]. Compared with the three-dimensional case, there are few results in the two-dimensional space.
The pioneering work can be traced back to [17] in 1995, when Vaigant and Kazhikhov first proposed the initial-boundary value problem with the special viscosity coefficients, that is, shear viscosity μ being a positive constant and bulk viscosity λ(ρ) = ρ^β, β > 3. (2) They proved the existence of a global strong solution with no restrictions on the size of the initial data. In 2012, Luo [18] studied the Cauchy problem and proved local existence and uniqueness of classical solutions with initial density containing vacuum when the viscosity coefficients μ and λ are constant. For the case of a viscosity depending on the density, we refer to a later work by Li and Liang [19]. In 2013, under the condition (2), Jiu, Wang and Xin [20,21] proved the global classical solutions on the torus and in the whole space, respectively, where the initial data may contain vacuum in an open set. In the same year, Ducomet and Necasova [22] studied the initial-boundary value problem with a vorticity-type boundary condition and proved that the results of [17] hold in any smooth bounded domain. In 2014, Zhang, Deng and Zhao [23] established the global classical solutions to the Cauchy problem with smooth initial data under the assumption that the viscosity coefficient μ is large enough. In 2016, Huang and Li [24] relaxed the power index β in (2) to β > 4/3 and studied the large-time behavior of the solutions; see also a recent work [25] for the Cauchy problem. In the same year, Fang and Guo [26] established the global existence and large-time asymptotic behavior of the strong solution to the Cauchy problem in the case of β ∈ [0, 1], provided that the initial data are of small total energy. In 2018, Ding, Huang and Liu [27] obtained the global classical solutions to the Cauchy problem with β ∈ [0, 1] under the condition of small initial density, which extends the earlier work [26] with small initial energy. From the well-known results mentioned in the above paragraph, we can see that in the two-dimensional space, the existing work mainly discussed the global existence of system (1) under the condition of density-dependent viscosity with β > 4/3 and general initial data, or with β ∈ [0, 1] and small initial energy or small initial density. However, whether the unique local classical solution can exist globally for constant viscosity with small initial mass on a bounded domain is still unknown at present. Inspired by the analysis of [12] and [15], in this paper we consider the Dirichlet problem of (1) with the initial-boundary conditions (3)-(4). We hope to establish the global existence of strong solutions for (1), (3)-(4) with constant viscosity on the square domain. Before stating the main results, we explain the notations and conventions used throughout this paper. Notations: • The standard Lebesgue and Sobolev spaces are defined in the usual way. • ḟ = f_t + u · ∇f denotes the material derivative of f. • The symbol ∇^l with an integer l ≥ 0 stands for the usual spatial derivatives of order l. • Positive generic constants are denoted by C, which may change in different places. Now, our main results in this paper can be stated as follows. Theorem 1.1 For given numbers ρ̄ > 0, M > 0 and q > 2, suppose that the initial data satisfy (5) and the following compatibility conditions (6) for some g ∈ L².
Then, there exists a positive constant ε₀, depending on ρ̄, M, μ, λ and some other known constants but independent of T, such that if ‖ρ₀‖_{L¹} ≤ ε₀, the initial-boundary value problem (1), (3)-(4) admits a unique global classical solution (ρ, u) in Ω × (0, +∞) satisfying, for any 0 < T < +∞, the regularity described by the solution space (8). Remark 1.1 Cho and Kim [10,11] proved the existence and uniqueness of local strong solutions to (1), (3)-(4) with initial vacuum in the three-dimensional space, where Ω can be a bounded domain or the whole space. If Ω is a bounded domain in R² and the initial data (ρ₀, u₀) are smooth enough, and u satisfies the boundary condition (4), it is not difficult to verify that the proofs in [10,11] are still valid for the local existence of classical solutions in the two-dimensional space. Remark 1.2 In Theorem 1.1, we give the global existence of the classical solution to the initial-boundary value problem (1), (3)-(4) provided the initial mass ‖ρ₀‖_{L¹} is small. In fact, if we take the same vorticity-type boundary condition (Navier-slip boundary condition) as in [15] instead of the Dirichlet boundary condition, by applying the same method as in the three-dimensional space, similar results to Theorem 1.1 can also be proved. Thus, our results extend the one due to Yu and Zhao [15], where the global well-posedness of classical solutions with small initial energy was proved. Moreover, under the condition (7), we can prove the global existence of the classical solution to the Cauchy problem in the three-dimensional space by using the effective viscous flux method, which extends the results of [12] for small initial energy and [16] for small initial density. We now make some comments on the global existence of classical solutions to the isentropic compressible Navier-Stokes equations. Compared with the three-dimensional case, the two-dimensional problem causes some essential difficulties. Similar to the procedure of [12,15,16], a key ingredient in our proof is to obtain a uniform a priori upper bound for the density function. However, due to the invalidity of the Sobolev embedding inequality ‖u‖_{L⁶} ≤ C‖∇u‖_{L²}, and since there is no boundary information for the effective viscous flux F = (2μ + λ) div u − P in the two-dimensional bounded domain, time-weighted estimates are needed to ensure better integrability of the velocity, which differs considerably from the three-dimensional Cauchy problem. In this paper, we use the Poincaré inequality and the decomposition of the velocity u = v + w to overcome this difficulty, where v solves the elliptic system (9). Then, from the momentum equation (1)₂ and (9), we can see that w satisfies (10). Hence, ‖∇u‖_{L^p}, p ≥ 2, is controlled by the standard L^p-estimates of the elliptic systems (9) and (10). On the one hand, under the condition (7), we have a key observation which is derived from (1)₁ and (1)₂. Then, by applying the method in [15], we get the uniform bound for ‖∇u‖_{L²} and a time-dependent bound for ‖∇u‖_{L²(t₁,t₂;L²)}, by which, together with the Zlotnik inequality, we obtain the uniform upper bound of the density. It is worth mentioning that these bounds can be obtained from the smallness of the initial mass ‖ρ₀‖_{L¹}, instead of the smallness of the upper bound of the density in [16] or of the initial energy in [12,15]. At last, higher-order regularity estimates for (ρ, u) can be proved by standard methods after some modifications; see [12] for example. Finally, after all the required a priori estimates are obtained, by using the continuity argument, we can extend the local classical solution to a global one.
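A brief aside on why the effective viscous flux F introduced above is useful — a standard computation sketched here from the structure of (1), not quoted from the paper: taking the divergence of the momentum equation shows that F solves a Poisson equation driven by the material acceleration, which is the source of its extra regularity.

```latex
% Sketch: the standard effective-viscous-flux identity for system (1).
% Rewrite the momentum equation as
%   \rho \dot u = \mu \Delta u + (\mu+\lambda)\nabla \operatorname{div} u - \nabla P ,
% and take the divergence of both sides:
\operatorname{div}(\rho \dot u)
  = (2\mu+\lambda)\,\Delta \operatorname{div} u - \Delta P
  = \Delta F ,
\qquad F := (2\mu+\lambda)\operatorname{div} u - P .
```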
The rest of the paper is organized as follows: In Sect. 2, we list some elementary inequalities which will be used in the later analysis. Section 3 is devoted to deriving the necessary a priori estimates on the classical solution, which extend the local solution to a global one. Preliminaries In this section, we recall some well-known inequalities, which will be used frequently throughout this paper. First, we give the Sobolev-Poincaré lemma [28]. Lemma 2.1 There exists a positive constant C depending only on Ω such that every function in the admissible class satisfies the Sobolev-Poincaré inequality. Next, we give some regularity results for the following Lamé system with the Dirichlet boundary condition (see [29]). Suppose U ∈ H¹₀ is a weak solution to the Lamé system; we may denote U = L⁻¹F due to the uniqueness of the solution. Lemma 2.2 Let r ∈ (1, +∞); then there exists some generic constant C > 0 depending only on μ, λ, r and Ω such that the corresponding L^r-estimates hold, where BMO(Ω) stands for the John-Nirenberg space of bounded mean oscillation. In the following, we give two critical Sobolev inequalities of logarithmic type, which are originally due to Brezis-Gallouet [30] and Brezis-Wainger [31]. Lemma 2.3 Let Ω ⊂ R² be a bounded Lipschitz domain and f ∈ W^{1,q} with q > 2; then the logarithmic inequality holds with a constant C depending only on q. Lemma 2.4 Let Ω ⊂ R² be a smooth domain and f ∈ L²(s, t; H¹₀) ∩ L²(s, t; W^{1,q}), with some q > 2 and 0 ≤ s < t ≤ ∞. Then the corresponding estimate holds with a constant C depending only on q. Finally, we give the following lemma, which arises from Zlotnik [32] and will be used to prove the uniform upper bound for the density. Global classical solution In this section, we establish some necessary a priori estimates for the classical solutions of the initial-boundary value problem (1), (3)-(4). For any T > 0, let (ρ, u) be a classical solution of (1), (3)-(4) in the solution space (8) with the initial data satisfying (5) and (6). In Sects. 3.1 and 3.2, we will show the lower-order and the higher-order estimates of the solutions, which guarantee that the local classical solution can be extended to a global one. Lower-order estimates of the solutions First, we give the following proposition to prove the uniform upper bounds of ‖∇u‖_{L²} and ρ. Proposition 3.1 Assume that the initial data satisfy (5)-(6) and the local classical solution satisfies the stated bounds. Then there exists a positive constant ε₂, depending on ρ̄, M, μ, λ and some other known constants but independent of T, such that the estimates hold on [0, σ(T)], provided that m₀ ≤ ε₂ is suitably small. In order to prove Proposition 3.1, we give the following mass conservation identity and the uniform bound of ‖∇u‖_{L²(0,T;L²)}, which are the foundation of our proof in this paper, provided there exists a positive constant ε₁ such that m₀ ≤ ε₁. Next, in Lemma 3.3, we give the uniform upper bound of ‖∇u‖_{L²}. In Lemma 3.4, we will give the bound for ∫_{t₁}^{t₂} σ² ‖∇u‖²_{L²} ds, which will be used to prove the uniform upper bound of ρ. It should be noted that the constant C on the right-hand sides of (39) and (40) is independent of time for any t₁, t₂. Proof Applying ηu^j(∂_t + div(u·)) to (1)₂^j, summing with respect to j, and integrating the resulting equation over Ω, we obtain (41). It follows from integration by parts and Eq. (1)₁ that J₁ can be bounded; similarly, we obtain bounds for J₂ and J₃, where ‖∇u‖⁴_{L⁴} can be estimated as in (45). Substituting the estimates of J₁, J₂, J₃ and (45) into (41), we arrive at (46). In order to prove (39), taking η = σᵢ² in (46), integrating (46) over (i − 1, t) and taking (27) into consideration, we get (47), which proves (39).
Furthermore, from (47), we can see that if we take η = σ², then integrating (46) over (t₁, t₂) ⊂ [0, T] yields an estimate bounded by Cm₀, where we have used (23), (20) and (39). In order to estimate the second term on the right-hand side of the above inequality, we take η = σ in (29) and integrate (29) over (t₁, t₂) ⊂ [0, T]. Inspired by the methods in Refs. [12,15], in the following lemma we use the Zlotnik inequality to prove the uniform upper bound of the density ρ. Proof For any given (x, t) ∈ Ω × [0, T], denote by X(s; x, t) the solution to the initial value problem d/ds X(s; x, t) = u(X(s; x, t), s), 0 ≤ s < t, with X(t; x, t) = x. It is easy to verify that d/ds ρ(X(s; x, t), s) + ρ(X(s; x, t), s) div u(X(s; x, t), s) = 0, due to (1)₁. This gives log ρ(x, t) = log ρ₀(X(0; x, t)) − ∫₀ᵗ div u(X(s; x, t), s) ds, where the integral is handled via the decomposition u = v + w and C(t) = (2μ + λ) div v − P. Next, we use Lemma 2.5 to prove the uniform upper bound of the density. In the following, we estimate the terms on the right-hand side of Eq. (53) one by one. In order to estimate C(t), from Eq. (9) and the boundary condition (4), we have (∇ × (∇ × v)) · n = 0 a.e. on ∂Ω and div(∇ × (∇ × v)) = 0. Multiplying (54) by ∇((2μ + λ) div v − P) and integrating the resulting equation over Ω, we arrive at ‖∇((2μ + λ) div v − P)‖_{L²} = 0, which implies that there exists C(t) such that (2μ + λ) div v − P = C(t). Using (9), we have ‖∇v‖_{L²} ≤ C‖P‖_{L²}. Integrating (56), we obtain the desired bound, provided there exists a constant ε₃* such that m₀ ≤ ε₃*. In order to estimate K₂, we consider three cases, where we have used Lemma 3.3. It remains to estimate the term on the right-hand side of inequality (59). To do this, we take η = σ in (46) and integrate the resulting inequality over (0, σ(T)). From (59) and (60), we can see that the desired bound holds provided there exists a constant ε₄* such that m₀ ≤ ε₄*. Proof Taking η = 1 in (46) and integrating (46) over (0, σ(T)], we get an estimate which, combined with (39) and (40), yields (69). Next, applying the operator ∇ to (1)₁ and multiplying the resulting equation by p|∇ρ|^{p−2}∇ρ, p > 2, we obtain (71), where the terms on the right-hand side can be estimated as in (72) and (73). Inserting (72) and (73) into (71), we have (74). Dividing both sides of (74) by ‖∇ρ‖_{L^p} + e leads to d/dt log(‖∇ρ‖_{L^p} + e) ≤ C(‖∇u‖_{L²} + e) log(e + ‖∇ρ‖_{L^p}) + C(‖∇u‖_{L²} + e). Then, by using the Gronwall inequality and (69), we obtain the desired bound for ‖∇ρ‖_{L^p}. Moreover, from (69), we obtain the corresponding bound for the velocity. This completes the proof of Lemma 3.6. Higher-order estimates of the solutions For completeness of our proof, we list the higher-order estimates of the solution (ρ, u) below, which can be derived in a manner similar to those obtained in [12] after some modifications. Proof Estimate (78) follows directly from simple facts, where in the last inequality we have used the Sobolev embedding inequalities and Lemma 3.6. Next, we prove (79). P satisfies (82), which together with (1)₁ yields (83), where in the last inequality we have used (72) and the standard L²-estimate for the elliptic systems (9) and (10). Then, combining (83), Lemma 3.6 and the Gronwall inequality, we have (79). Proof First, from (82) and Lemma 3.6, we obtain (86). Furthermore, differentiating (82) yields an identity which, together with Lemma 3.6 and Lemma 3.7, gives (88). The combination of (86) and (88) implies (89). Note that P_tt satisfies P_tt + u_t · ∇P + u · ∇P_t + γP div u_t + γP_t div u = 0, from which, together with (89) and Lemma 3.7, we obtain the corresponding estimate for P_tt. Next, we differentiate (1)₂ with respect to t; multiplying the resulting equation by u_tt, one gets (92) after integration by parts.
The terms on the right-hand side of Eq. (92) can be estimated as in (93)-(97), where we have used Lemma 3.6, (84) and the Poincaré inequality. At last, integrating (92) over (0, T) and inserting the estimates (93)-(97), we obtain an inequality from which, together with the Gronwall inequality, one obtains (85) immediately. This completes the proof of Lemma 3.8. Lemma 3.9 Let (ρ, u) be a classical solution of (1), (3)-(4) on Ω × (0, T]; under the conditions of Theorem 1.1, the corresponding higher-order estimates hold. Proof It follows from Lemma 3.8 and Lemma 3.6 that the first bound holds. The standard H¹-estimate for the elliptic system (28) then yields, as a consequence of (67) and (103), the next bound. Moreover, the standard L²-estimate for the elliptic system (28) and Lemma 3.8 yield an estimate which, together with (85), implies (106). On the other hand, applying the standard H²-estimate for the elliptic system (28) again leads to (107). In order to estimate the third term on the right-hand side of (107), applying ∇³ to (82) and integrating the resulting equation over Ω, we obtain an inequality which, together with the Gronwall inequality and (106), implies that sup_{0≤t≤T} ‖∇³P‖_{L²} ≤ C. Taking (106)-(111) into consideration, we obtain the desired estimate for u. It is easy to check that similar arguments work for ρ by using (112). Hence the proof of Lemma 3.9 is completed. Finally, by using the continuity argument, we can extend the local classical solution to a global one, and thus Theorem 1.1 is proved.
2020-05-21T00:08:29.914Z
2020-05-13T00:00:00.000
{ "year": 2020, "sha1": "c73f2c3cc30a7edec66d608615af641537b48b84", "oa_license": "CCBY", "oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-020-02675-0", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9fa297ae1b2123a0f1f32697d45b6bc0aed45019", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
5244526
pes2o/s2orc
v3-fos-license
Teaching old drugs new tricks to stop malaria invasion in its tracks Malaria is a common and life-threatening disease endemic in large parts of the world. The emergence of antimalarial drug resistance is threatening disease-control measures that depend heavily on treatment of clinical malaria. The intracellular malaria parasite is particularly vulnerable during its brief extracellular stage of the life cycle. Wilson et al. describe a screen targeting these extracellular parasite stages and make the surprising discovery that clinically used macrolide antibiotics are potent inhibitors of parasite invasion into erythrocytes. See research article: http://www.biomedcentral.com/1741-7007/13/52 Resistance at the front line Artemisinin combination therapies, which are combinations of potent but short-lived artemisinin with long-acting partners, have been very successful in combating malaria. However, this progress is under grave threat because of the emergence of artemisinin-resistant P. falciparum [2]. Clinical resistance to artemisinin is not readily spotted in short-term parasite growth assays, but reveals itself in the peculiar ability of the parasites to 'hibernate' in the presence of drug, ready to rebound as soon as treatment is discontinued. The hallmark of resistance is a reduced rate of parasite clearance. Mutations in the K13 kelch propeller domain protein have been associated with this phenotype through genetic analysis of resistant parasites selected deliberately in the laboratory and collected from natural infection in the field [3]. While clearly associated with the mechanism of resistance, this protein is likely not the direct molecular target of the drug. Proteins that associate with the kelch protein are among the candidates [4], but the case is far from closed at this point. Regardless of mechanism, to preserve the gains against clinical malaria in the face of the parasite's remarkable ability to develop drug resistance, it is essential that we keep step with a deep portfolio of new drugs ready to take over when inevitable resistance breaks through. Common antibiotics as invasion-inhibitory anti-malarials In research recently reported in BMC Biology, Wilson and colleagues [5] seek to add to the anti-malarial portfolio with a screen for inhibitors of parasite host-cell invasion. Surprisingly, among the best compounds to emerge from this effort are well-known macrolide antibiotics, including azithromycin, erythromycin, and roxithromycin, which were found to inhibit invasion of red blood cells by the malaria parasite [5]. Azithromycin and its analogues had the most potent effect. The use of antibiotics is not new in the treatment of malaria [6]; in particular, inhibitors of bacterial protein translation are known to kill P. falciparum. This initially may be unexpected for a eukaryotic pathogen, but the discovery of the parasite plastid or apicoplast revealed a possible target [7]. Like all plastids, this parasite organelle evolved from cyanobacterial ancestors and is susceptible to inhibition of its prokaryotic ribosomes [8]. The apicoplast is an essential organelle required for the production of several important metabolites [9]. However, inhibition of protein translation in the apicoplast leads to 'delayed death' [8]. This peculiar feature of antibiotic action on parasite growth and the resulting lag phase in the onset of efficacy may limit the usefulness of these drugs against acute infection.
So did Wilson et al. rediscover the plastid-inhibitory activity of macrolides, or are they on the trail of something new? In a set of time-limited drug exposure experiments they convincingly demonstrate a fast invasion-inhibitory activity that is independent of, and in addition to, the slow plastid effect of azithromycin (Fig. 1) [5]. Observing this phenomenon was made possible by a recently developed method for the isolation of viable invasive stages of P. falciparum [10]. Incubating purified invasive stages with red blood cells for short periods followed by drug washout resulted in almost complete loss of parasite invasion [5]. On the other hand, similar incubation of post-invasion life stages with azithromycin had no effect on parasite growth [5]. The two-target hypothesis was also supported by medicinal chemistry and structure-function analysis: some macrolide analogues showed increased activity against merozoite invasion while their anti-apicoplast activity remained unchanged from that of azithromycin [5]. Fig. 1. Two independent targets for macrolide antibiotics in Plasmodium falciparum. Azithromycin inhibits protein synthesis in the apicoplast (green). Loss of translation in the plastid ultimately starves the parasite (grey) for the essential isoprenoid precursor isopentenyl-pyrophosphate (IPP). Wilson et al. describe a second mode of action in which azithromycin blocks an early step in the process used by the parasite to invade red blood cells (RBC, red). This effect is much faster, but requires higher concentrations of drug. The promise and problems of antibiotics as dual-action anti-malarials A fast-acting antibiotic could be very attractive for malaria therapy. Antibiotics are well-worn tools of the medical trade, and their established clinical profile, good safety record, and moderate cost could fast-track new treatments. Relatively high concentrations of azithromycin are required to block invasion, and these may not be easily or safely reached in vivo [5]. Conceptually, focusing on invasion narrows the opportunity for chemical interference to a very important but also very brief time period. Invasion takes place in about 120 seconds of the 48-hour growth cycle. Lastly, Plasmodium and the related apicomplexan parasite Toxoplasma gondii have demonstrated significant flexibility and quickly adapt to experimental insults directed at their invasion machinery [11]. This includes the genetic deletion or chemical removal of ligands and adapters from the parasite and the host, revealing a buffer of redundancy around the essential event of invasion. Insight into the mechanism of action and the potential redundancy of the specific molecular target will be crucial to understand whether azithromycin is a bullet that parasite invasion ultimately can or cannot dodge. How do antibiotics block invasion? Forward genetics would be the weapon of choice to attack the mode of action of the antibiotic invasion block. The malaria parasite is haploid, and isolation of resistance mutants followed by genetic mapping has been a highly successful way to define drug targets [12]. In a decade-long campaign, the Wellems laboratory at the NIH pioneered this genetic mapping method to discover the mutations responsible for chloroquine resistance. The advent of low-cost whole-genome sequencing has been truly transformative, yielding high-density single nucleotide polymorphism maps to compare sensitive and resistant lines with reasonable investment.
At the same time, transfection experiments to directly test whether a mutation is the cause of resistance have become more and more powerful. CRISPR/Cas9 systems now allow marker-free genome editing to rigorously validate such mutations. Unfortunately, the dual mode of action of azithromycin makes the isolation and analysis of invasion-specific azithromycin resistance mutants a non-trivial undertaking. The target of azithromycin appears broadly conserved, as the drug also inhibits invasion of other apicomplexan species into their respective host cells [5]. Azithromycin-treated parasites bind to but then let go of their host without productive invasion. In the presence of drug, they appear unable to form a moving junction, a unique parasite-induced structure linking host and parasite membranes and used by the parasite to propel itself into the red blood cell. There are several possible ways by which azithromycin might inhibit parasite invasion, an essential process required for parasite propagation and spread. Azithromycin may interfere with vital ligand-receptor interactions. Disruption of parasite invasion in an analogous fashion, by targeting the host protein basigin with recombinant chimeric antibodies, cures established infections in a humanized mouse model [13]. As invasion relies on secretion and delivery of parasite factors to the host, this may be another potential target. Interference with the complex regulatory machinery of the invasion process provides additional candidates. A new stem cell-derived model to dissect Plasmodium invasion opens the door to manipulation of both host and parasite components at a level previously not attainable [14]. By the same token, using azithromycin as a chemical biology tool compound could prove to be a highly complementary approach to understand not only its mode of action but the invasion process in general. Overall, this study should reignite interest in antibiotics as anti-malaria drugs and specifically spur future studies into macrolides and invasion. Utilizing these antibiotics in combination with artemisinin may even slow the spread of artemisinin resistance, and in this context their dual activity may be an asset. Since macrolides are already in clinical use, there is a wealth of information and experience in their use, adding to the list of urgently needed drug candidates at a time when the frontline antimalarial, artemisinin, is losing its efficacy.
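As a schematic of the mapping logic described in this section — not a pipeline from the study — candidate resistance loci can be shortlisted by keeping only positions at which all resistant clones differ from the sensitive parent. The names and data below are invented for illustration.

```python
# Toy candidate-locus filter: keep SNP positions at which every resistant
# clone carries a non-parental allele. Real mapping uses whole-genome calls,
# quality filters and recombination information; this only sketches the idea.
parent = {"pos_1021": "A", "pos_5530": "G", "pos_9042": "T"}
resistant_clones = [
    {"pos_1021": "A", "pos_5530": "T", "pos_9042": "T"},
    {"pos_1021": "A", "pos_5530": "T", "pos_9042": "C"},
]

candidates = [
    pos for pos, allele in parent.items()
    if all(clone[pos] != allele for clone in resistant_clones)
]
print(candidates)  # ['pos_5530']
```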
2015-09-18T23:22:04.000Z
2015-09-08T00:00:00.000
{ "year": 2015, "sha1": "06e6d90fa5a4af23b6327e8eff6fad20d38be95c", "oa_license": "CCBY", "oa_url": "https://bmcbiol.biomedcentral.com/track/pdf/10.1186/s12915-015-0185-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e2e3a3d043d71b7e4d39bb8f62222e4a84b3434", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15249661
pes2o/s2orc
v3-fos-license
Characteristics of coronary artery disease in symptomatic type 2 diabetic patients: evaluation with CT angiography Background Coronary artery disease (CAD) is a common and severe complication of type 2 diabetes mellitus (DM). The aim of this study is to identify the features of CAD in diabetic patients using coronary CT angiography (CTA). Methods From 1 July 2009 to 20 March 2010, 113 consecutive patients (70 men, 43 women; mean age, 68 ± 10 years) with type 2 DM were found to have coronary plaques on coronary CTA. Their CTA data were reviewed, and the extent, distribution and types of plaques and luminal narrowing were evaluated and compared between the sexes. Results In total, 287 coronary vessels (2.5 ± 1.1 per patient) and 470 segments (4.2 ± 2.8 per patient) were found to have plaques. Multi-vessel disease was more common than single-vessel disease (p < 0.001), and the left anterior descending (LAD) artery (35.8%) and its proximal segment (19.1%) were most frequently involved (all p < 0.001). Calcified plaques (48.8%) were the most common type (p < 0.001), followed by mixed plaques (38.1%). Regarding the different degrees of stenosis, mild narrowing (36.9%) was most common (p < 0.001); however, a significant difference was not observed between non-obstructive and obstructive stenosis (50.4% vs. 49.6%, p = 0.855). The extent of CAD, types of plaques and luminal narrowing were not significantly different between male and female diabetic patients. Conclusions Coronary CTA depicted a high plaque burden in patients with type 2 DM. Plaques, which were mainly calcified, were more frequently detected in the proximal segment of the LAD artery, and increased attention should be paid to the significant prevalence of obstructive stenosis. In addition, DM reduced the sex differential in CT findings of CAD. Background Diabetes mellitus (DM) is a disorder of carbohydrate, protein and fat metabolism. Chronic hyperglycemia in DM damages various organs and leads to a series of complications. Blood vessels are commonly affected targets, and the relevant complications are the leading causes of death in patients with type 2 DM [1]. Among cardiovascular complications, coronary artery disease (CAD) has been observed most frequently, and it imposes a huge health burden in all countries [2,3]. The severity of CAD in diabetic patients may be determined by its characteristics associated with DM. Therefore, it is necessary to study the manifestations of CAD in diabetic patients to comprehensively understand this complication. Coronary angiography is regarded as the gold standard for the evaluation of coronary artery stenosis, but this method cannot depict the type of plaque and carries some procedural risks. Magnetic resonance angiography enables the assessment of plaque composition and may reflect the real culprit lesion, but it is unlikely to be widely used in the near future because of high costs and complex methodology [4]. However, multi-detector row CT (MDCT), especially dual-source CT (DSCT), can accurately determine plaque composition and assess the degree of luminal narrowing [5][6][7][8]. The purpose of this study is to determine the characteristics of CAD in diabetic patients by using DSCT, thus increasing the understanding of the severity of this complication. Study population From 1 July 2009 to 20 March 2010, 138 patients with type 2 DM underwent coronary DSCT angiography (DSCTA) examination because of chest pain (66%), shortness of breath (23%), palpitation (10%) and syncope (1%).
Patients with coronary artery plaques and those who had complete clinical data and laboratory results were included in this study. A total of 133 (96.4%) of the 138 patients met these criteria; the remaining five (3.6%) had no plaques but had myocardial bridges. The exclusion criteria were poor CT scan quality that could not be used for analysis (6 cases) and a history of CAD, stenting or bypass (14 cases), leaving 113 patients for the final analysis. CT protocols Coronary CT angiography (CTA) was performed using a Siemens DSCT scanner (SOMATOM Definition, Siemens Medical Solutions, Forchheim, Germany). Beta-blocker preparation was not used for reducing the heart rate. The scanning scope was from the tracheal bifurcation to 20 mm below the inferior cardiac apex. A 70-90-mL (dependent on body mass index) bolus of iodinated contrast agent (iopamidol, 370 mg of iodine/mL; Bracco Sine Pharmaceutical Corp. Ltd, Shanghai, China) was injected into the antecubital vein at a flow rate of 5 mL/sec. Next, a 20-mL saline chaser was injected at the same rate. Scan parameters were tube voltage, 100-120 kV (adapted to body mass index); tube current, 220 mAs; collimation, 64 × 0.6 mm; rotation time, 0.33 s and pitch, 0.2-0.5 (adapted to the heart rate). Retrospective electrocardiographic gating was used to eliminate cardiac motion artefacts. Data acquisition was completed within 8-10 s. Image analysis An initial data set was reconstructed and a group of images with optimal quality was transferred to a post-processing workstation (Syngo-Imaging, Siemens Medical Solution Systems, Forchheim, Germany) for image analysis. Image reconstruction methods for the evaluation of coronary artery plaques included maximum intensity projection, multiplanar reconstruction, curved planar reconstruction and volume rendering. Two cardiovascular radiologists independently analyzed the images. Discrepancies in their interpretations were resolved by consensus. Both observers were blinded to the medical histories, clinical diagnoses and results of other investigations for all patients. The number of diseased coronary vessels and segments, the number and types of plaques and the grading of stenosis caused by plaques were evaluated. In this study, coronary arteries were divided into four branches: left main (LM), left anterior descending (LAD), left circumflex (LCX) and right coronary artery (RCA) (Figure 1). According to the standard of the American Heart Association, the left and right coronary arteries were divided into 15 segments [9]. Plaques were classified as calcified plaque (plaques with higher CT density than the contrast-enhanced lumen) (Figure 2); non-calcified plaque (plaques with lower CT attenuation than the contrast-enhanced lumen without any calcification) (Figure 3) and mixed plaque (non-calcified and calcified elements in a single plaque) (Figure 4) [10]. Overall, coronary artery stenosis caused by plaques was classified as obstructive or non-obstructive using a 50% threshold of luminal narrowing. In addition, grading of stenosis was further classified as normal appearing (<25%), mild (25%-49%), moderate (50%-74%) and severe (≥75%) narrowing [11]. The degree of stenosis was assessed on the basis of two orthogonal views. Statistical analysis Clinical data, laboratory results, number of diseased coronary vessels and segments, as well as number and types of plaques and grading of luminal narrowing were analyzed statistically for each patient. Continuous variables were expressed as mean ± standard deviation and categorical variables as number and percentage.
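Before turning to the statistical tests, note that the plaque typing and stenosis grading defined under image analysis above are simple threshold rules. As a minimal illustrative sketch (the function names and boolean component flags are ours, not part of the study), they can be expressed as:

```python
def classify_plaque(has_calcified_component: bool, has_noncalcified_component: bool) -> str:
    """Plaque typing per the criteria above: calcified (CT density above the
    contrast-enhanced lumen), non-calcified (attenuation below the lumen with
    no calcification), or mixed (both elements in a single plaque)."""
    if has_calcified_component and has_noncalcified_component:
        return "mixed"
    if has_calcified_component:
        return "calcified"
    if has_noncalcified_component:
        return "non-calcified"
    raise ValueError("a plaque must contain at least one component")


def grade_stenosis(narrowing_pct: float) -> str:
    """Grading of luminal narrowing: normal appearing (<25%), mild (25%-49%),
    moderate (50%-74%) and severe (>=75%)."""
    if narrowing_pct < 25:
        return "normal appearing"
    if narrowing_pct < 50:
        return "mild"
    if narrowing_pct < 75:
        return "moderate"
    return "severe"


def is_obstructive(narrowing_pct: float) -> bool:
    """Obstructive vs non-obstructive stenosis uses the 50% threshold."""
    return narrowing_pct >= 50
```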
The chi-square test was used to compare the differences between multi- and single-vessel disease, the plaque distributions among different vessels and segments, and the different types of plaques and degrees of stenosis. The independent sample t-test was used to compare the manifestations of CAD between male and female diabetic patients. The kappa (κ) statistic was used to quantify interobserver variability. Statistical analysis was performed using the SPSS statistical package (version 13.0 for Windows, SPSS Inc., Chicago, Illinois, USA). A two-tailed p value of less than 0.05 was considered statistically significant. Results The 113 patients who met the criteria had good image quality, and their coronary CTA was used to analyze plaque composition and assess the grading of stenosis. The mean radiation dose from the CTA examination per patient was 4.9 ± 1.7 mSv (range, 1.8-8.9 mSv). There was almost perfect agreement between the two observers on the type of plaques (κ = 0.92) and grading of stenosis (κ = 0.90) observed on CT scan. Clinical and laboratory characteristics of the patient population The baseline clinical data and laboratory results are summarized in Table 1. Blood glucose level was controlled with oral hypoglycemic agents (e.g. Repaglinide, Acarbose, Glibenclamide and Metformin) in 78 (69.0%) patients and with insulin in 21 (18.6%) patients. Fourteen patients (12.4%) did not use any hypoglycemic agents; they had managed their condition by adjusting their diets, or their diabetic status was discovered for the first time in this study. Types of coronary artery plaque and coronary artery stenosis Different types of plaques and grading of stenosis caused by plaques are shown in Table 2. A total of 480 plaques (4.3 ± 2.9 per patient; range, 1-13) were detected. Calcified plaques (48.8%) were more frequently detected than mixed or non-calcified plaques (p < 0.001). Figure 6 shows the percentages of different types of plaques in different age groups. As patients aged, the proportion of calcified plaques increased and that of non-calcified plaques decreased significantly. Furthermore, the calcium score increased as patients aged, with 67.2 ± 110.3, 83.4 ± 185.3, 219.2 ± 319.5, 334.0 ± 621.6 and 584.5 ± 792.5 for the age groups 40-49 years, 50-59 years, 60-69 years, 70-79 years and 80-89 years, respectively. Among the different degrees of stenosis, mild narrowing (36.9%) was most common (p < 0.001). However, no significant difference was observed between non-obstructive and obstructive stenosis (50.4% vs. 49.6%, p = 0.855). Comparison of CT findings of CAD between different sexes Comparisons of the extent of CAD, types of plaques and degrees of stenosis observed on CT between male and female diabetic patients are shown in Table 3. CT findings of CAD between men and women were almost similar in all aspects except that men had more calcified plaques (p < 0.05). Discussion This study had four main findings. First, DSCTA could depict coronary plaques and their morphology as well as assess the grading of stenosis. Patients did not require excessive preparations to obtain high-quality images. Second, the diabetic patients had a high plaque burden that was mainly distributed in the LAD artery and the proximal segment of each coronary vessel. Third, an analysis of plaque composition revealed a relatively high proportion of calcified plaques. Fourth, obstructive stenosis was as prevalent as non-obstructive stenosis. These findings indicated that non-invasive DSCTA was a valuable modality for depicting and evaluating possible coronary atherosclerosis in symptomatic diabetic patients. In addition, the results also showed that DM reduced the sex differential in CT findings of CAD.
Three-fourths of the diabetic patients had multi-vessel disease and the plaques involved multiple coronary segments, which indicated that CAD in symptomatic diabetic patients was extensive. This finding was in agreement with those of previous studies [12,13]. The heavy plaque burden in diabetic patients is probably because they have more cardiovascular risk factors resulting from metabolic syndromes [14][15][16]. In addition, current treatments for DM have limited impact on cardiovascular risk [17]. Multiple coronary plaques in diabetic patients may be related to the increased risk of major adverse cardiac events. It has been established that diabetic patients have a risk for cardiac mortality similar to that of non-diabetic patients with a history of myocardial infarction [1]. Our results showed that plaques were more prevalent in the LAD artery and the proximal segment of each vessel in diabetic patients. This finding is similar to those observed in the general population [18][19][20]. The different susceptibilities of different coronary vessels and segments to atherosclerosis may be explained by their different hemodynamics [21]. However, the precise pathogenetic mechanism still needs further study. Although plaques in the proximal segments of the vessels may not result in significant stenosis in a short time due to their larger calibre, myocardial ischemia or infarction would be extensive and serious once the lumens were occluded. Regarding plaque composition, the most frequently detected type in this series was the calcified type, followed by the mixed type. This was similar to the results of previous studies [13,22,23]. However, one study has shown that non-calcified plaques were the main type of plaque in asymptomatic diabetic patients [24]. In addition, the current study indicated that the proportion of calcified plaques and the calcium score increased, and that of non-calcified plaques decreased, as patients aged. Therefore, the calcium score may underestimate the risk of CAD in diabetic patients, especially in relatively young or asymptomatic individuals. The future adverse event rate was significantly higher in patients with any coronary plaque than in those with a normal MDCT scan [25]. This may be due to the possibility of each type of plaque causing acute or chronic obstructive stenosis. Non-calcified plaques, which are unstable plaques, were vulnerable and frequently detected in patients with acute coronary artery syndrome [26,27]. Patients with a higher likelihood of stenotic CAD were more likely to have a higher underlying burden of calcified and mixed plaques [28]. Diabetic patients are at a higher risk of CAD; hence, it is important to evaluate potential CAD in a timely manner and treat the remediable plaques. In this study, mild narrowing was the most common degree of stenosis, but nearly half of the plaques caused obstructive stenosis in symptomatic patients. This result was consistent with that of a previous study [23]. Obstructive stenosis was seen as a significant indicator of poor prognosis [25]. However, plaques in asymptomatic diabetic patients were usually non-obstructive [24]. The lesions may have been very severe in diabetic patients by the time symptoms of CAD developed, for the following two reasons. First, the patients may have had DM for many years before it was diagnosed because of the lack of typical clinical symptoms [29,30]. Second, painless myocardial ischemia may have developed in a higher percentage of patients, which masked the progression of CAD [31,32].
Therefore, people with risk factors for DM and diabetic patients with cardiovascular risk factors should pay more attention to their blood glucose levels and potential cardiovascular complications. This study also showed that the manifestations of CAD displayed on CT were very similar between men and women. This may be because DM is a major independent cardiovascular risk factor with almost the same risk level in men and women. This result could partly explain the reduced sex differential in CAD mortality and acute CAD risk revealed in previous studies [33,34]. Other studies also showed that the impact of DM on the risk of fatal CAD was significantly greater in women than in men [35,36]. It is believed that DM eliminates the advantage women otherwise have of a much lower risk of CAD mortality than men. Therefore, increased attention should be paid to CAD in female diabetic patients. In light of the severity of CAD in diabetic patients, it is necessary to take measures to prevent or delay its occurrence and development. Diabetic patients should always control their cardiovascular risk factors and recognize the symptoms and signs of potentially fatal CAD as early as possible. Individualized risk estimates and lifestyle advice on physical activity are expected to reduce cardiovascular diseases in high-risk patients [37]. Conversely, impaired glucose tolerance and type 2 DM should also be suspected in patients with CAD who have no previous diagnosis of DM. However, performing an oral glucose tolerance test very early after ST-elevation myocardial infarction is not recommended because of its high false-positive rate [38]. As a non-invasive modality, MDCTA has been well established for the identification of CAD [5][6][7][8]. It is worth mentioning that DSCT not only ensures high-quality images but also promises an impressive reduction in radiation dose [39]. The mean radiation dose for patients who underwent DSCT examination in this study (4.9 mSv) was significantly lower than that in patients who underwent 16-slice (9.8 mSv) or 64-slice (8.6 mSv) MDCT examinations [40]. This study was a cross-sectional study and only diabetic patients with plaques were enrolled. Thus, there was a selection bias. In addition, the patients in this study also had some co-existent cardiovascular risk factors besides type 2 DM, which may have affected the results. However, several previous studies had confirmed that the difference in CAD between diabetic and non-diabetic patients was independent of cardiovascular risk factors other than DM [13,22,23]. Thus, the present results demonstrate the current condition of CAD in diabetic patients, which may be more consistent with clinical practice because diabetic patients often have other concomitant cardiovascular risk factors. Conclusion Coronary CTA detected a high plaque burden in symptomatic patients with type 2 DM. The plaques, which were mainly calcified, were more frequently detected in the proximal segment of the LAD artery. Increased attention should be paid to obstructive stenosis, which had a prevalence similar to that of non-obstructive stenosis. In addition, DM reduced the sex differential in the CT findings of CAD. Thus, DSCT can be used to detect potential CAD in symptomatic diabetic patients and provide additional information for evaluating its severity and managing treatment.
2018-05-08T17:51:43.774Z
2010-11-10T00:00:00.000
{ "year": 2010, "sha1": "0d3b77c61b9e4a32f39b1254532c1f07bdb6767e", "oa_license": "CCBY", "oa_url": "https://cardiab.biomedcentral.com/track/pdf/10.1186/1475-2840-9-74", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab50ae9d9b11b5e6825cdaa77a06a8baabe9ee2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
248833151
pes2o/s2orc
v3-fos-license
Antibody response to the COVID-19 ChAdOx1nCov-19 and BNT162b vaccines after temporary suspension of DMARD therapy in immune-mediated inflammatory disease (RESCUE) Objective To assess the antibody response to disease-modifying antirheumatic drug (DMARD) therapy after the first and second dose of the ChAdOx1nCov-19 (AstraZeneca (AZ)) and BNT162b (Pfizer) vaccines in patients with immune-mediated inflammatory disease (IMID) compared with controls, and whether withholding therapy following the first vaccination dose has any effect on seroconversion and SARS-CoV-2 antibody (Ab) levels. Methods A multicentre three-arm randomised controlled trial compared the immunogenicity of the Pfizer and AZ vaccines in adult patients on conventional synthetic (csDMARD), biologic (bDMARD) or targeted synthetic (tsDMARD) therapy for IMID (n=181) with a control group (n=59). Patients were randomised to continue or withhold DMARD therapy for 1–2 weeks post first dose vaccination only. Serum SARS-CoV-2 IgG detection (IgG ≥1.0 U/mL) and titres against the S1/S2 proteins were measured at baseline, 3–4 weeks post first vaccination and 4 weeks post second vaccination. Results AZ vaccination was given to 47.5%, 41.5% and 52.5% of the continue, withhold and control groups, respectively, while Pfizer vaccination was given to 52.5%, 58.5% and 47.5% of the continue, withhold and control groups, respectively. Seroconversion rates following the first dose were only 27.3% vs 79.2% (p=0.000) in the AZ group and 64.58% vs 100% (p=0.000) in the Pfizer group for the IMID patients who continued therapy compared with the AZ and Pfizer controls, respectively. Withholding DMARD therapy following the first vaccination dose increased seroconversion to 67.7% and 84.1% in the AZ and Pfizer groups, respectively. Following the second AZ and Pfizer vaccinations, when all DMARDs were continued, despite a slightly lower seroconversion rate (83.7% vs 100%, p=0.000 and 95.9% vs 100%, p=0.413, respectively), the mean SARS-CoV2 IgG Ab titres were not significantly different in the csDMARD and bDMARD groups compared with the controls regardless of hold, while they were significantly lower in patients taking tsDMARDs (12.88 vs 79.49 U/mL, p=0.000). Conclusions Following the first vaccination dose, antibody responses were lower in IMID patients on DMARD therapy; however, the final responses were excellent regardless of hold, with the exception of the tsDMARD group, where withholding therapy is recommended. At least 2 vaccinations are therefore recommended, preferably with a messenger RNA vaccine. Trial registration number ANZCTR: 12621000661875. INTRODUCTION Around the globe, COVID-19 has spread uncontrollably with an estimated 476 million cases and over 6.1 million cumulative deaths as of March 2022. 1 WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT? ⇒ It is known that the immunogenicity of the Pfizer and AstraZeneca COVID-19 vaccines is reduced in patients with immune-mediated inflammatory disease (IMID) who take disease-modifying antirheumatic drug (DMARD) therapy. ⇒ It is therefore vital that vaccination strategies are developed for these patients. WHAT DOES THIS STUDY ADD? ⇒ The antibody responses in patients with IMID treated with DMARD therapies are impaired following the first vaccination compared with the controls; however, after the second dose of the vaccine, the antibody responses were not significantly different to the controls, with the exception of those on targeted synthetic DMARD (tsDMARD) therapy.
⇒ The antibody response was also influenced by vaccine type. HOW MIGHT THIS IMPACT ON CLINICAL PRACTICE OR FURTHER DEVELOPMENTS? ⇒ Full vaccination with at least two doses, preferably with a messenger RNA vaccine, is recommended in those with IMID. ⇒ Withholding tsDMARD therapy specifically after SARS-CoV-2 vaccination is a recommended strategy to improve antibody response. Quickly developed vaccines have demonstrated protective immunity in the general population, as characterised by the detection of SARS-CoV-2-specific antibodies. [2][3][4] Patients with immune-mediated inflammatory disease (IMID) have not been included in efficacy studies of SARS-CoV-2 vaccines, and it has become apparent that, while the vast majority of patients with IMID on disease-modifying antirheumatic drugs (DMARDs) still respond to SARS-CoV2 vaccination, the antibody responses may be delayed and reduced, especially on regimens including mycophenolate, abatacept or rituximab. [5][6][7] In January and February 2021, the BNT162b (Pfizer/BioNTech) COVID-19 messenger RNA (mRNA) and the ChAdOx1nCov-19 (AstraZeneca (AZ)/Oxford) vaccines, respectively, were provisionally approved for use in Australia by the Therapeutics Goods Administration. Both these COVID-19 vaccines target the spike protein of SARS-CoV-2, leading to the inhibition of binding to the ACE-2 receptor and hence viral entry into the host cell. Both have been shown to be safe and effective in the normal population. 2 3 8 9 The COVID-19 Global Rheumatology Alliance physician registry has shown that age, male sex, chronic lung disease, cardiovascular disease combined with hypertension and high disease activity are factors associated with an increased risk for COVID-19-related death, while methotrexate (MTX) or biological monotherapy is not associated with adverse COVID-19 outcomes. 10 Patients with rheumatoid arthritis (RA) who were treated with rituximab or a Janus kinase inhibitor (JAKi) had poorer COVID-19 outcomes than those treated with tumour necrosis factor inhibitors (TNFi), as characterised by higher hospitalisation and death rates. 11 12 In addition, glucocorticoids (>10 mg/day), rituximab, sulfasalazine and immunosuppressant therapy (azathioprine, cyclophosphamide, ciclosporin, mycophenolate or tacrolimus) were also associated with COVID-19-related death. 10 Pausing MTX for at least 10 days after the second vaccination has been shown to improve the immunogenicity of COVID-19 vaccination in patients ≥60 years with rheumatic disease. 13 RESCUE (Antibody RESponse to Covid-19 ChAdOx1nCov-19 and BNT162b vaccines after temporary suspension of DMARD therapy in immUne-mediated inflammatory diseasE) is an investigator-led three-arm randomised controlled trial aiming to investigate the effect of DMARD therapies in patients with IMID compared with a control group following the first and second doses of the AZ and Pfizer vaccinations, and to evaluate whether withholding therapy after the first dose improves immunogenicity. Study design and patient population Consecutive participants at their routine clinic visit were recruited at a private Perth-based rheumatology practice at St John of God Hospital and at the IBD unit at St Vincent's Hospital Melbourne between 1 May 2021 and 30 September 2021. The eligibility criteria were age 18 years and over, a diagnosis of an IMID, and being deemed to be in clinical remission on DMARD therapy with no disease flares for >4 weeks prior to enrolment.
The inclusion criteria for an IMID were: RA/American College of Rheumatology (ACR)/EULAR 2010 classification criteria; psoriatic arthritis (PsA)/classification criteria for PsA; axial spondyloarthritis/Assessment of SpondyloArthritis International Society classification criteria; systemic lupus erythematosus/1997 ACR criteria; Crohn's disease and ulcerative colitis: European Crohn's and Colitis Organisation criteria. The exclusion criteria were prior vaccination against COVID-19, history of COVID-19 infection, prednisolone use within 4 weeks of COVID-19 vaccination, inability to have the COVID-19 vaccine, or previous thromboembolism, myocarditis or pericarditis. Volunteers who did not have a diagnosis of IMID and who were not taking DMARDs were also recruited as the controls. These participants consisted of patients with non-inflammatory rheumatic disease and/or their partners, health professionals, friends and family. Procedures Demographic data, which included age, sex, ethnicity, height and weight, smoking status, DMARD therapy, concomitant medications and disease activity, were collected at the baseline visit. At study commencement, the Australian Technical Advisory Group on Immunisation (ATAGI) on COVID-19 vaccines recommended the Pfizer vaccine as the preferred vaccine for those aged 16 to under 60 years due to a higher risk of thrombosis and thrombocytopenia syndrome related to the AZ vaccine. 14 15 From 16 September 2021, eligibility for the Pfizer vaccine was expanded to people over age 60 years. The Pfizer and AZ vaccines were given 3 and 12 weeks apart, respectively, and were administered by the participant's general practitioner or a government-run vaccination hub. The DMARDs were grouped into conventional (csDMARD), biological (bDMARD) and targeted synthetic (tsDMARD) therapies (table 1). Subjects on combination cs/bDMARDs were grouped according to the csDMARD, as the influence of MTX and sulfasalazine on COVID-19-related death is known to exceed that of bDMARDs (eg, TNFi). 10 Patients on combination cs/tsDMARDs were considered solely in the tsDMARD group. Approximately 50% of participants were randomised to withhold their current immunosuppressive therapy, using a random allocation table for each DMARD group uploaded into the REDCap database hosted at the University of Western Australia. Participants on csDMARDs withheld therapy for 2 weeks after the first vaccine dose. MTX was administered weekly, and the vaccination was timed, where possible, on the day the dose was due; the MTX dose was then paused for two cycles from the day of vaccination. If, however, the first dose of the vaccine was given <1 week after the last dose of MTX, then the dose prior to vaccination was withheld for two cycles. Participants on daily DMARDs withheld therapy for 1 week starting on the day of first vaccination. Participants on bDMARDs delayed their therapy by 1 week following their usual injection or infusion cycle. For example, for a bDMARD administered fortnightly, the vaccination was timed at the end of the 2 weeks and the drug then restarted 1 week later, leaving an interval of 3 weeks. All participants withheld therapy following the first vaccination dose only. For the subjects randomised to withhold their usual DMARDs, the withhold dates were calculated, and adherence was confirmed and checked against the dates recorded by each participant.
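To make the timing rules above concrete, the following is a minimal sketch of how such withhold windows could be computed. The class labels, function signature and example dates are ours and simplify the per-drug individualisation described above (in particular, the trial timed MTX pauses around the due weekly dose):

```python
from datetime import date, timedelta
from typing import Optional


def withhold_window(drug_class: str, first_dose: date,
                    next_bdmard_due: Optional[date] = None) -> tuple[date, date]:
    """Return the (start, end) of the post-first-dose withhold window.

    Illustrative only; therapy was withheld after the first vaccine dose only.
    """
    if drug_class == "csDMARD_weekly":
        # e.g. methotrexate: pause two weekly cycles from the vaccination day
        return first_dose, first_dose + timedelta(weeks=2)
    if drug_class == "csDMARD_daily":
        # daily conventional DMARD: withhold for 1 week from the vaccination day
        return first_dose, first_dose + timedelta(weeks=1)
    if drug_class == "bDMARD":
        # delay the next injection/infusion by 1 week after the usual cycle
        if next_bdmard_due is None:
            raise ValueError("bDMARD requires the next scheduled dose date")
        return next_bdmard_due, next_bdmard_due + timedelta(weeks=1)
    raise ValueError(f"unknown drug class: {drug_class}")


# Example: a fortnightly bDMARD due on the vaccination day is restarted one
# week later, leaving the three-week interval described above.
start, end = withhold_window("bDMARD", date(2021, 6, 1), next_bdmard_due=date(2021, 6, 1))
```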
Any disease flares or adverse reactions to the COVID-19 vaccines were recorded by the participant, and the outcomes were followed up via a phone consultation or scheduled clinic visit. If a participant flared, their withheld DMARD therapy was immediately reinstituted. Blood samples were collected within 1 week prior to the first vaccine dose, 3-4 weeks after the first dose (just before the second dose in those receiving the Pfizer vaccine) and 4 weeks after the second dose. The baseline test prior to the vaccination was to ensure the participants had not had prior infection with SARS-CoV-2 (figure 1). Laboratory methods SARS-CoV2 IgG antibody was measured using the Siemens ADVIA Centaur sCOVG assay, which is a two-step sandwich immunoassay using indirect chemiluminescent technology. This assay detects antibodies against the S1-RBD antigen and can be used for qualitative and quantitative detection of SARS-CoV2 IgG. The results are given as U/mL with the cut-off for positivity defined as ≥1.0 U/mL. 16 Sample size and statistical analysis A prospective power calculation was performed according to equation 4 specified in the study by Whitley and Ball to determine the number of subjects per group required to detect a difference between proportions of 20%, assuming a power of 80% and a two-sided significance level of 95%. 17 Based on observations that neutralising activity against wild-type SARS-CoV-2 was significantly lower in patients receiving MTX and targeted immunosuppressive therapy (median 50% inhibitory dilution) than in controls, 18 the calculated sample size required was 180 patients (81 AZ and 100 Pfizer) to detect a difference in the SARS-CoV-2 IgG levels in the study group. For categorical variables, Fisher's exact test was used to assess the seroconversion rates between the DMARD and control groups, while the non-parametric Wilcoxon-Mann-Whitney U test was used to assess the continuous variable of antibody levels between the different groups. The data are given as frequency (%) or mean with SD. Initial univariate logistic regression analysis was performed to assess for associations between patient characteristics and the odds of achieving protective SARS-CoV2 IgG antibody titres. Multivariate analysis was carried out using multiple logistic regression on the variables found to be significant on univariate analysis. Statistical analyses were performed using STATA (StataCorp, USA) and SPSS (IBM, USA). A p value of <0.05 was considered statistically significant. Outcome measures The primary outcome was the antibody seroconversion rate, defined as the detection of SARS-CoV-2 antispike (S) protein receptor-binding antibodies (IgG titre ≥1.0 U/mL), between the controls and subgroups of patients with IMID 3 weeks after the first vaccination and 3-4 weeks after the second vaccination. The secondary outcomes were SARS-CoV2 antispike (S) protein receptor-binding antibody titres. Patient characteristics The IMID cohort consisted of 73.2% females with a mean age of 54.2 years (±13.3, range 18-84), and 82.5% were Caucasian. Overall, 53.7% of patients had RA, 32.9% had PsA and 7.3% had ankylosing spondylitis. The controls were 59.6% females with a mean age of 54.4 years (±12.6, range 26-78) and 69.0% Caucasian. Eighty-one (44.5%) patients received the AZ vaccine while 100 (55.0%) received the Pfizer vaccine; 41.5% and 58.5% of the AZ and Pfizer IMID participants, respectively, withheld DMARD therapy.
Eleven (4.5%) patients in the withhold group were excluded from the study for not following the hold protocol. Twenty-nine (12%) participants missed the baseline testing. Immunogenicity was evaluated at a mean duration of 32.6 and 29.8 days in the AZ group and 22.7 and 31.1 days in the Pfizer group following the first and second doses, respectively. The mean age of the IMID group taking MTX was 57.6 years (±14.6), with the mean dose being 16.93 mg weekly. The mean treatment duration was 9.04 years for the bDMARD, 3.39 years for the csDMARD and 3.82 years for the tsDMARD groups. Table 2 contains detailed participant characteristics. Four patients on tocilizumab were changed to an alternative bDMARD or tsDMARD during the study due to an Australia-wide critical shortage of the drug; however, in all cases this occurred after both doses of the AZ or Pfizer vaccines had been administered and hence did not affect the post second vaccination SARS-CoV-2 IgG antibody (Ab) seroconversion and IgG Ab level analysis. SARS-CoV-2 vaccination responses in AZ and Pfizer compared with controls A total of 207 patients were included in the analysis of the response following the first vaccine dose and 210 patients following the second vaccine dose, owing to missing serology tests. In the AZ vaccine group following the first vaccine dose, the seroconversion rate was significantly higher in the withhold group than in the continue group (67.7% vs 27.3%, p=0.002). In the Pfizer vaccination group following the first vaccine dose, the seroconversion rate was significantly lower in the continue group compared with the control group (64.58% vs 100%, p=0.000). Following the second vaccine dose, there was no significant difference in the seroconversion rate compared with the controls regardless of hold (p=0.413). Following the first and second vaccine doses, however, the mean SARS-CoV2 Ab titres were significantly lower in the continue therapy group compared with the control group (4.94 vs 11.05 U/mL, p=0.0000 and 76.18 vs 133.26 U/mL, p=0.033). There was no significant difference in the SARS-CoV2 Ab titre levels in the group that withheld therapy during the first dose and followed through to the second dose (136.02 vs 133.58, p=0.980) (table 4 and figure 2). Comparison of SARS-CoV-2 vaccination responses within each of the DMARD groups When stratifying by DMARD group, among the participants who continued therapy in the peri-vaccination period there was a statistically significantly lower rate of seroconversion following the first dose only and not the second dose, with the tsDMARD group showing the lowest seroconversion rate among the different DMARDs (p=0.000) (table 5). In the continue DMARD group, when analysing the mean SARS-CoV2 IgG Ab titres, there was a significant difference following the first and second vaccinations among each of the DMARD groups (p=0.000 and p=0.009, respectively) (table 5). In the withhold group, there was no significant difference in the seroconversion rate between each of the DMARD groups following the first and second vaccine doses (p=0.070). In addition, there were also no significant differences in the mean SARS-CoV2 IgG Ab titres following the first and second vaccine doses among the DMARD groups (p=0.110 and p=0.617, respectively) (table 5).
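The group comparisons reported here follow the plan set out under sample size and statistical analysis. As a minimal sketch of that kind of comparison (all counts and titres below are hypothetical illustrations, not trial data):

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: rows are groups, columns are
# seroconverted / not seroconverted after the first dose.
table = [[12, 32],   # continued DMARD therapy
         [45, 5]]    # controls
odds_ratio, p_sero = fisher_exact(table)

# Hypothetical SARS-CoV-2 IgG titres (U/mL), compared with the
# non-parametric Wilcoxon-Mann-Whitney test described above.
titres_continue = [0.4, 1.2, 3.5, 0.9, 6.8, 2.2]
titres_control = [5.1, 9.7, 12.3, 8.8, 15.2, 7.4]
stat, p_titre = mannwhitneyu(titres_continue, titres_control,
                             alternative="two-sided")
print(f"seroconversion: p={p_sero:.4f}; titres: p={p_titre:.4f}")
```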
When comparing the intervention within each of the DMARD classes, withholding tsDMARDs resulted in a significantly higher mean SARS-CoV2 IgG Ab titre following the first and second vaccinations (p=0.000 and p=0.001, respectively), while significance was only reached in the csDMARD group following the first vaccination (p=0.018). There was no difference in vaccine response observed between the groups who withheld or continued bDMARDs (table 6). Comparison of SARS-CoV-2 vaccination responses in DMARD groups compared with controls Compared with the controls, the SARS-CoV-2 IgG seroconversion rates in the continue therapy group were significantly lower in the csDMARD and tsDMARD groups following the first dose (40.91% vs 90.20%, p=0.000 and 19.23% vs 90.20%, p=0.000, respectively), while following the second dose the bDMARD and tsDMARD groups had slightly lower seroconversion rates (88.24% vs 100%, p=0.025 and 88.46% vs 100%, p=0.039, respectively). Of those who mounted a serological response in the withhold group, only the tsDMARD group had a significantly lower seroconversion rate following the first dose (64.3% vs 90.20%, p=0.007), along with a lower mean SARS-CoV-2 IgG Ab titre (3.4 vs 8.91 U/mL, p=0.002). All DMARD groups who withheld therapy after the first dose seroconverted following the second vaccine dose (table 5). For both the csDMARD and bDMARD groups, withholding therapy during the first vaccine dose did not result in a significant difference in the seroconversion rate compared with the control group (76.19% vs 90.20%, p=0.143 and 91.67% vs 90.20%, p=0.603, respectively). Following the second vaccine dose, there was no significant difference in the mean SARS-CoV2 IgG Ab titre in any of the DMARD groups compared with the controls (table 5). Analysis of factors affecting vaccine response We analysed factors that influenced the immunisation response and found that patients receiving the Pfizer vaccine were at substantially higher odds of mounting a protective response compared with patients receiving the AZ vaccine (OR=16.24, 95% CI: 8.18 to 32.25, p=0.000). In addition, withholding DMARD therapy was found to confer higher odds of seroconverting (OR=2.55, 95% CI: 1.33 to 4.90). In contrast, there were slightly reduced odds of seroconversion in those with longer durations between vaccinations (OR=0.952, 95% CI: 0.941 to 0.964, p=0.000) (online supplemental table 1). After controlling for the impact of age, sex, BMI, lymphocyte count, renal impairment, IMID and vaccine type on vaccine response, only vaccine type and withholding DMARD therapy remained significantly associated with protective IgG antibody levels. Side effects and flares following the SARS-CoV-2 vaccination The tolerability of both the Pfizer and AZ vaccines was similar across the IMID and HC (healthy control) groups except for a higher incidence of rash in the group that continued with DMARD therapy (p=0.0478). The most common adverse events in all groups were injection site pain and fatigue. Ten versus six patients (8.3% vs 6.52%) in the withhold and continue therapy groups, respectively, had an IMID flare; however, the difference was not statistically significant (p=0.294) (table 7). DISCUSSION Our study demonstrates that the antibody response following the first dose of the Pfizer and AZ vaccines in the IMID patients who continued with DMARD therapy was delayed and reduced compared with the patients who temporarily suspended DMARD therapy.
Despite this, the mean SARS-CoV2 IgG levels in the AZ group were not significantly different between the patients and controls after the second vaccine dose, irrespective of DMARD hold. In the Pfizer group, however, holding therapy resulted in higher SARS-CoV2 IgG antibody levels than continuing therapy, levels which were comparable to the controls. The seroconversion rates and SARS-CoV-2 IgG Ab levels were higher in the Pfizer than the AZ group across all study arms, suggesting that the Pfizer vaccine is more immunogenic. Receiving a second vaccine dose appears to have an additive effect on cumulative immunogenicity sufficient to mitigate the effect of being on a csDMARD but not a tsDMARD, hence supporting the need for two full vaccinations for efficient vaccination responses. 19 Our study also suggests that DMARD therapy has an immunomodulatory effect on SARS-CoV-2 IgG antibody production, with the most important period following the initial vaccination when naïve T cells are being primed. In the group who continued DMARD therapy, the post first dose seroconversion rate was reduced but then increased significantly after the second vaccination, suggesting a delayed antibody response. Given the recognised patterns of antibody production in COVID-19 infection, it is biologically plausible that interrupting DMARD therapy following the initial vaccination improves the antibody response and the resultant trajectory of antibody production following the second vaccination dose. It is postulated that, if logistically feasible, withholding DMARD therapy following the second vaccination may result in further increases in SARS-CoV-2 IgG antibody levels with the boosting of the already primed T cells; however, this must be weighed against the potential increased risk for disease flare. Alternatively, offering an additional or third vaccine dose as part of an extended primary series may also help patients on DMARDs to achieve a sufficient protective immune response. Drug elimination is dependent on a number of pharmacokinetic parameters, which include age, distribution, renal and hepatic function, genetic variation, smoking, route of administration and half-life. 20 Withholding each drug dependent on these factors would be complicated, and the duration required would be different for each subject and medication. Hence, for practical considerations, the DMARDs were withheld for 1 week after the first vaccine dose, with the exception of MTX, which was withheld for two doses based on data from influenza vaccines. [21][22][23] The risk for potential flares was also considered, and hence therapy was not withheld for longer than 2 weeks nor following the second vaccination. 24 In addition, given the complexity of withholding therapy around each of the two vaccine doses for the Pfizer vaccine, which are spaced 3 weeks apart, the DMARDs were withheld following the first dose only. The withhold regimen was an effective strategy for all DMARD groups, but most so in the group taking tsDMARDs. This could be due to the relatively short half-lives of these medications, being 3.2, 12.5 and 9-14 hours for tofacitinib, baricitinib and upadacitinib, respectively. [25][26][27] JAKs mediate signal transduction for numerous cytokines, including those involved in T-cell activation and proliferation. The JAK-signal transducer and activator of transcription (STAT) pathway is important for both innate and adaptive immunity.
28 STAT2 deficiency has been described to increase susceptibility to viral infections, as STAT2 is required for type 1 interferon signalling, and studies with tofacitinib and baricitinib have shown that these agents diminish the responsiveness to the pneumococcal vaccine. [29][30][31] When tofacitinib therapy was interrupted for 2 weeks before the pneumococcal vaccine, there was no significant difference; however, a higher dose of tofacitinib (10 mg twice daily) was used in that study, suggesting that the most important time to withhold therapy is immediately after vaccination. 29 Our study was not sufficiently powered to test whether there were any differences in SARS-Cov-2 IgG antibody responses with JAK inhibitor selectivity. In contrast to another study, our results support the withholding of JAKi in relation to the COVID-19 vaccination, as recommended by the ACR COVID-19 vaccination guidelines. 32 33 JAK inhibition with baricitinib plus corticosteroids in patients with moderate-to-severe SARS-CoV-2 pneumonia has been associated with greater improvement in pulmonary function compared with corticosteroids alone. 34 A systematic review has also found that the all-cause mortality rate at day 28 was lower among patients receiving JAKi compared with the control group. 35 The discordance between the finding that JAKi reduce SARS-CoV-2 IgG antibody responses and the improved clinical outcomes of patients with COVID-19 could come down to timing. Patients with severe COVID-19 present with an exaggerated immune response characterised by the increased production of interleukin (IL)-6, IL-2, IL-7, IL-10, granulocyte-colony-stimulating factor, interferon-gamma, macrophage inflammatory protein 1α and tumour necrosis factor-α. JAK1 and JAK2 inhibitors may inhibit the signalling of type I interferon, IL-6, interferon-gamma and IL-2, dampening the effects of the immune dysregulation and cytokine storm; however, when given too early they may conversely impair the immune response to SARS-CoV-2. 36 Despite the heterogeneous cohort of immunosuppressive medications used, they can be grouped by similar mechanisms of action, enabling the interrogation of the effects of drug-specific classes on vaccine immunogenicity. Our study showed that seroconversion was lower in patients receiving the AZ vaccine compared with the Pfizer vaccine in both the control and IMID cohorts; however, the average age of the AZ group was higher, given the ATAGI age preference for vaccine eligibility being >60 years old at the time of study enrolment. 15 Immunosenescence, especially in those aged over 70 years, has been shown to result in lower total IgG against the RBD spike protein and lower neutralising antibody titres than in younger subjects. 9 The safety profiles of both the AZ and Pfizer vaccines were reassuring, with most adverse events being mild and temporary, consistent with other studies and registry data. 32 37 38 Withholding the DMARD contemporaneously with the first vaccination did not significantly drive disease flares. The importance of both cellular and humoral immunity for the protection against SARS-CoV-2 infection remains to be fully elucidated. Our study assessed humoral immunity only, using the Siemens electrochemiluminescence immunoassay to measure antibody concentrations rather than a neutralisation assay.
Despite this, the quantitative values of antibodies against the RBD of the S-protein of SARS-Cov-2 have been shown to correlate well with virus neutralisation titres (r=0.843; p<0.0001), with an overall qualitative agreement of 98.5%. 39 When compared head to head with other SARS-CoV-2 assays, only the Siemens and Roche assays achieved a sensitivity of at least 98.1% and a specificity of at least 98% without further optimisation. 40 The Siemens ADVIA Centaur sCOVG assay reports a range of quantification of 0.5-750.0 U/mL, 99.4% specificity and 90.5% sensitivity, while the S1-RBD antibody levels show a good correlation with virus neutralisation titres (r=0.843; p<0.0001). 39 It is still unclear what levels of SARS-CoV-2 IgG antibody, and consequently of neutralising antibody, are required for protection against severe SARS-CoV-2 infection. A predictive model of immune protection has estimated the 50% protective neutralisation level to be approximately 20% of the average convalescent level, while for a 50% level of protection from severe infection, only approximately 3% of the average convalescent level is required. 41 In rhesus macaques, the neutralising antibody titre threshold for full protection against SARS-CoV-2 was approximately 500, while a titre of approximately 50 provided partial protection. 42 Since these titres are readily achievable by vaccination in humans, SARS-CoV-2 IgG titre cut-offs of 7 and 25 U/mL were used as references during the logistic regression analysis for our study, as these equated to neutralising antibody titres of 50 and 500, respectively, based on the strong correlation to viral neutralisation testing as per the graph on page 14 of the Siemens Advia Centaur kit insert. 16 Hence, even with relatively low SARS-CoV-2 IgG levels in the IMID group, which correlate with low neutralising antibody titres, there could still be adequate protection against SARS-CoV-2, suggesting that T-cell immune responses also contribute to protection. 42 Our study had several limitations, one of which was not withholding therapy following the second vaccination. This was mainly because the Pfizer vaccine doses were given 3 weeks apart, and withholding therapies given fortnightly or at longer intervals would not have been possible. In addition, there were concerns for disease flare if therapy was withheld consecutively in a short timeframe, which was an ethical consideration. Despite this, it has been shown that seroconversion rates and respective antibody titres after the second vaccination are not significantly affected by DMARD monotherapy, 19 with the exception of tsDMARDs, as highlighted in our study. Combinations with a csDMARD were possible in the tsDMARD group; however, the majority of patients were on monotherapy (51/60, 85%). Numerically, all the DMARD groups still showed higher responder rates and antibody titres after the second vaccine when withholding; however, this was only statistically significant for tsDMARDs. The sample size was small, and hence a type 2 error cannot be excluded. Although the patients were instructed to withhold therapy following the first vaccine dose only, the possibility remains that in a minority of patients these instructions were not adhered to and therapy was also withheld following the second vaccination. The intention-to-treat principle was applied to the final analysis. In Australia, there had been a recommendation to vaccinate people aged 60 years and over with the AZ vaccine and people aged 16-59 years with the Pfizer vaccine.
15 Given the age disparity, the median age of the AZ vaccine group was higher compared with the Pfizer group, and hence the two groups could not be age-matched. Hence, in addition to age, other factors including disease activity or type were not considered in the main analysis (tables 3 and 4). The majority of patients in this study were also able to afford private healthcare. Medicare-dependent patients may produce different results, dependent on socioeconomic and compliance factors. The blood sampling regimen of our study was quite intensive, with three episodes required. The participants who missed the baseline SARS-CoV-2 IgG test were, however, all from Western Australia and were unlikely to affect the vaccine response analysis, as there were no cases of community transmission during the study period. The anti-nucleocapsid antibody could be tested in such patients for confirmation. In summary, the seroconversion rates with both the AZ and Pfizer vaccines were impaired following the first vaccination; however, this was mitigated by the DMARD hold. Despite this, the mean SARS-CoV2 antibody levels were not affected following the second vaccination regardless of hold, with the exception of the tsDMARD group. This places emphasis on withholding tsDMARD therapy following the first vaccination dose and on the IMID group having at least two vaccinations, preferably with an mRNA vaccine, to ensure adequate immune responses. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Competing interests None declared. Patient consent for publication Not applicable. Ethics approval This study was approved by both the St John of God and St Vincent's Hospital Melbourne Human Research Ethics Committees (HREC approval 1809 and 099/21). All participants provided written informed consent prior to commencement. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available on reasonable request. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
2022-05-18T06:23:38.079Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "8645be1628cf6f99eb2af21480caa4369d5e3b4d", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3ccde5fbc09453cecba228080df1e36832cc34a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233219881
pes2o/s2orc
v3-fos-license
A New Coreset Framework for Clustering Given a metric space, the $(k,z)$-clustering problem consists of finding $k$ centers such that the sum of the distances, raised to the power $z$, of every point to its closest center is minimized. This encapsulates the famous $k$-median ($z=1$) and $k$-means ($z=2$) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of the solutions, also known as \emph{coresets}, has been an important research direction over the last 15 years. In this paper, we present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean space, doubling metrics and minor-free metrics to the general metric case. Center-based clustering problems have become the cornerstones of various data analysis approaches and machine learning techniques (see the formal definition in Section 3). Datasets used in practice are often huge, containing hundreds of millions of points, distributed, or evolving over time. Hence, in these settings classical heuristics (such as Lloyd's or k-means++) are inapplicable; the size of the dataset forbids multiple passes over the input data, and finding a "compact representation" of the input data is of primary importance. The method of choice for this is to compute a coreset, i.e. a weighted set of points of small size that can be used in place of the full input for algorithmic purposes. More formally, for any $\varepsilon > 0$, an $\varepsilon$-coreset (referred to simply as coreset) is a set $Q$ of points of the metric space such that any $\alpha$-approximation to a clustering problem on $Q$ is an $\alpha(1 + \varepsilon)$-approximation to the clustering problem for the original point set. Hence, a small coreset is a good compression of the full input set: one can simply keep in memory a coreset and apply any given algorithm on the coreset rather than on the input, to speed up performance and reduce memory consumption. Coreset constructions have been widely studied over the last 15 years. In this paper, we specifically focus on the $(k, z)$-clustering problem, which encapsulates $k$-median ($z = 1$) and $k$-means ($z = 2$). Given two positive integers $k$ and $z$ and a metric space $(X, \text{dist})$, the $(k, z)$-clustering problem asks for a set $S$ of $k$ points, called centers, that minimizes $$\text{cost}(X, S) := \sum_{x \in X} \min_{s \in S} \text{dist}(x, s)^z.$$ The method of choice for designing coresets is importance sampling, initiated by the seminal work of Chen [Che09]. The basic approach is to devise a non-uniform sampling distribution which picks points proportionally to their cost contribution in an arbitrary constant-factor approximation. In a nutshell, the current best-known analysis shows that, for a given set $S$ of $k$ centers, it happens with high probability that the sampled instance $\Omega$ with appropriate weights has roughly the same cost as the original instance, i.e. $\text{cost}(\Omega, S) \in (1 \pm \varepsilon)\,\text{cost}(X, S)$. Then, to show that the set $\Omega$ is an $\varepsilon$-coreset, it is necessary to take a union bound over these events for all possible sets of $k$ centers. Bounding the size of the union bound is the main hurdle faced by this approach: indeed, there may be infinitely many possible sets of centers. The state-of-the-art analysis relies on VC-dimension to address this issue. Informally, the VC-dimension is a complexity measure of a range space, denoting the cardinality of the largest set such that all of its subsets are induced by the range space.
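To fix ideas, the following is a minimal sketch (our own rendering, not the paper's algorithm) of the basic importance sampling step just described: points are sampled proportionally to their cost in a constant-factor solution A, and reweighted so that the resulting estimator is unbiased for any fixed solution S. The refinements developed in this paper address exactly where this bare version falls short.

```python
import random


def sensitivity_sample(points, dist, A, z, m):
    """Pick m points with probability proportional to their cost in a
    constant-factor solution A, weighted so that for any FIXED set of
    centers S, E[sum of w_p * cost(p, S) over the sample] = cost(X, S).

    This bare version already fails for isolated cheap clusters, which
    is precisely the issue the refined analysis deals with."""
    costs = [min(dist(p, a) for a in A) ** z for p in points]
    total = sum(costs)
    probs = [c / total for c in costs]
    idx = random.choices(range(len(points)), weights=probs, k=m)
    # weight w_p = 1 / (m * Pr[p]) makes the cost estimator unbiased
    return [(points[i], 1.0 / (m * probs[i])) for i in idx]


# Example with points on a line, z = 2 (k-means-style costs):
pts = [0.0, 1.0, 2.0, 10.0, 11.0]
coreset = sensitivity_sample(pts, lambda x, y: abs(x - y), A=[1.0, 10.5], z=2, m=3)
```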
The application to clustering considers weighted range spaces, where each point is weighted by its relative contribution to the cost of a given clustering. 1 In metric spaces where the weighted range space induced by distances to $k$ centers has VC-dimension $D$, it can be shown that taking $O_{\varepsilon,z}(k \cdot D \log k)$ samples yields a coreset [FSS20], although tighter bounds are achievable in certain cases. For instance, in $d$-dimensional Euclidean spaces $D$ is in $O(kd \log k)$ [BLHK17], which would yield coresets of size $O_{\varepsilon,z}(k^2 \cdot d \log^2 k)$, but Huang and Vishnoi [HV20] showed the existence of a coreset with $O(k \cdot \log^2 k \cdot \varepsilon^{-2z-2})$ points. This analysis has proven powerful in various metric spaces, such as doubling spaces by Huang, Jiang, Li and Wu [HJLW18], graphs of bounded treewidth by Baker, Braverman, Huang, Jiang, Krauthgamer and Wu [BBH+20], or the shortest-path metric of a graph excluding a fixed minor by Braverman, Jiang, Krauthgamer and Wu [BJKW21]. However, range spaces of even heavily constrained metrics do not necessarily have small VC-dimension (e.g. bounded doubling dimension does not imply bounded VC-dimension or vice versa [HJLW18,LL06]), and applying previous techniques requires heavy additional machinery to adapt the VC-dimension approach to them. Moreover, the bounds provided are far from the bound obtained for Euclidean spaces: their dependency on $k$ is at least $\Omega(k^2)$, leaving a significant gap to the best lower bounds of $\Omega(k)$. We thus ask: Question. Is it possible to design coresets whose size is near-linear in $k$ for doubling metrics, minor-free metrics and bounded-treewidth metrics? Are the current roadblocks specific to the analysis through VC-dimension, or inherent to the problem? To answer these questions positively, we present a new framework for analysing importance sampling. Its analysis stems from first principles, and it can be applied in a black-box fashion to any metric space that admits an approximate centroid set (see Definition 1) of bounded size. We show that all previously mentioned spaces satisfy this condition, and our construction improves on the best-known coreset size. More precisely, we recover (and improve) all previous results for $(k, z)$-clustering such as Euclidean spaces, $\ell_p$ spaces for $p \in [1, 2)$ and finite $n$-point metrics, while also giving the first coresets with size near-linear in $k$ and $\varepsilon^{-z}$ for a number of other metrics such as doubling spaces, minor-free metrics, and graphs with bounded treewidth. Our Results Our framework requires the existence of a particular discretization of the set of possible centers, as described in the following definition. We show in the later sections that this is indeed the case for all the metric spaces mentioned so far. Definition 1. Let $(X, \text{dist})$ be a metric space, $P \subseteq X$ a set of clients and $k$ and $z$ two positive integers. Let $\varepsilon > 0$ be a precision parameter. Given a set of centers $A$, a set $C$ is an $A$-approximate centroid set for $(k, z)$-clustering on $P$ if it satisfies the following property. For every set of $k$ centers $S \in X^k$, there exists $\tilde{S} \in C^k$ such that for all points $p \in P$ that satisfy either $\text{cost}(p, S) \le \left(\frac{8z}{\varepsilon}\right)^z \text{cost}(p, A)$ or $\text{cost}(p, \tilde{S}) \le \left(\frac{8z}{\varepsilon}\right)^z \text{cost}(p, A)$, it holds that $$|\text{cost}(p, S) - \text{cost}(p, \tilde{S})| \le \frac{\varepsilon}{z \log(z/\varepsilon)} \left(\text{cost}(p, S) + \text{cost}(p, A)\right).$$ This definition is slightly different from Matousek's [Mat00], in that we seek to preserve distances only for interesting points, and we allow an error $\varepsilon\,\text{cost}(p, A)$. This is crucial in some of our applications. Theorem 1.
Let $(X, \text{dist})$ be a metric space, $P \subseteq X$ a set of clients with $n$ distinct points and $k$ and $z$ two positive integers. Let $\varepsilon > 0$ be a precision parameter. Let also $A$ be a constant-factor approximation for $(k, z)$-clustering on $P$. Suppose there exists an $A$-approximate centroid set $C$ for $(k, z)$-clustering on $P$. Then, there exists an algorithm running in time $O(n)$ that constructs, with probability at least $1 - \pi$, a coreset of size $$O\left(2^{O(z \log z)} \cdot \frac{\log^4 (1/\varepsilon)}{\min(\varepsilon^2, \varepsilon^z)} \left(k \log |C| + \log\log(1/\varepsilon) + \log(1/\pi)\right)\right)$$ with positive weights for the $(k, z)$-clustering problem. When applying this theorem to particular metric spaces, the running time is dominated by the construction of the constant-factor approximation $A$, which can be done for instance in $\tilde{O}(k|P|)$ given oracle access to the distances, using [MP04]. 2 In the following bounds, $\Gamma$ denotes the leading factor $2^{O(z \log z)} \cdot \log^4(1/\varepsilon) / \min(\varepsilon^2, \varepsilon^z)$ of Theorem 1. • Since general discrete metric spaces have doubling dimension $O(\log n)$, this yields a coreset of size $O(\Gamma \cdot k \log n)$. This improves on the bound $O(\varepsilon^{-2z}\, k \log k \log n)$ from Feldman and Langberg [FL11]. • $O\left(\Gamma \cdot \left(k \log^2 k + \frac{\log k}{\varepsilon^4}\right)\right)$ for a family of graphs excluding a fixed minor, see Corollary 7. This improves on Braverman et al. [BJKW21], whose coreset has size $O(k^2/\varepsilon^4)$. • $O\left(\Gamma \cdot \left(k \log^2 k + \frac{\log k}{\varepsilon^3}\right)\right)$ for planar graphs, a particular family excluding a fixed minor for which we can save a $1/\varepsilon$ factor and present a simpler, instructive proof. We note the lower bound $\Omega\left(\frac{k \log n}{\varepsilon}\right)$ for $k$-median in general metric spaces from [BBH+20]. This means that in the case of metrics with doubling dimension $d$, our bounds are optimal up to a $\text{poly}\log(1/\varepsilon)/\varepsilon$ factor. For graphs with treewidth $t$, another lower bound of $\Omega\left(\frac{kt}{\varepsilon}\right)$ from [BBH+20] shows that our bounds are optimal up to the same factor. Overview of Our Techniques Our proof is arguably from first principles. We now give a quick overview of its ingredients. The approach consists in first reducing to a well-structured instance, consisting of a set of centers $A$ inducing $k$ clusters, all having roughly the same costs, and where every point is at the same distance from $A$, up to a factor 2. Then we show it is enough to perform importance sampling on all these clusters. More precisely, the points are partitioned into groups $G$ satisfying the following properties: • In every group $G$, no cluster contributes much less than the average; i.e. $\forall C_i$, $\text{cost}(C_i \cap G, A) \ge \frac{\text{cost}(G, A)}{2k}$. • In every cluster $C_i$, there exists $r_{G,i}$ such that the points in the intersection of the cluster with the group cost $r_{G,i}$ (up to constant factors), i.e. $\forall p \in C_i \cap G$, $\text{cost}(p, A) = \Theta(r_{G,i})$. We then compute coresets for each group and output the union. In some sense, this preprocessing step identifies canonical instances for coresets; any algorithm that produces improved coresets for instances satisfying the aforementioned regularity conditions can be combined with our preprocessing steps to produce improved coresets in general. Importance Sampling in Groups. The first technical challenge is to analyse the importance sampling procedure for structured instances. The arguably simplest way to analyse importance sampling is by first showing that, for any fixed solution $S$, a set $\Omega$ of $\delta$ samples satisfies $$\text{cost}(\Omega, S) \in (1 \pm \varepsilon)\,\text{cost}(X, S) \quad (1)$$ with good enough probability, and then applying a union bound over the validity of Eq. (1) for all solutions $S$. This union bound is typically achieved via the VC-dimension. Using this simple estimator, most analyses of importance sampling procedures require a sample size of at least $k$ points to approximate the cost of a single given solution. To illustrate this, consider an instance where a single cluster $C$ is isolated from all the others.
Clearly, if we do not place a center close to C, the cost will be extremely large, requiring some point of C to be contained in the sample. One way to remedy this is by picking a point p with probability proportional to cost(p, A)/cost(A) + 1/|C_i| rather than cost(p, A)/cost(A), where C_i is the cluster to which p is assigned; see for instance [FSS20]. This analysis always leads to a coreset of size at best quadratic in k. Our analysis of importance sampling for structured instances will allow us to bypass both the quadratic dependency on k and the need for a bound on the VC-dimension of the range space.

Our high-level idea is to use two union bounds. The first one will deal with clusters that are very expensive compared to their cost in A. The second one will focus on solutions in which clusters have roughly the same cost as they do in A. For the former case, we observe that if a cluster C_i is served by a center in solution S that is very far away, then we can easily bound its cost in S, as long as our sample approximates the size of every cluster. Specifically, assume that there exists a point p in C_i with distance to S at least Ω(1) · ε^{−1} · dist(p, c_i). Then, since we are working with structured instances, all points of C_i are roughly at the same distance from c_i, and this distance is negligible compared to dist(p, S); hence all points of C_i are nearly at the same distance from S. Conditioned on the event E that the sample Ω preserves the size of all clusters, the cost of C_i in solution S is preserved as well. Note that this event E is independent of the solution S, and thus we require no enumeration of solutions to preserve the cost of expensive clusters. Proving that E holds is a straightforward application of concentration bounds.

The second observation is that points with dist(p, S) ≤ (ε/z) · dist(p, A) are so cheap that their cost is preserved by the sampling with an error of at most ε · cost(A). Indeed, their cost in S cannot be more than ε · cost(A), and it is easy to show that the same bound holds for the coreset.

The intermediate cases, i.e. solutions in which S serves clusters at distances further than (ε/z) · dist(p, A), but not so far that event E alone bounds the cost, are the hardest part of the analysis. Using a geometric series, we can split the cost range into O(z log ε^{−1}) groups by powers of two. Since we work with a structured instance, the points within such a group have equal distances, up to constant factors. This also implies that the costs within such a group are equal, up to a factor of 2^{O(z)}. The overall variance of the cost estimator is then of the order ε^{−z} · cost(A) · cost(S). Thus, standard concentration bounds give an additive error of ε · (cost(A) + cost(S)) with at most O(ε^{−2−z}) samples for every group.

To improve this to O(ε^{−z}), we use a different estimator, defined as follows. For every cluster C_i, let q_i be the point of C_i that is closest to S. We then estimate separately the differences cost(p, S) − cost(q_i, S) over the sampled points (Equation 2) and the term accounting for Σ_i |C_i| · cost(q_i, S) (Equation 3). Conditioned on event E, the estimator in Equation 3 is always concentrated around its expectation, as cost(q_i, S) is fixed for S. The first estimator, in Equation 2, now has a reduced variance. Specifically, at the border case of points at distance Θ(1/ε) · dist(p, A) from S, Estimator 2 has variance at most O(1) · max(ε^{−2}, ε^{−z}) · cost(A) · cost(S), which ultimately allows us to show that O(ε^{−2} + ε^{−z}) samples are enough to achieve an additive error of ε · (cost(S) + cost(A)). This technique is somewhat related to (and inspired by) chaining arguments (see e.g. Talagrand [T+96] for more on chaining).
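The variance gain from subtracting cost(q_i, S) can be observed numerically. The following toy Monte-Carlo experiment (our own illustration; the cluster data and parameters are made up, and the sampling distribution is the simple cost-proportional one) compares the empirical variance of the plain estimator Σ f(p) · cost(p, S) with that of the shifted estimator Σ f(p) · (cost(p, S) − cost(q_i, S)) on a single structured cluster:

```python
import random

def demo(num_trials=2000, n=1000, delta=20, z=2):
    # One structured cluster: all points at distance ~1 from their center in A,
    # while the solution S sits at distance ~10 from the cluster.
    pts = [1.0 + 0.5 * random.random() for _ in range(n)]   # dist(p, A)
    cost_S = [(10.0 + d) ** z for d in pts]                 # cost(p, S)
    q_cost = min(cost_S)                                    # cost(q_i, S)
    total_A = sum(d ** z for d in pts)                      # cost(A)

    plain, shifted = [], []
    for _ in range(num_trials):
        # Importance sampling: point p drawn with prob. cost(p, A)/cost(A).
        sample = random.choices(range(n), weights=[d ** z for d in pts], k=delta)
        f = lambda i: total_A / (delta * pts[i] ** z)       # inverse-probability weight
        plain.append(sum(f(i) * cost_S[i] for i in sample))
        # The shifted estimator targets the (small) differences; the missing
        # term n * q_cost is recovered deterministically via event E.
        shifted.append(sum(f(i) * (cost_S[i] - q_cost) for i in sample))

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    print("plain estimator variance:  ", var(plain))
    print("shifted estimator variance:", var(shifted))

demo()
```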
The key difference is that, while chaining is generally applied to improve over basic union bounds, our estimator is designed to reduce the variance.

Preserving the Cost of Points not in Well-Structured Groups. Unfortunately, it is not possible to decompose the entire point set into groups. Given an initial solution A and a cluster C ∈ A with center c, this is possible for all the points at distance at most ε^{−O(z)} · cost(C, c)/|C|. The remaining points are both far from their respective center in A and, by Markov's inequality, only a small fraction of the point set. In the following, let P_far denote these points. For any given subset of these far-away points and a candidate solution S, we now use that either the points pay at most what they do in A, or an increase in their cost significantly increases the overall cost. In the former case, standard sensitivity sampling preserves the cost with a very small sample size. In the latter case, a significant cost incurred by a point p in P_far also implies that all points close to the center c serving p in A must significantly increase their cost.

A Union Bound to Preserve all Solutions. As pictured in the previous paragraphs, the cost of points with either very small or very large distance to S is preserved for any solution S with high probability. The guarantee we have for interesting points is weaker: their cost is preserved by the coreset with high probability for any fixed solution S. Hence, for this to hold for every solution, we need to take a union bound over the probability of failure for all possible solutions S. However, the union bound is necessary only for these interesting points: this explains the introduction of the approximate centroid set in Definition 1. Assuming the existence of a set C as in Definition 1, one can take a union bound over the failure of the construction for all sets of k centers in C^k, ensuring that the cost of interesting points is preserved for all these solutions. To extend this result to any solution S, one can take the k-tuple S̃ ∈ C^k that best approximates S, and relate the cost of interesting points in S to their cost in S̃ with a tiny error. Since the cost of interesting points in S̃ is preserved in the coreset, the cost of these points in S is preserved as well.

We briefly sketch now how to get approximate centroid sets for specific metrics. We are looking for a set C with the following property: for every solution S, there exists a k-tuple S̃ ∈ C^k such that for every point p with dist(p, S) ≤ ε^{−1} · dist(p, A) in a given cluster C of A,

|cost(p, S) − cost(p, S̃)| ≤ ε · (cost(p, A) + cost(p, S)).

We call such points interesting.

Metrics with doubling dimension d: C is constructed simply by taking nets around each input point. A γ-net of a metric space is a set of points that are at distance at least γ from each other, and such that each point of the metric is at distance at most γ from the net. The existence of γ-nets of small size is one of the key properties of doubling metrics (see Lemma 19). For every point p, C contains an ε · cost(p, A)-net of the points at distance at most (8z/ε) · cost(p, A) from p. If p is an interesting point, there is therefore a center of C close to its center in S. However, this only shows that the centers from the solution S̃ ∈ C^k are closer than those of S. Showing that none gets too close is a different ballgame. We will see two ways of achieving it.
The first one, which we apply in the doubling, treewidth and planar cases, is based on the following observation: if a center s ∈ S is replaced by a center s̃ that is much closer to a point p than s, then s could have been discarded in the first place and replaced by the center serving p. This is formalized in Lemma 18. The other way of ensuring that no center from S̃ gets too close to a point p is based on guessing distances from points in S to input points. It can be applied more broadly than Lemma 18, but yields larger centroid sets. We will use it only for minor-excluded graphs, for which Lemma 18 cannot be applied.

Graphs with treewidth t: The construction of C is not as easy in graph metrics: we use the existence of small-size separators, building on ideas from Baker et al. [BBH+20]. Fix a solution S, and suppose that all interesting points are in a region R of the graph, such that the boundary B of R consists of a constant number of vertices. Fix a center c ∈ S, and suppose c is not in R. Then, to preserve the cost of interesting points, it is enough to have a center c̃ at the same distance to all points of the boundary B as c. C is therefore constructed as follows: for a point p, its distance tuple to B = {b_1, ..., b_{|B|}} is the tuple (d_1, ..., d_{|B|}), where d_i = dist(p, b_i) is the distance to b_i. For every distance tuple to B, C contains one point having approximately that distance tuple to B. Let c̃ be the point of C having approximately the same distance tuple to B as c: this ensures that ∀p, cost(p, c) ≈ cost(p, c̃). It is however necessary to limit the size of C. For that, we approximate the distances to B. This can be done for interesting points p as follows: since we have dist(p, c) ≤ ε^{−1} · dist(p, A), rounding the distances to their closest multiple of ε · dist(p, A) ensures that there are only O(1/ε^2) possibilities, and adds an error ε · cost(p, A). We show in Section 9 how to make this argument formal, and how to remove the assumption that all interesting points are in the same region.

Minor-excluded graphs: this class of graphs, which includes planar graphs, also admits small-size shortest-path separators. A construction similar in spirit to the one for treewidth is therefore possible, as presented in Section 11. This builds on the work of Braverman et al. [BJKW21]. However, due to the nature of the separators (which are small sets of paths, and not simply small sets of vertices), one cannot apply the idea of Lemma 18 to show that no center gets too close. Instead, we guess the distance from input points to any point in S, allowing us to construct S̃ with the same distances. Of course, as stated this idea requires far too many guesses to yield a small set C: we show in Section 11 how to make it work properly. We start that section by showing two preprocessing lemmas: the first one is Lemma 18, as described above. The second one allows us to apply Theorem 1 in the case where the input set is weighted, so that we can assume the input has only poly(k, ε^{−1}) many distinct points, by first computing a non-optimal coreset.

Roadmap

The paper is organized as follows: after defining the concepts used in the paper, we present the algorithm formally in Section 4. We then describe the construction of a coreset for a structured instance in Section 5, and the reduction to such an instance in Section 7. Finally, we show the existence of approximate centroid sets in various metric spaces in Section 8.
We furthermore explain the dimension reduction technique leading to our result for Euclidean spaces in Section 12, and the O(k^2 ε^{−2}) construction in Appendix B. A deeper description of related work is given in Section 2.

Related Work

We already surveyed most of the relevant bounds for coresets for k-means and k-median. A complete overview of all of these bounds is given in Table 1; further pointers to the coreset literature can be found in surveys [MS18]. For the remainder of the section, we highlight differences to previous techniques.

The early coreset results mainly considered input data embedded in constant-dimensional Euclidean spaces [FS05, HK07, HM01]. These coresets relied on low-dimensional geometric decompositions, inducing coresets of size typically of order at least k · ε^{−d}. These techniques were replaced by importance sampling schemes, initiated by the seminal work of Chen [Che09]. The basic approach is to devise a non-uniform sampling distribution which picks points proportionately to their impact on a given constant-factor approximation. A significant advantage of importance sampling over other techniques is that it generalizes to non-Euclidean metrics. While the early coreset papers [HK07, HM04] were indeed heavily reliant on the structure of Euclidean spaces, Chen gave the first coreset of size O(k^2 ε^{−2} log^2 n) for general n-point metrics.

Coresets via Bounded VC-Dimension. The state-of-the-art importance sampling techniques in Euclidean spaces are based on reducing the problem of constructing a coreset to constructing an ε-net in a range space of bounded VC-dimension. Li, Long and Srinivasan [LLS01] showed that if the VC-dimension is bounded by D, an ε-approximation of size O(D/ε^2) exists. The remarkable aspect of these bounds is that they are independent of the number of input points. To apply the reduction, we need a bound on the VC-dimension of the range space induced by the intersection of metric balls centered around k points in a d-dimensional Euclidean space. For Euclidean k-means and k-median, an upper bound of D ∈ O(kd log k) is implicit in the work of [BEHW89] and Eisenstat and Angluin [EA07]. This bound was recently shown to be tight by Csikos, Mustafa and Kupavskii [CMK19]. The dependency on d may be replaced with a dependency on log k, as explained in more detail in Section 12. Thus, O(k log^2 k) is a natural barrier for known techniques in Euclidean spaces.

VC-Dimension and Doubling Dimension. A further complication arises when attempting to extend sampling techniques for bounded VC-dimension to range spaces of bounded doubling dimension d. While the two notions share certain similarities and are asymptotically identical for the range space induced by the intersection of balls in Euclidean spaces, the two quantities are incomparable in general. For instance, Li and Long proved the existence of a range space with constant VC-dimension and unbounded doubling dimension [LL06]. Conversely, [HJLW18] showed that a bound on the doubling dimension does not imply a bound on the VC-dimension either. Nevertheless, by carefully distorting the metric, they were able to prove that a related quantity known as the shattering dimension can be bounded, yielding the first coresets for bounded doubling dimension independent of n. Even so, their bound Õ(k^3 d ε^{−2}) is still far from what is currently achievable in Euclidean spaces. Similarly, the construction from [BBH+20] for graphs with bounded treewidth uses that a graph of treewidth t has shattering dimension O(t).
They use this fact to get a coreset for k-Median of size Õ(k^3 t/ε^2). For excluded-minor graphs, [BJKW21] proceeds similarly, but needs an additional iterative procedure: they first show that in an excluded-minor graph, a subset X of the vertices admits a coreset of size O_{k,ε}(log |X|), using the shattering-dimension techniques. They then show how to iterate this construction (using that "a coreset of a coreset is a coreset") to remove the dependency on |X|. This iterative procedure is of independent interest, and we use it as well in the bounded-treewidth and excluded-minor settings.

Further Related Work. So far we only described works that aim at giving better coreset constructions for unconstrained k-median and k-means in some metric space. Nevertheless, there is a rich literature on further related questions. As a tool for data compression, coresets feature heavily in the streaming literature. Some papers consider a slightly weaker guarantee of summarizing the data set such that a (1 + ε)-approximation can be maintained and extracted. Such notions are often referred to as weak coresets or streaming coresets, see [FL11, FMS07]. Further papers focus on maintaining coresets with little overhead in various streaming and distributed models, see [BEL13, BFLR19, BFL+17, FS05, FGS+13]. Other related work considers generalizations of k-median and k-means, by either adding capacity constraints [CL19, HJV19, SSS19] or considering more general objective functions [BLL18, BJKW19]. Coresets have also been studied for many other problems: we cite, non-comprehensively, Determinant Maximization [IMGR20], Diversity Maximization [CPP18, IMMM14], logistic regression [HCB16, MSSW18], dependency networks [MMK18], and low-rank approximation [MJF19].

Problem Definitions

Given an ambient metric space (X, dist), a set of points P ⊆ X called clients, and positive integers k and z, the goal of the (k, z)-clustering problem is to output a set S of k centers (or facilities) chosen in X that minimizes

Σ_{p ∈ P} min_{c ∈ S} dist(p, c)^z.

Definition 2. An ε-coreset for the (k, z)-clustering problem in a metric space (X, dist) is a weighted subset Ω of X with weights w : Ω → R_+ such that, for any set S ⊂ X with |S| = k,

Σ_{p ∈ Ω} w(p) · min_{c ∈ S} dist(p, c)^z = (1 ± ε) · Σ_{p ∈ P} min_{c ∈ S} dist(p, c)^z.

Given a set of points P with weights w : P → R_+ in a metric space I = (X, dist) and a solution S, we define cost(P, S) := Σ_{p ∈ P} w(p) · cost(p, S) and, in the case where P contains all the points of the metric space, we define cost(S) := cost(P, S). We will also make use of the following lemma, giving a weaker version of the triangle inequality for k-Means and more general powers of distances. Proofs of this lemma (and variants thereof) can be found in [BBC+19, CS17, FSS20, MMR19, SW18]. For completeness, we provide a proof in the appendix.

Lemma 1 (Triangle Inequality for Powers). Let a, b, c be an arbitrary set of points in a metric space with distance function dist, and let z be a positive integer. Then for any ε > 0,

dist(a, b)^z ≤ (1 + ε)^{z−1} · dist(a, c)^z + ((1 + ε)/ε)^{z−1} · dist(c, b)^z.
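As a quick sanity check of Lemma 1 (our own illustration, not part of the paper), the following snippet verifies the inequality on random triples of points on the real line:

```python
import random

def holds(a, b, c, z, eps):
    """Check dist(a,b)^z <= (1+eps)^(z-1) dist(a,c)^z + ((1+eps)/eps)^(z-1) dist(c,b)^z."""
    lhs = abs(a - b) ** z
    rhs = (1 + eps) ** (z - 1) * abs(a - c) ** z \
        + ((1 + eps) / eps) ** (z - 1) * abs(c - b) ** z
    return lhs <= rhs + 1e-9  # small tolerance for floating point

random.seed(0)
assert all(
    holds(random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5),
          z=random.randint(1, 5), eps=random.choice([0.1, 0.5, 1.0]))
    for _ in range(10_000)
)
print("Lemma 1 verified on 10,000 random instances")
```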
From Weighted to Unweighted Inputs

We start by showing a simple reduction from weighted to unweighted inputs. Essentially, we convert a point with weight w into w copies of the point.

Corollary 2. Let ε, π > 0. Let (X, dist) be a metric space, P a set of clients with weights w : P → R_+, and k and z two positive integers. Let also A be a constant-factor approximation for (k, z)-clustering on P with weights, and suppose there exists an A-approximate centroid set, denoted C. Then, there exists an algorithm running in time O(|P|) that constructs, with probability at least 1 − π, a positively-weighted coreset for the (k, z)-clustering problem on P with weights, whose size matches the bound of Theorem 1.

Proof. We start by making all weights integers: let w_min = min_{p ∈ P} w(p), and w̃(p) = ⌈2w(p)/(ε · w_min)⌉. This definition ensures that

w(p) ≤ (ε · w_min/2) · w̃(p) ≤ (1 + ε/2) · w(p).

We denote by P̃ the set of points P with weights w̃. First, we note that for any solution S,

|cost(P, S) − (ε · w_min/2) · cost(P̃, S)| ≤ (ε/2) · cost(P, S).

Hence, it is enough to find an ε/2-coreset for P̃, and then scale the weights of the coreset points by ε · w_min/2. The weights in P̃ are integers: a weighted point can therefore be considered as multiple copies of the same point. By the previous equation, A is a constant-factor approximation for P̃ as well. The definition of a centroid set does not depend on the weights, so C is an A-approximate centroid set for P̃ as well. Hence, we can apply Theorem 1 on P̃ and scale the resulting coreset weights by ε · w_min/2 to conclude the proof.

3.3 Partitioning an Instance into Groups: Definitions

As sketched, the algorithm partitions the input points into structured groups. We give here the useful definitions. Fix a metric space I = (X, dist), positive integers k and z, and a set of clients P. For a solution S of (k, z)-clustering on P and a center c ∈ S, c's cluster consists of all points closer to c than to any other center of S. Fix as well some ε > 0, and let A be any solution for (k, z)-clustering on P with k centers. Let C_1, ..., C_k be the clusters induced by the centers of A.

• The average cost of a cluster C_i is Δ_{C_i} := cost(C_i, A)/|C_i|.
• For all i, j, the ring R_{i,j} is the set of points p ∈ C_i such that 2^j · Δ_{C_i} ≤ cost(p, A) ≤ 2^{j+1} · Δ_{C_i}.
• The inner ring R_I(C_i) := ∪_{j ≤ 2z log(ε/z)} R_{i,j} (resp. outer ring R_O(C_i) := ∪_{j > 2z log(z/ε)} R_{i,j}) of a cluster C_i consists of the points of C_i with cost at most (ε/z)^{2z} · Δ_{C_i} (resp. at least (z/ε)^{2z} · Δ_{C_i}). The main ring R_M(C_i) consists of all the other points of C_i. For a solution S, we let R_I^S and R_O^S be the union of inner and outer rings of the clusters induced by S.
• For each j, R_j is defined to be ∪_{i=1}^k R_{i,j}.
• For each j, the rings R_{i,j} are gathered into groups G_{j,b} of comparable cost: G_{j,b} consists of the rings R_{i,j} whose cost cost(R_{i,j}, A) is roughly a 2^b · (ε/(4z))^z/k fraction of cost(R_j, A).
• For any j, let G_{j,min} := ∪_{b ≤ 0} G_{j,b} be the union of the cheapest groups, and G_{j,max} := ∪_{b ≥ z log(4z/ε)} G_{j,b} the union of the most expensive ones. The set of interesting groups is made of G_{j,min}, G_{j,max}, and G_{j,b} for all 0 < b < z log(4z/ε).
• The set of outer rings is also partitioned into outer groups: G^O_b gathers the outer rings R_O(C_i) of comparable cost, with G^O_min and G^O_max collecting respectively the cheapest ones and the most expensive ones, for b up to z log(4z/ε).

Intuitively, grouping points in this way is helpful, as all points in the same ring can pay the same additive error. Since there are very few groups, it turns out to be possible to construct a coreset for each group, and then take the union of the groups' coresets. This is essentially the algorithm we propose (a code sketch of the partitioning is given at the end of this section). We note a few facts about the partitioning:

Fact 1. There exist at most O(z log(z/ε)) many non-empty R_j that are not in some inner or outer ring, i.e., not in R_I^A nor in R_O^A.

Hence, the number of different non-empty groups is bounded as well:

Fact 2. There exist at most O(z^2 log^2(z/ε)) many interesting G_{j,b}.

This is simply due to the fact that j can only take interesting values between 2z log(ε/z) and 2z log(z/ε), and interesting b between 0 and z log(4z/ε). By the definition of the outer groups, we also have:

Fact 3. There exist at most O(z log(z/ε)) many interesting outer groups.

For simplicity, we will drop the mention "interesting": when considering any group, it will implicitly be an interesting group.
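The ring decomposition above is straightforward to implement. The following sketch (our own, with hypothetical helper names; it assumes each point is given with its cost in A, grouped by cluster) computes the ring index of every point and buckets the rings of a fixed j into groups by cost:

```python
import math
from collections import defaultdict

def partition_into_rings(clusters, z, eps):
    """clusters: list of lists of costs cost(p, A), one list per cluster C_i.
    Returns rings[(i, j)] = costs of points of C_i in [2^j, 2^{j+1}) * Delta_i,
    with j clipped at the inner/outer thresholds of the definition above."""
    j_low = math.floor(2 * z * math.log2(eps / z))   # inner-ring threshold
    j_high = math.ceil(2 * z * math.log2(z / eps))   # outer-ring threshold
    rings = defaultdict(list)
    for i, costs in enumerate(clusters):
        avg = sum(costs) / len(costs)                # Delta_{C_i}
        for c in costs:
            j = math.floor(math.log2(c / avg)) if c > 0 else j_low
            rings[(i, max(j_low, min(j, j_high + 1)))].append(c)
    return rings, j_low, j_high

def group_rings(rings, j, k, z, eps):
    """Bucket the rings R_{i,j} of a fixed j into groups G_{j,b} by cost."""
    total = sum(sum(r) for (i, jj), r in rings.items() if jj == j)  # cost(R_j, A)
    base = (eps / (4 * z)) ** z * total / k          # cheapest interesting scale
    groups = defaultdict(list)
    for (i, jj), r in rings.items():
        if jj == j and r:
            b = math.floor(math.log2(max(sum(r) / base, 0.5)))
            groups[min(b, math.ceil(z * math.log2(4 * z / eps)))].append((i, r))
    return groups
```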
The Coreset Construction Algorithm, and Proof of Theorem 1

4.1 The algorithm

For an initial metric space (X, dist), set of clients P and ε > 0, our algorithm essentially consists of the following steps: given a solution A, it preprocesses the input in order to reduce the number of different groups. Then, the algorithm computes a coreset of the points inside each group using the following GroupSample procedure. The final coreset is made of the union of the coresets for all groups.

The GroupSample procedure takes as input a group of points G as defined in Section 3.3, a set of centers A inducing clusters C̃_1, C̃_2, ..., C̃_k on G, and an integer δ. Note, importantly, that by definition the clusters C̃_i are made only of points from the group G. The output of GroupSample is a set of weighted points, computed as follows: a point p ∈ C̃_i is sampled with probability δ · cost(C̃_i, A)/(|C̃_i| · cost(G, A)), and the weight of any sampled point is set to |C̃_i| · cost(G, A)/(δ · cost(C̃_i, A)). The properties of the GroupSample procedure are captured by the following lemma.

Lemma 2. Let (X, dist) be a metric space, k, z be two positive integers, G be a group of clients and A be a solution to (k, z)-clustering on G with k centers such that:
• for every cluster C̃ induced by A on G, all points of C̃ have the same cost in A, up to a factor 2: ∀p, q ∈ C̃, cost(p, A) ≤ 2 · cost(q, A);
• for every cluster C̃ induced by A on G, it holds that cost(C̃, A) ≥ cost(G, A)/(2k).
Let C be an A-approximate centroid set for (k, z)-clustering on G. Then, there exists an algorithm GroupSample, running in time O(|G|), that constructs a set Ω of size δ such that, with probability 1 − exp(k log |C| − min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))), it holds for all sets S of k centers that

|cost(G, S) − cost(Ω, S)| ≤ ε · (cost(G, A) + cost(G, S)).

We further require the SensitivitySample procedure, which we will apply to some of the points not considered by the calls to GroupSample. From a group G, this procedure merely picks δ points p with probability cost(p, A)/cost(G, A); each of the δ sampled points is given weight cost(G, A)/(δ · cost(p, A)). The key property of SensitivitySample is given in the following lemma.

Lemma 3. Let (X, dist) be a metric space, k, z be two positive integers, P be a set of clients and A be a c_A-approximate solution to (k, z)-clustering on P. Then, there exists an algorithm SensitivitySample, running in time O(|G|), that constructs a set Ω of size δ such that, with probability 1 − exp(k log |C| − ε^2 · δ/(2^{O(z log z)} · log^2(1/ε))), it holds for all sets S of k centers that

|cost(G, S) − cost(Ω, S)| ≤ ε · (cost(S) + cost(A)).

An interesting feature of Lemma 3 is that the probability does not depend on ε^{−z}, as it does in Lemma 2. Using the two algorithms GroupSample and SensitivitySample, we can formally present the whole algorithm:

Input: A metric space (X, dist), a set P ⊆ X, k, z > 0, a solution A to (k, z)-clustering on P, and ε such that 0 < ε < 1/3.
Output: A coreset, namely a set of points Ω ⊆ P ∪ A and a weight function w : Ω → R_+ such that for any set of k centers S, cost(P, S) = (1 ± ε) · cost(Ω, S).

1. Set the weights of all the centers of A to 0.
2. Partition the remaining instance into groups:
(a) For each cluster C of A with center c, remove R_I(C) and increase the weight of c by |R_I(C)|.
(b) For each cluster C of A with center c, also discard all of C ∩ ∪_j G_{j,min} and R_O(C) ∩ G^O_min, and increase the weight of c by the number of points discarded from C.
(c) Let D be the set of points discarded in those steps, and P_1 be the weighted set of centers that have positive weight.
3. Sampling from well-structured groups: for every j such that 2z log(ε/z) ≤ j ≤ 2z log(z/ε) and every group G_{j,b} ∉ G_{j,min}, compute a coreset Ω_{j,b} of size δ := (2^{O(z log z)} · log^2(1/ε)/min(ε^2, ε^z)) · (k log |C| + log log(1/ε) + log(1/π)) using the GroupSample procedure.
4. Sampling from the outer rings: for every outer group, compute a coreset Ω_O of size δ using the SensitivitySample procedure.
5. Output: the points P_1 ∪ (∪_{j,b} Ω_{j,b}) ∪ Ω_O, with the weights for P_1 defined throughout the algorithm, the weights for Ω_{j,b} defined by the GroupSample procedure, and the weights for Ω_O defined by the SensitivitySample procedure.

Remark 1. Instead of using the GroupSample procedure, one could use any coreset construction tailored to the well-structured groups. Improving on that step would improve the final coreset bound: if the size of the coreset produced for a group is T, then the total coreset has size O(z^2 log^2(z/ε) · T + k).

Proof of Theorem 1

As we prove in Section 7, the outcome of the partitioning step, D and P_1, satisfies the following lemma, which deals with the inner rings and the groups G_{j,min} and G^O_min:

Lemma 4. Let (X, dist) be a metric space with a set of clients P, let k, z be two positive integers, and ε ∈ R*_+. For every solution S, it holds that

|cost(D, S) − cost(P_1, S)| ≤ ε · (cost(S) + cost(A)),

where D and P_1 are defined in Step 2 of the algorithm.

Combining the properties of the partitioning, Lemma 2, Lemma 3 and Lemma 4 allows us to prove Theorem 1:

Proof of Theorem 1. Let Ω be the output of the algorithm described above, with δ as defined in Step 3. Due to Fact 2 and Fact 3, Ω has size O(z^2 log^2(z/ε) · δ + |A|), and it has non-negative weights by construction. We now turn to analysing the quality of the coreset. Any group G_{j,b} with b > 0 satisfies the conditions of Lemma 2: by construction of the groups, the costs of all clusters induced by A on G_{j,b} are equal up to a factor 2, hence every such cluster C̃_i satisfies cost(C̃_i, A) ≥ cost(G_{j,b}, A)/(2k). Hence, Lemma 2 ensures that, with probability 1 − exp(k log |C| − min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))), the cost of G_{j,b} is preserved up to an additive ε · (cost(G_{j,b}, A) + cost(G_{j,b}, S)). Similarly, Lemma 3 ensures that, with the probability stated there, the cost of each outer group is preserved. Taking a union bound over the failure probabilities of Lemma 3 and of Lemma 2 applied to all groups G_{j,b} with 2z log(ε/z) ≤ j ≤ 2z log(z/ε) and to all outer groups implies that, with the desired probability, for every solution S,

|cost(P, S) − cost(Ω, S)| ≤ O(ε) · (cost(S) + cost(A)) ≤ O(ε) · cost(S),

where the penultimate inequality uses Lemma 4, and the last one that A is a constant-factor approximation. For δ = (2^{O(z log z)} · log^2(1/ε)/min(ε^2, ε^z)) · (k log |C| + log log(1/ε) + log(1/π)), the failure probability is at most π.

The complexity of this algorithm is:
• O(n) to compute the groups: given all distances from each client to its center, computing the average cost of all clusters costs O(n), hence partitioning into the R_j costs O(n) as well, and decomposing each R_j into groups is also done in O(n) time;
• plus the cost of computing the coresets in the groups, which is O(n) in total.
Hence, the total complexity is O(n).

Sampling inside Groups: Proof of Lemma 2

The goal of this section is to prove Lemma 2: let (X, dist) be a metric space, k, z two positive integers, G a group of clients and A a solution to (k, z)-clustering on G with k centers such that:
• for every cluster C̃ induced by A on G, all points of C̃ have the same cost in A, up to a factor 2: ∀p, q ∈ C̃, cost(p, A) ≤ 2 · cost(q, A);
• for every cluster C̃ induced by A on G, it holds that cost(C̃, A) ≥ cost(G, A)/(2k).
Let C be an A-approximate centroid set for (k, z)-clustering on G. Then GroupSample, running in time O(|G|), constructs a set Ω of size δ such that, with probability 1 − exp(k log |C| − min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))), it holds for all sets S of k centers that |cost(G, S) − cost(Ω, S)| ≤ ε · (cost(G, A) + cost(G, S)).

Description of the GroupSample Algorithm

GroupSample merely consists of importance sampling in rounds: there are δ rounds, in each of which one point of G is sampled. Let C̃_1, C̃_2, ... be the clusters induced by A on G: in each round, the probability of sampling point p ∈ C̃_i is cost(C̃_i, A)/(|C̃_i| · cost(G, A)) (recall that the clusters C̃_i contain only points from the group G), and the weight of any sampled point is set to |C̃_i| · cost(G, A)/(δ · cost(C̃_i, A)). If there are m copies of a point, it is sampled in a round with probability m · cost(C̃_i, A)/(|C̃_i| · cost(G, A)), which is equivalent to sampling each copy with probability cost(C̃_i, A)/(|C̃_i| · cost(G, A)).
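For concreteness, here is a minimal sketch of the two sampling procedures (our own rendering of the description above; the data-structure choices and names are ours). Points carry their cluster index and their cost in A, and weights follow the rescaling described above:

```python
import random

def group_sample(points, delta):
    """points: list of (cluster_id, cost_in_A). delta rounds of importance
    sampling: p in cluster C_i is drawn w.p. cost(C_i,A) / (|C_i| * cost(G,A))."""
    cluster_cost, cluster_size = {}, {}
    for cid, c in points:
        cluster_cost[cid] = cluster_cost.get(cid, 0.0) + c
        cluster_size[cid] = cluster_size.get(cid, 0) + 1
    total = sum(cluster_cost.values())               # cost(G, A)
    probs = [cluster_cost[cid] / (cluster_size[cid] * total) for cid, _ in points]
    sample = random.choices(range(len(points)), weights=probs, k=delta)
    # weight of a sampled p in C_i: |C_i| * cost(G,A) / (delta * cost(C_i,A))
    return [(points[i],
             cluster_size[points[i][0]] * total / (delta * cluster_cost[points[i][0]]))
            for i in sample]

def sensitivity_sample(costs_in_A, delta):
    """Pick delta points p w.p. cost(p,A)/cost(G,A), each weighted by
    cost(G,A) / (delta * cost(p,A))."""
    total = sum(costs_in_A)
    sample = random.choices(range(len(costs_in_A)), weights=costs_in_A, k=delta)
    return [(i, total / (delta * costs_in_A[i])) for i in sample]
```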
In what follows, each copy will be considered independently. We denote by f(p) := |C̃_i| · cost(G, A)/(δ · cost(C̃_i, A)) the scaling factor of the weight of a point p ∈ C̃_i.

Organization of the Proof

To analyze the sampling procedure of GroupSample, we consider different cost ranges I_{ℓ,S} induced by a solution S, where I_{ℓ,S} consists of the points p ∈ G with 2^ℓ · cost(p, A) ≤ cost(p, S) < 2^{ℓ+1} · cost(p, A). We distinguish between the following cases.
• ℓ ≤ log(ε/2). We call all I_{ℓ,S} in this range tiny. The union of all tiny I_{ℓ,S} is denoted by I_{tiny,S}.
• log(ε/2) < ℓ ≤ z log(4z/ε). We call all I_{ℓ,S} in this range interesting.
• ℓ ≥ z log(4z/ε). We call all I_{ℓ,S} in this range huge.

Note that interesting and huge ranges intersect. This is to give us some slack in the proof: for a solution S, we will deal with huge ranges before relating S to its representative S̃ from C^k. Due to the approximation, some non-huge range for S can become huge for S̃; by our definition, however, it stays in the interesting ranges. A simple observation leads to the next fact.

Fact 4. For every solution S, the ranges I_{ℓ,S} partition the points of G (a point p with cost(p, S) = 0 lies in I_{tiny,S}).

Bounding the difference in cost of G ∩ I_{ℓ,S} requires different arguments depending on the type of I_{ℓ,S}. The two easy cases are tiny and huge, so we will prove those first. Proving the interesting case is arguably both the main challenge and our main technical contribution. For the proof, we will rely on Bernstein's concentration inequality:

Theorem 3 (Bernstein's Inequality). Let X_1, ..., X_δ be non-negative independent random variables, bounded almost surely by M. Then, for every t > 0,

P[ |Σ_i X_i − E[Σ_i X_i]| ≥ t ] ≤ exp( −t^2/(2 Σ_i (E[X_i^2] − E[X_i]^2) + (2/3) · M · t) ).

In this paper we will simply drop the E[X_i]^2 terms from the denominator, as the second moment will dominate in all important cases.

In what follows, we fix k, z, G and A as in the assumptions of Lemma 2, and let C̃_1, ..., C̃_k be the clusters induced by A on G. The assumptions imply the following fact:

Fact 5. For every cluster C̃_i and every point p ∈ C̃_i, cost(p, A)/2 ≤ cost(C̃_i, A)/|C̃_i| ≤ 2 · cost(p, A). In particular, the probability that a given round samples p is within a factor 2 of cost(p, A)/cost(G, A), and f(p) ≤ 2 · cost(G, A)/(δ · cost(p, A)).

Dealing with the Tiny Type

We start with the tiny type, as it is mostly divorced from the others.

Lemma 5. For every solution S, cost(G ∩ I_{tiny,S}, S) ≤ ε · cost(G, A) and cost(Ω ∩ I_{tiny,S}, S) ≤ ε · cost(G, A).

Proof. By definition of I_{tiny,S},

Σ_{p ∈ I_{tiny,S}} cost(p, S) ≤ Σ_{p ∈ I_{tiny,S}} (ε/2) · cost(p, A) ≤ (ε/2) · cost(G, A).

Similarly, we have for the other term

Σ_{p ∈ Ω ∩ I_{tiny,S}} f(p) · cost(p, S) ≤ Σ_{p ∈ Ω} (2 · cost(G, A)/(δ · cost(p, A))) · (ε/2) · cost(p, A) ≤ ε · cost(G, A),

where the last inequality uses that Ω contains δ points.

Preserving the Weight of Clusters, and the Huge Type

We now consider the huge ranges. For this, we first show that, provided we sampled enough points, |C̃_i| is well approximated for every cluster C̃_i. This will also be used later for the interesting points. We define the event E to be: for every cluster C̃_i induced by A on G,

Σ_{p ∈ C̃_i ∩ Ω} f(p) = (1 ± ε) · |C̃_i|.

Lemma 6. Event E happens with probability at least 1 − k · exp(−Ω(ε^2 · δ/k)).

Proof. Consider any cluster C̃_i induced by A on G. The expected number of points sampled from C̃_i is µ_i := δ · cost(C̃_i, A)/cost(G, A) ≥ δ/(2k), where the inequality holds by assumption on G. Define the indicator variable P_i(p) of the event that a sampled point p is drawn from C̃_i. Using Chernoff bounds, we therefore have

P[ |Σ_{p ∈ Ω} P_i(p) − µ_i| ≥ ε · µ_i ] ≤ 2 · exp(−ε^2 · µ_i/3) ≤ 2 · exp(−ε^2 · δ/(6k)).

Now, rescaling P_i(p) by the factor |C̃_i| · cost(G, A)/(δ · cost(C̃_i, A)) implies that approximating µ_i up to a (1 ± ε) factor also approximates |C̃_i| up to a (1 ± ε) factor. The final result follows by applying a union bound over all clusters in all groups.

We now show that, for any cluster C̃_i with a non-empty huge range, Lemma 6 implies that the cost is well approximated, without the need of going through the approximate solution S̃.
Lemma 7. Condition on event E. Then, for any solution S and any i such that there exist ℓ ≥ z log(4z/ε) and a point p ∈ C̃_i with cost(p, S) ≥ 2^ℓ · cost(p, A), we have:

|cost(C̃_i, S) − cost(C̃_i ∩ Ω, S)| ≤ O(ε) · cost(C̃_i, S).

Proof. Let p ∈ C̃_i be as given in the statement. Using the structure of clusters in a group, this implies for any q ∈ C̃_i that cost(q, S) ≥ (1 − 2ε) · cost(p, S): indeed, all points of C̃_i are roughly at the same distance from their center, and this distance is negligible compared to dist(p, S). By a similar calculation, we can also derive an upper bound of cost(q, S) ≤ cost(p, S) · (1 + 2ε). Hence, we have

cost(C̃_i ∩ Ω, S) = Σ_{q ∈ C̃_i ∩ Ω} f(q) · cost(q, S) = (1 ± 2ε) · cost(p, S) · Σ_{q ∈ C̃_i ∩ Ω} f(q) = (1 ± 2ε)(1 ± ε) · |C̃_i| · cost(p, S) = (1 ± O(ε)) · cost(C̃_i, S),

where the third equality uses event E and the last one uses again that all points of C̃_i have cost (1 ± 2ε) · cost(p, S) in S.

5.5 Bounding Interesting I_{ℓ,S}: a Simple but Suboptimal Analysis

Now we move on to the most involved case, presenting first a suboptimal analysis of GroupSample for the interesting types. As explained in the introduction, our main goal is to design a good estimator and apply Bernstein's inequality to it. Since the clusters intersecting a huge I_{ℓ,S} are dealt with by Lemma 7, we only need to focus on the interesting clusters, namely clusters C̃ that satisfy

∀p ∈ C̃, cost(p, S) ≤ (4z/ε)^z · cost(p, A). (5)

In other words, a cluster is interesting only if it does not have any point in a huge I_{ℓ,S}. This restriction will be crucial to our analysis. Let L_S be a set of interesting clusters (possibly not all of them). For simplicity, we will identify L_S with the set of points contained in the clusters of L_S.

We present here a first attempt to show that the cost of interesting points is preserved. Although suboptimal, it serves as a good warm-up for our improved bound. In this first attempt, we will use the simple estimator E(L_S) := Σ_{p ∈ L_S ∩ Ω} f(p) · cost(p, S) as an estimator of the cost of points in L_S. Note that by choice of the weights f(p), this estimator is unbiased: E[E(L_S)] = Σ_{p ∈ L_S} cost(p, S), precisely the quantity we seek to estimate. To show concentration, we rely on Bernstein's inequality from Theorem 3. Hence, the key part of our proof is to bound the variance of the estimator.

Lemma 8. Let G be a group of points, and A be a solution. Let C be an A-approximate centroid set, as in Definition 1. It holds with probability 1 − exp(k log |C| − ε^{2+z} · δ/(2^{O(z log z)} · log^2(1/ε))) that, for all solutions S̃ ∈ C^k and any set of interesting clusters L_S̃ induced by A on G:

|cost(L_S̃, S̃) − cost(Ω ∩ L_S̃, S̃)| ≤ ε · (cost(G, A) + cost(L_S̃, S̃)).

Proof. First, we fix some solution S and some set of interesting clusters L_S satisfying Eq. (5). We express E(L_S) as a sum of i.i.d. variables E(L_S) = Σ_{j=1}^δ X_j, where X_j = f(p) · cost(p, S) if the j-th sampled point is p ∈ L_S and X_j = 0 otherwise; recall that f(p) ≤ 2 · cost(G, A)/(δ · cost(p, A)). We will rely on Bernstein's inequality (Theorem 3). To do this, we need an upper bound on the variance of E(L_S), as well as an almost sure upper bound M on every sample. We first bound

E[X_j^2] ≤ Σ_{p ∈ L_S} (2 · cost(p, A)/cost(G, A)) · (2 · cost(G, A) · cost(p, S)/(δ · cost(p, A)))^2 ≤ (2^{O(z)}/δ^2) · (4z/ε)^z · cost(G, A) · cost(L_S, S),

where we upper bounded only one of the cost(p, S) terms by (4z/ε)^z · cost(p, A), using Eq. (5). To apply Bernstein's inequality, we also need an upper bound on the value of X_j: using cost(p, S) ≤ (4z/ε)^z · cost(p, A), we get X_j ≤ 2 · (4z/ε)^z · cost(G, A)/δ. Applying Bernstein's inequality with these bounds on the variance and on the value of the X_j, for an additive error of ε · (cost(G, A) + cost(L_S, S)), it holds for a fixed solution S and a fixed set of interesting clusters L_S with probability 1 − exp(−ε^{2+z} · δ/(2^{O(z log z)} · log^2(1/ε))). Taking a union bound over the |C|^k many solutions S̃ and the 2^k many sets of interesting clusters concludes the lemma.

In order to apply Lemma 8, note that the quantity |E(L_S) − E[E(L_S)]| is equal to |cost(L_S ∩ Ω, S) − cost(L_S, S)|, namely the difference between the cost of the points of L_S in the full input and in the coreset. This lemma is enough to conclude that the outcome of GroupSample is a coreset, once combined with Lemmas 5 and 7. To see the end of the proof, one can jump directly to the proof of Lemma 2 (in Section 5.7) and use Lemma 8 instead of Lemma 12. This would give a coreset of size Õ(k · ε^{−2−z}), instead of Õ(k · ε^{−max(2,z)}).
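To see where the exponents come from, one can plug the variance and magnitude bounds into Bernstein's inequality. The small calculator below (our own illustration; all constants and the normalization cost(G, A) = cost(L_S, S) = 1 are ours) exhibits the ε^{−(2+z)} scaling of the simple estimator:

```python
import math

def delta_needed(V, M, t, target):
    """Bernstein gives failure prob <= exp(-t^2 * delta / (2V + (2/3) M t)) when
    each of the delta samples has second moment V/delta^2 and magnitude M/delta;
    solve for the delta achieving failure probability exp(-target)."""
    return math.ceil(target * (2 * V + 2 * M * t / 3) / t ** 2)

z, target = 2, 100.0                      # target plays the role of k log|C|
for eps in (0.1, 0.05):
    V = M = (4 * z / eps) ** z            # variance/magnitude scale of Section 5.5
    print(f"eps={eps}: delta ~ {delta_needed(V, M, eps, target):,}")
# Halving eps multiplies delta by ~2^(2+z): the eps^-(2+z) behaviour that the
# improved estimator of the next section avoids.
```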
5.6 Bounding Interesting I_{ℓ,S}: Improved Analysis

The shortcoming of the previous estimator is its huge variance, with a dependency in ε^{−z}. We present an alternative estimator with small variance, allowing in turn to increase the success probability of the algorithm. As for the previous estimator, we only need to focus on some interesting clusters L_S, namely clusters that do not have any point in a huge I_{ℓ,S} and satisfy Eq. (5), important enough to be recalled here: all clusters C̃ in L_S verify

∀p ∈ C̃, cost(p, S) ≤ (4z/ε)^z · cost(p, A).

Designing a Good Estimator: Reducing the Variance. Our first observation is that we can estimate the cost of the points in I_{ℓ,S} ∩ L_S for each ℓ independently, instead of estimating directly the cost of L_S as in the previous section. For them, we will use the following estimator.

Definition 4. Let G be a group of points, and C̃_i be the clusters induced by a solution A on G. For a given set of interesting clusters L_S, we let

E_{ℓ,S}(L_S) := Σ_{p ∈ Ω ∩ I_{ℓ,S} ∩ L_S} f(p) · (cost(p, S) − cost(q_{i(p),S}, S)),

where q_{i,S} = argmin_{p ∈ C̃_i} cost(p, S) and i(p) denotes the index of the cluster containing p. The estimated cost of I_{ℓ,S} ∩ L_S can thus be expressed differently: it equals E_{ℓ,S}(L_S) + F_{ℓ,S}(L_S), where

F_{ℓ,S}(L_S) := Σ_{C̃_i ∈ L_S} cost(q_{i,S}, S) · Σ_{p ∈ Ω ∩ C̃_i ∩ I_{ℓ,S}} f(p)

is a random variable whose value depends on the randomly sampled points Ω (we will discuss F_{ℓ,S}(L_S) in more detail later). Note that the expectation of E_{ℓ,S}(L_S) is Σ_{p ∈ I_{ℓ,S} ∩ L_S} (cost(p, S) − cost(q_{i(p),S}, S)).

Now, instead of attempting to directly show concentration of all cost(I_{ℓ,S} ∩ L_S ∩ Ω, S), we will instead show that:
1. E_{ℓ,S}(L_S) is concentrated for all S, and
2. F_{ℓ,S}(L_S) is concentrated around its expectation.

The reason for decoupling the two arguments is that E_{ℓ,S}(L_S) has a very small variance, for which few samples are sufficient: each term of the sum has magnitude cost(p, S) − cost(q_{i,S}, S) instead of simply cost(p, S). This difference is crucial to our analysis. Furthermore, event E from Lemma 6 easily leads to a concentration bound on F_S(L_S) = Σ_ℓ F_{ℓ,S}(L_S). To establish the gain in variance obtained by subtracting cost(q_{i,S}, S), we have the following lemma.

Lemma 9. Let G be a group of points, S be an arbitrary solution, and C̃_i be a cluster induced by A on G in which all points have the same cost, up to a factor 2. Denote q_{i,S} = argmin_{p ∈ C̃_i} cost(p, S). Then, for every interesting range ℓ with ℓ ≥ log(ε/2) and every point p ∈ C̃_i ∩ I_{ℓ,S},

cost(p, S) − cost(q_{i,S}, S) ≤ 2^{O(z log z)} · 2^{ℓ(1−1/z)} · cost(q_{i,S}, A).

Proof. Denote w_p := cost(p, S) − cost(q_{i,S}, S). By the choice of q_{i,S}, w_p ≥ 0, so we only need the upper bound. We first show useful inequalities relating the different quantities. Since p ∈ I_{ℓ,S}, we have

cost(p, S) ≤ 2^{ℓ+1} · cost(p, A) ≤ 2^{ℓ+2} · cost(q_{i,S}, A),

where the last inequality holds since p and q_{i,S} are in the same cluster and have, up to a factor 2, the same cost. We also have that cost(p, q_{i,S}) ≤ 2^{z−1} · (cost(p, A) + cost(q_{i,S}, A)) ≤ 3 · 2^{z−1} · cost(q_{i,S}, A). Now, using Lemma 1, for any α ≤ 1,

cost(p, S) ≤ (1 + α)^{z−1} · cost(q_{i,S}, S) + ((1 + α)/α)^{z−1} · cost(p, q_{i,S}),

which after rearranging implies

w_p ≤ O(zα) · 2^{ℓ+2} · cost(q_{i,S}, A) + α^{1−z} · 2^{O(z)} · cost(q_{i,S}, A).

We optimize the final term with respect to α, which leads to α = 2^{−ℓ/z} (ignoring factors that depend only on z), and hence an upper bound of cost(p, S) − cost(q_{i,S}, S) ≤ 2^{O(z log z)} · 2^{ℓ(1−1/z)} · cost(q_{i,S}, A).

Concentration of the Estimator E_{ℓ,S}(L_S). First, we show that every estimator E_{ℓ,S}(L_S) is tightly concentrated. This follows the lines of the proof of Lemma 8, carefully incorporating the result of Lemma 9.

Lemma 10. Let G be a group of points, and A be a solution. Consider an arbitrary solution S. Then for any set of interesting clusters L_S induced by A on G, and any estimator E_{ℓ,S}(L_S) with ℓ ≤ z log(4z/ε), it holds that

|E_{ℓ,S}(L_S) − E[E_{ℓ,S}(L_S)]| ≤ (ε/(z log(4z/ε))) · (cost(G, A) + cost(L_S, S))

with probability at least 1 − exp(−min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))).

Proof. In order to simplify the notation, we drop the mention of L_S and define E_{ℓ,S} = E_{ℓ,S}(L_S).
Lemma 9 allows us to rewrite E_{ℓ,S} slightly differently:

E_{ℓ,S} = Σ_{p ∈ Ω ∩ I_{ℓ,S} ∩ L_S} f(p) · w_p,

with all the weights w_p in [0, 2^{O(z log z)} · 2^{ℓ(1−1/z)} · cost(q_{i(p),S}, A)]. We can also write E_{ℓ,S} as a sum of independent random variables E_{ℓ,S} = Σ_{j=1}^δ X_j, where X_j = f(p) · w_p if the j-th sampled point is p ∈ I_{ℓ,S} ∩ L_S, and X_j = 0 otherwise. Recall that, due to Fact 5, the probability that the j-th sampled point is p ∈ C̃_i is within a factor 2 of cost(p, A)/cost(G, A). We will rely on Bernstein's inequality (Theorem 3). To do this, we need an upper bound on the variance of E_{ℓ,S}, as well as an almost sure upper bound M on every sample. We first bound E[X_j^2], using that each round samples a single point to move the square inside the sum, and Lemma 9 to replace the value of w_p:

E[X_j^2] ≤ Σ_{p ∈ I_{ℓ,S} ∩ L_S} (2 · cost(p, A)/cost(G, A)) · (f(p) · w_p)^2 ≤ (2^{O(z log z)} · 2^{2ℓ(1−1/z)}/δ^2) · cost(G, A) · Σ_{p ∈ I_{ℓ,S} ∩ L_S} cost(p, A),

where we also used that cost(q_{i,S}, A) ≤ 2 · cost(p, A) for p and q_{i,S} in the same cluster. To bound Σ_{p ∈ I_{ℓ,S}} cost(p, A), we deal with the cases z = 1 (i.e. k-median) and z ≥ 2 (k-means and higher powers) separately. For the former, 2^{2ℓ(1−1/z)} = 1, so we can use Σ_{p ∈ I_{ℓ,S} ∩ L_S} cost(p, A) ≤ cost(G, A); for the latter, every p ∈ I_{ℓ,S} satisfies cost(p, A) ≤ 2^{−ℓ} · cost(p, S). Summing over the δ variables X_j, we obtain for z = 1:

Σ_j E[X_j^2] ≤ 2^{O(1)} · cost(G, A)^2/δ,

and for z > 1:

Σ_j E[X_j^2] ≤ (2^{O(z log z)} · 2^{ℓ(1−2/z)}/δ) · cost(G, A) · cost(L_S, S).

The almost sure upper bound (for which no case distinction is required) can be derived similarly, using X_j ≤ sup_p (2 · cost(G, A)/(δ · cost(p, A))) · w_p:

X_j ≤ 2^{O(z log z)} · 2^{ℓ(1−1/z)} · cost(G, A)/δ ≤ 2^{O(z log z)} · ε^{1−z} · cost(G, A)/δ,

where the last inequality holds since ℓ ≤ z log(4z/ε). Applying Bernstein's inequality with these bounds on the variance and on the value of the X_j, with the additive error t = (ε/(z log(4z/ε))) · (cost(G, A) + cost(L_S, S)), yields our final desired bound of exp(−min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))).

Concentration of F_{ℓ,S}(L_S). We now turn our attention to bounding the random variable F_{ℓ,S}(L_S). It turns out that bounding each F_{ℓ,S}(L_S) individually is rather hard, and in fact no easier than bounding cost(I_{ℓ,S} ∩ Ω, S). Fortunately, this is not necessary, as it turns out that we can merely bound the sum of the F_{ℓ,S}(L_S). We consider the random variable defined as follows:

F_S(L_S) := Σ_ℓ F_{ℓ,S}(L_S) = Σ_{C̃_i ∈ L_S} cost(q_{i,S}, S) · Σ_{p ∈ Ω ∩ C̃_i} f(p).

Showing that F_S(L_S) is concentrated is now an almost direct consequence of event E from Lemma 6, which says that Σ_{p ∈ C̃_i ∩ Ω} f(p) = (1 ± ε) · |C̃_i|.

Lemma 11. Let G be a group of points, and A be a solution. Conditioned on event E, we have for all solutions S and all sets of interesting clusters L_S induced by A on G:

|F_S(L_S) − E[F_S(L_S)]| ≤ ε · cost(L_S, S).

Proof. Given a solution S and any set of interesting clusters L_S induced by A on G, we have F_S(L_S) = Σ_{C̃_i ∈ L_S} cost(q_{i,S}, S) · Σ_{p ∈ Ω ∩ C̃_i} f(p). Event E ensures that the mass of each cluster is preserved in the coreset, i.e., that Σ_{p ∈ C̃_i ∩ Ω} f(p) = (1 ± ε) · |C̃_i|, hence F_S(L_S) = (1 ± ε) · Σ_{C̃_i ∈ L_S} |C̃_i| · cost(q_{i,S}, S) = (1 ± ε) · E[F_S(L_S)]. Now finally observe that, since q_{i,S} is always the point of C̃_i whose cost in S is the smallest, we have E[F_S(L_S)] ≤ cost(L_S, S) ≤ cost(G, S).

5.7 Combining Them All

We can now show that the sample Ω indeed verifies Lemma 2. To do that, we naturally follow the structure of the previous lemmas, and decompose into terms to which we can apply Lemmas 5, 7, 10, and 11. First, we note that the probability of success of Lemma 10 is too small to take a union bound over its success for all S. To cope with that issue, we use the approximate centroid set, in order to relate E_{ℓ,S}(L_S) to E_{ℓ,S̃}(L_S̃), where S̃ comes from a small set over which union-bounding is possible.

Lemma 12. Let G be a group of points, and A be a solution. Let C be an A-approximate centroid set, as in Definition 1. It holds with probability 1 − exp(k log |C| − min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))) that, for all solutions S̃ ∈ C^k and any set of interesting clusters L_S̃ induced by A on G:

|cost(L_S̃, S̃) − cost(Ω ∩ L_S̃, S̃)| ≤ ε · (cost(G, A) + cost(L_S̃, S̃)).

Proof. Taking a union bound over the success of Lemma 10 for all possible S̃ ∈ C^k, all choices of interesting clusters L_S̃ and all ℓ such that log(ε/2) ≤ ℓ ≤ z log(4z/ε), it holds with probability 1 − exp(k log |C| − min(ε^2, ε^z) · δ/(2^{O(z log z)} · log^2(1/ε))) that the guarantee of Lemma 10 holds for every S̃ ∈ C^k, L_S̃ and ℓ. For simplicity, we drop again the mention of L_S̃ and write E_{ℓ,S̃} = E_{ℓ,S̃}(L_S̃) and F_S̃ = F_S̃(L_S̃). We now condition on that event, together with event E. We write:

|cost(L_S̃, S̃) − cost(Ω ∩ L_S̃, S̃)| ≤ Σ_ℓ |E[E_{ℓ,S̃}] − E_{ℓ,S̃}| + |F_S̃ − E[F_S̃]| ≤ O(z log(z/ε)) · (ε/(z log(4z/ε))) · (cost(G, A) + cost(L_S̃, S̃)) + ε · cost(L_S̃, S̃) ≤ O(ε) · (cost(G, A) + cost(L_S̃, S̃)),

where the second-to-last inequality used Lemma 11; rescaling ε by a constant concludes.
From the approximate centroid set to any solution. We can now finally turn to the proof of Lemma 2: it combines the result shown previously for the huge type with the use of the approximate centroid set, via Lemma 12, for the interesting and tiny types.

Proof of Lemma 2. Let X, k, z, G and A be as in the lemma statement. We condition on event E happening. Let S be a set of k points, and S̃ ∈ C^k the tuple that best approximates S, as given by the definition of C (see Definition 1). This ensures that for all points p with dist(p, S) ≤ (8z/ε) · dist(p, A) or dist(p, S̃) ≤ (8z/ε) · dist(p, A), we have |cost(p, S) − cost(p, S̃)| ≤ (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)).

Our first step is to deal with points that have dist(p, S) > (4z/ε) · dist(p, A), using Lemma 7. None of the remaining points is huge with respect to S̃: hence, they all lie in interesting clusters with respect to S̃. Letting L_S̃ be this set of clusters, they can be handled with Lemma 12. The rest of the proof formalizes this argument.

Let H_S be the set of all clusters intersecting some I_{ℓ,S} with ℓ > z log(4z/ε); we also denote by H_S the points contained in those clusters. We decompose the cost difference as follows:

|cost(G, S) − cost(Ω, S)| ≤ |cost(G \ H_S, S) − cost(Ω \ H_S, S)| (15) + |cost(H_S, S) − cost(Ω ∩ H_S, S)|. (16)

Since we condition on event E, the term (16) is O(ε) · (cost(G, A) + cost(G, S)), using Lemma 7. Now we take a closer look at the term (15). By definition of S̃, it holds for all points p ∈ G \ H_S that |cost(p, S) − cost(p, S̃)| ≤ ε · (cost(p, S) + cost(p, A)). Therefore:

|cost(G \ H_S, S) − cost(Ω \ H_S, S)| ≤ |cost(G \ H_S, S̃) − cost(Ω \ H_S, S̃)| + ε · (cost(G, S) + cost(G, A) + cost(Ω, S) + cost(Ω, A)).

This allows us to focus on bounding the cost difference with respect to solution S̃ instead of S. For the remaining points in G \ H_S, we aim at using Lemma 12: for that, we show that L_S̃ := G \ H_S contains only interesting clusters with respect to S̃. Indeed, for any p ∈ L_S̃, we have |cost(p, S) − cost(p, S̃)| ≤ (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)) by definition of S̃. Hence, cost(p, S̃) ≤ cost(p, S) + (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)), and p is indeed not huge with respect to S̃. Therefore, we can apply Lemma 12 to get:

|cost(L_S̃, S̃) − cost(Ω ∩ L_S̃, S̃)| ≤ ε · (cost(G, A) + cost(L_S̃, S̃)).

Combining the previous inequalities (and applying them once coarsely to bound cost(Ω, S) = O(1) · (cost(G, S) + cost(G, A)) and the analogous bound for A), we finally conclude:

|cost(G, S) − cost(Ω, S)| ≤ O(ε) · (cost(G, A) + cost(G, S)),

and rescaling ε gives the claimed guarantee. The probability follows from taking a union bound over the failure probabilities of Lemma 6 and Lemma 12.

In a given cluster C̃_i induced by A on G, the complexity of the algorithm is O(|C̃_i|): this covers both computing the scaling factor f(p) for all p ∈ C̃_i and sampling δ points using reservoir sampling [Vit85]. Hence, the cost of this algorithm over all clusters is O(|G|).

Sampling from Outer Rings

In this section we prove Lemma 3, restated as follows.

Lemma 3. Let (X, dist) be a metric space, k, z be two positive integers, P be a set of clients and A be a c_A-approximate solution to (k, z)-clustering on P. Let G be either a group G^O_b or G^O_max, and suppose moreover that there is an A-approximate centroid set C for (k, z)-clustering on G. Then the set Ω of size δ constructed by SensitivitySample satisfies, with probability 1 − exp(k log |C| − ε^2 · δ/(2^{O(z log z)} · log^2(1/ε))), for all sets S of k centers:

|cost(G, S) − cost(Ω, S)| ≤ ε · (cost(S) + cost(A)).

Recall that the SensitivitySample procedure merely picks δ points p with probability cost(p, A)/cost(G, A), each sampled point receiving weight cost(G, A)/(δ · cost(p, A)). The procedure runs in time O(|G|). The main steps of the proof are as follows.

• First, we consider the cost of the points in G such that cost(p, S) is at most 4^z · cost(p, A). For this case, we can (almost) directly apply Bernstein's inequality as in the previous section.
• Second, we consider the cost of the points in G such that cost(p, S) > 4^z · cost(p, A); denote this set by G_{far,S}. For these points, we can afford to replace their cost in S by the distance to the closest center c ∈ A plus the distance from c to the closest center in S.
The latter part can be charged to the remaining points of the cluster in the original dataset (i.e., not restricted to the group G), which are far more numerous and already pay a similar value in S.

We first analyse the points not in G_{far,S}. For that, we go through the approximate centroid set C to afford a union bound: we show the following lemma.

Lemma 13. Let S̃ ∈ C^k, and define G_{close,S̃} to be the set of points of G such that cost(p, S̃) ≤ 5^z · cost(p, A). It holds with probability 1 − exp(−ε^2 · δ/2^{O(z log z)}) that

|cost(G_{close,S̃}, S̃) − cost(Ω ∩ G_{close,S̃}, S̃)| ≤ ε · (cost(G, A) + cost(G_{close,S̃}, S̃)).

Proof. We aim to use Bernstein's inequality. Let E_{close,S̃} = Σ_{i=1}^δ X_i, where X_i = (cost(G, A)/(δ · cost(p, A))) · cost(p, S̃) if the i-th sampled point is p ∈ G_{close,S̃}, and X_i = 0 if the i-th sampled point is p ∉ G_{close,S̃}. Recall that the probability that p is the i-th sampled point is cost(p, A)/cost(G, A). We consider the second moment:

E[X_i^2] ≤ Σ_{p ∈ G_{close,S̃}} (cost(p, A)/cost(G, A)) · (cost(G, A) · cost(p, S̃)/(δ · cost(p, A)))^2 ≤ 5^z · cost(G, A) · cost(G_{close,S̃}, S̃)/δ^2,

using cost(p, S̃) ≤ 5^z · cost(p, A). Furthermore, we have the following upper bound on the maximum value of any X_i:

X_i ≤ 5^z · cost(G, A)/δ.

Combining both bounds with Bernstein's inequality now yields the claimed probability for an additive error of ε · (cost(G, A) + cost(G_{close,S̃}, S̃)). Noting that E_{close,S̃} = cost(Ω ∩ G_{close,S̃}, S̃) and E[E_{close,S̃}] = cost(G_{close,S̃}, S̃) concludes the proof.

Now we turn our attention to G_{far,S}. For this, we analyse the following event E_far, similar to E: for every cluster C induced by A,

Σ_{p ∈ G ∩ C ∩ Ω} cost(G, A)/(δ · cost(p, A)) = (1 ± ε) · |G ∩ C|.

Lemma 14. Event E_far happens with probability at least 1 − k · exp(−Ω(ε^2 · δ/k)).

Proof. We aim to use Bernstein's inequality. Recall that the probability that the i-th sampled point is p is cost(p, A)/cost(G, A). Writing the left-hand side as a sum of δ independent variables, the second moment E[X_i^2] is bounded using that every cluster has cost at least half the average: indeed, either the group considered is G^O_max, in which case any cluster satisfies cost(C ∩ G, A) ≥ cost(G, A)/(2k), or all the clusters in G^O_b have equal cost up to a factor of 2, hence none costs less than half of the average. By the same argument, we obtain an upper bound on the maximum value of any X_i. Combining both bounds with Bernstein's inequality yields the lemma.

Lemma 15. Let (X, dist) be a metric space, and k, z be two positive integers. Suppose G is either a group G^O_b or G^O_max. Let G_{far,S} ⊆ G be the set of all clients such that cost(p, S) > 4^z · cost(p, A). Condition on event E_far. Then the set Ω of size δ constructed by SensitivitySample satisfies, for all sets S of k centers:

cost(G_{far,S}, S) + cost(Ω ∩ G_{far,S}, S) ≤ (2ε/(z log(z/ε))) · cost(S).

Proof. Our aim is to show that max(cost(G_{far,S}, S), cost(Ω ∩ G_{far,S}, S)) ≤ (ε/(z log(z/ε))) · cost(S). It is key here that we compare to the cost of the full input in S, and not simply to the cost of the group G. First, we fix a cluster C ∈ A and show that the total contribution of the points of C ∩ G_{far,S} is very cheap compared to cost(C, S), i.e. that cost(G_{far,S} ∩ C, S) ≤ (ε/(z log(z/ε))) · cost(C, S). For this, fix a point p ∈ G_{far,S} ∩ C, and let c be the center of cluster C. Let C_close be the points of C with cost at most (z/ε)^z · cost(C, A)/|C|. Using that the point p is both in the outer ring of C and in G_{far,S}, we can lower bound the distance from c to S as follows. The triangle inequality and cost(p, S) > 4^z · cost(p, c) yield dist(c, S) ≥ dist(p, S) − dist(p, c) ≥ 4 · dist(p, c) − dist(p, c) ≥ 3 · dist(p, c). Since p is from an outer group, it satisfies cost(p, c) ≥ (z/ε)^{2z} · cost(C, c)/|C|.
Combining those two observations yields: cost(c, S) ≥ 3^z · cost(p, c) ≥ 3^z · (z/ε)^{2z} · cost(C, c)/|C|. Using this and Lemma 1, we now have for any q ∈ C_close that cost(q, S) ≥ cost(c, S)/2^{O(z)}: indeed, cost(q, c) ≤ (z/ε)^z · cost(C, c)/|C| is negligible compared to cost(c, S). Using additionally that |C_close| ≥ (1 − (ε/z)^z) · |C| and cost(c, S) ≥ 3^z · (z/ε)^{2z} · cost(C, c)/|C|, we get:

cost(C, S) ≥ Σ_{q ∈ C_close} cost(q, S) ≥ |C| · cost(c, S)/2^{O(z)}.

We are now equipped to show the first part of the lemma, namely cost(G_{far,S}, S) ≤ (ε/(z log(z/ε))) · cost(S). Since G ∩ C contains only points from the outer ring of C, with cost at least (z/ε)^{2z} times the average, Markov's inequality implies that |G ∩ C| ≤ (ε/z)^{2z} · |C|. Moreover, for every p ∈ G_{far,S} ∩ C, dist(p, S) ≤ dist(p, c) + dist(c, S) ≤ (4/3) · dist(c, S), so cost(p, S) ≤ 2^z · cost(c, S). Hence

cost(G_{far,S} ∩ C, S) ≤ (ε/z)^{2z} · |C| · 2^z · cost(c, S) ≤ (ε/z)^{2z} · 2^{O(z)} · cost(C, S) ≤ (ε/(z log(z/ε))) · cost(C, S).

Summing this up over all clusters C, we therefore have cost(G_{far,S}, S) ≤ (ε/(z log(z/ε))) · cost(S). What is left to show is that, in the coreset, the weighted cost of the points in G_{far,S} ∩ Ω can be bounded similarly. For that, we use event E_far to show that Σ_{p ∈ G_{far,S} ∩ C ∩ Ω} cost(G, A)/(δ · cost(p, A)) ≤ (1 + ε) · |G ∩ C|: the weighted number of far points of C in the coreset is thus essentially at most the number of outer-ring points of C in the input, and the previous computation applies verbatim. Therefore, we have

cost(Ω ∩ G_{far,S}, S) ≤ (1 + ε) · (ε/(z log(z/ε))) · cost(S).

Combining far and close to show Lemma 3. The overall proof follows from those lemmas.

Proof of Lemma 3. First, we condition on event E_far and on the success of Lemma 13 for all solutions in C^k. By a union bound, this happens with probability at least 1 − exp(k log |C| − ε^2 · δ/(2^{O(z log z)} · log^2(1/ε))). Let S be a solution, and S̃ its corresponding solution in C^k. We break the cost of S into two parts: the points with cost(p, S̃) ≤ 5^z · cost(p, A), to which we can apply Lemma 13, and the others, to which we apply Lemma 15.

Since any point in G_{close,S̃} satisfies |cost(p, S) − cost(p, S̃)| ≤ (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)), we can relate this to cost(G_{close,S̃}, S) as follows. First, this implies cost(G_{close,S̃}, S̃) ≤ (1 + ε) · cost(G_{close,S̃}, S) + ε · cost(G, A); hence the guarantee of Lemma 13 transfers to S, with an additional ε · (cost(G, S) + cost(G, A)) error.

We now deal with the other, far points. For this, note that G \ G_{close,S̃} ⊆ G_{far,S}: indeed, any point p ∈ G \ G_{far,S} has its cost preserved by S̃, and therefore satisfies cost(p, S̃) ≤ (1 + ε) · cost(p, S) + ε · cost(p, A) ≤ 5^z · cost(p, A), i.e. p ∈ G_{close,S̃}. Hence, adding the two inequalities (from Lemma 13 and Lemma 15) gives

|cost(G, S) − cost(Ω ∩ G, S)| ≤ ε · (cost(G, A) + cost(G, S)) + (2ε/(z log(z/ε))) · cost(S) + ε · (cost(Ω ∩ G, S) + cost(Ω ∩ G, A)).

To remove the terms depending on Ω from the right-hand side, one can proceed as at the end of Lemma 2, applying the previous inequality coarsely to get cost(G ∩ Ω, S) = O(1) · cost(S) and cost(G ∩ Ω, A) = O(1) · cost(A). This concludes the proof: |cost(G, S) − cost(Ω ∩ G, S)| ≤ O(ε) · (cost(S) + cost(A)), which concludes the coreset construction for the outer groups.

Partitioning into Well Structured Groups

In this section, we show that the outcome of the partitioning step satisfies Lemma 4, which we restate for convenience.

Lemma 4. Let (X, dist) be a metric space with a set of clients P, let k, z be two positive integers, and ε ∈ R*_+. For every solution S, it holds that

|cost(D, S) − cost(P_1, S)| ≤ ε · (cost(S) + cost(A)),

where D and P_1 are defined in Step 2 of the algorithm.

Recall that the inner ring R_I(C) (resp. outer ring R_O(C)) of a cluster C consists of the points of C with cost at most (ε/z)^{2z} · Δ_C (resp. at least (z/ε)^{2z} · Δ_C), and that the main ring R_M(C) consists of all the other points of C. Recall also that D contains all points that are either in some inner ring, in some group G_{j,min}, or in G^O_min, and that P_1 contains the centers of A, weighted by the number of points of D in their clusters. To prove Lemma 4, we treat the inner rings and the groups G_{j,min} and G^O_min separately, in the next two lemmas; their proofs are deferred to the next sections. For both lemmas, we fix a metric space I, a set of clients P, two positive integers k and z, and ε ∈ R*_+. We also fix A, a solution to (k, z)-clustering on P with cost cost(A) ≤ c_A · cost(OPT). In the following, c(p) denotes the center of the cluster of p in A.

Lemma 16. For any solution S and any cluster C with center c of A,

Σ_{p ∈ R_I(C)} |cost(p, S) − cost(c, S)| ≤ ε · (cost(R_I(C), S) + cost(C, A)).

Lemma 17. For any solution S and any j,

Σ_{p ∈ G_{j,min}} |cost(p, S) − cost(c(p), S)| ≤ ε · (cost(G_{j,min}, S) + cost(R_j, A)).

Moreover, for any solution S,

Σ_{p ∈ G^O_min} |cost(p, S) − cost(c(p), S)| ≤ ε · (cost(G^O_min, S) + cost(R^A_O, A)).

The proof of Lemma 4 combines those lemmas.

Proof of Lemma 4. We decompose |cost(D, S) − cost(P_1, S)| into terms corresponding to the previous lemmas:

|cost(D, S) − cost(P_1, S)| ≤ Σ_C Σ_{p ∈ R_I(C)} |cost(p, S) − cost(c(p), S)| + Σ_j Σ_{p ∈ G_{j,min}} |cost(p, S) − cost(c(p), S)| + Σ_{p ∈ G^O_min} |cost(p, S) − cost(c(p), S)| ≤ ε · (cost(S) + cost(A)),

where the second inequality uses Lemmas 16 and 17.

The Inner Ring: Proof of Lemma 16
Lemma 16. For any solution S and any cluster C with center c of A,

Σ_{p ∈ R_I(C)} |cost(p, S) − cost(c, S)| ≤ ε · (cost(R_I(C), S) + cost(C, A)).

Proof. Let C be a cluster induced by A, and p a point in the inner ring R_I(C). We start by bounding |cost(p, S) − cost(c, S)|. Let S(p) (resp. S(c)) be the closest point of S to p (resp. c). Applying Lemma 1 together with the triangle inequality gives

|cost(p, S) − cost(c, S)| ≤ (ε/2) · max(cost(p, S), cost(c, S)) + (2z/ε)^{z−1} · cost(p, c),

and since p ∈ R_I(C), cost(p, c) ≤ (ε/z)^{2z} · Δ_C, so the last term is at most ε · Δ_C. Summing this over all points of the inner ring, and using |R_I(C)| ≤ |C| so that Σ_p ε · Δ_C ≤ ε · cost(C, A), yields the lemma (after rescaling ε by a constant depending on z).

The Cheap Groups: Proof of Lemma 17

Lemma 17. For any solution S and any j,

Σ_{p ∈ G_{j,min}} |cost(p, S) − cost(c(p), S)| ≤ ε · (cost(G_{j,min}, S) + cost(R_j, A)).

Moreover, for any solution S,

Σ_{p ∈ G^O_min} |cost(p, S) − cost(c(p), S)| ≤ ε · (cost(G^O_min, S) + cost(R^A_O, A)).

Proof. Using Lemma 1, for a point p in cluster C_i,

|cost(p, S) − cost(c_i, S)| ≤ (ε/2) · cost(p, S) + (4z/ε)^{z−1} · cost(p, A).

Let G be a group, either G_{j,min} or G^O_min. Summing over all clusters C_i and all p ∈ G ∩ C_i, we now get

Σ_{p ∈ G} |cost(p, S) − cost(c(p), S)| ≤ (ε/2) · cost(G, S) + (4z/ε)^{z−1} · cost(G, A).

Now, either G = G_{j,min} for some j, and cost(G, A) ≤ (ε/(4z))^z · cost(R_j, A), or G = G^O_min, and cost(G, A) ≤ (ε/(4z))^z · cost(R^A_O, A). In both cases, the second term is at most ε · cost(R_j, A) (resp. ε · cost(R^A_O, A)), and the lemma follows.

Application of the Framework: New Coreset Bounds for Various Metric Spaces

In this section, we apply the coreset framework to specific metric spaces. For each of them, we show the existence of a small approximate centroid set and apply Theorem 1 to prove the existence of small coresets. We recall the definition of a centroid set (Definition 1): given an instance of (k, z)-clustering and a set of centers A, an A-approximate centroid set C is a set that satisfies the following: for every solution S, there exists S̃ ∈ C^k such that for all points p satisfying cost(p, S) ≤ (8z/ε)^z · cost(p, A) or cost(p, S̃) ≤ (8z/ε)^z · cost(p, A), it holds that |cost(p, S) − cost(p, S̃)| ≤ (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)). Theorem 1 states that, when there is an A-approximate centroid set C, there is a linear-time algorithm that constructs with probability 1 − π a coreset of size

O( (2^{O(z log z)} · log^4(1/ε)/min(ε^2, ε^z)) · (k log |C| + log log(1/ε) + log(1/π)) ).

Structural Property on Solutions

We also show a structural property on solutions, which we will use to prove the existence of small approximate centroid sets. Essentially, when replacing a center s by a center in C, we incur an error ε · cost(q, A) for some q that we can choose: it is necessary to ensure that this error is tiny compared to any cost(p, s) + cost(p, A). Given a point q and a center s, we say that a point p is problematic with respect to q and s when

dist(p, A) + dist(p, s) ≤ (ε^2/(8z^2)) · (dist(q, A) + dist(q, s)).

In that case, we cannot bound the error dist(q, A) + dist(q, s) by some quantity depending on cost(p, s) + cost(p, A). However, we show the following:

Lemma 18. Let S be a solution such that every input point p satisfies dist(p, S) ≤ (8z/ε) · dist(p, A). There exists a solution S' ⊆ S such that
• for all p, it holds that |cost(p, S) − cost(p, S')| ≤ (ε/(z log(z/ε))) · (cost(p, S) + cost(p, A)), and
• for any center s ∈ S', letting q = argmin_{p : dist(p,s) ≤ (10z/ε) · dist(p,A)} (dist(p, A) + dist(p, s)), there is no problematic point with respect to q and s.

Proof. First, we show that in case there is a problematic point p with respect to some s and q, we can serve the whole cluster of s by S(p), the center that serves p in S. We work in this proof with particular solutions, where points are not necessarily assigned to their closest center. This simplifies the proof, but requires particular care at some places; in particular, we will ensure that dist(p, S(p)) ≤ (10z/ε) · dist(p, A) is always satisfied. We will then inductively remove centers with problematic points to construct S'.

Removing a center that has a problematic point. Let s ∈ S, and q = argmin_{p : dist(p,s) ≤ (10z/ε) · dist(p,A)} (dist(p, A) + dist(p, s)) as in the statement. Let p be a problematic point with respect to s and q, and S(p) the center serving p in S.
First, note that since p is problematic, it must be that dist(p, S(p)) ≤ dist(s, p): otherwise, p would satisfy dist(p, s) ≤ (10z/ε) · dist(p, A), and the minimality of q would ensure that p is not problematic. Thus, it holds that:

dist(s, S(p)) ≤ dist(s, p) + dist(p, S(p)) ≤ 2 · dist(s, p).

Now, let p' be served by s. Using the triangle inequality, we immediately get

dist(p', S(p)) ≤ dist(p', s) + dist(s, S(p)) ≤ dist(p', s) + (ε^2/(4z^2)) · (dist(q, A) + dist(q, s)).

Additionally, the invariant dist(p', S(p')) ≤ (10z/ε) · dist(p', A) remains satisfied after the reassignment. Hence, if S(p) is removed as well later on, the error for points served by s will be an ε^2/(8z^2)-fraction of the initial error. This implies that the total error does not accumulate, as we now see.

Constructing S'. To construct S', we proceed iteratively: start with S' = S and, as long as there exists a center s that has a problematic point p with respect to it, remove s and reassign the whole cluster of s to S'(p), the closest point to p in the current solution. This process must end, as there is no problematic point when there is a single center.

In Metrics with Bounded Doubling Dimension

We start by defining the doubling dimension of a metric space and stating a key lemma. Consider a metric space (X, dist). For a point p ∈ X and r ≥ 0, we let β(p, r) = {x ∈ X | dist(p, x) ≤ r} be the ball around p with radius r.

Definition 5. The doubling dimension of a metric is the smallest integer d such that any ball of radius 2r can be covered by 2^d balls of radius r.

Notably, the Euclidean space R^d has doubling dimension Θ(d). A γ-net of V is a set of points X ⊆ V such that for all v ∈ V there is an x ∈ X with dist(v, x) ≤ γ, and for all x, y ∈ X we have dist(x, y) > γ. A net is therefore a set of points not too close to each other, such that every point of the metric is close to a net point. The following lemma bounds the cardinality of a net in doubling metrics.

Lemma 19 (from Gupta et al. [GKL03]). Let (V, dist) be a metric space with doubling dimension d and diameter D, and let X be a γ-net of V. Then |X| ≤ 2^{d·⌈log_2(D/γ)⌉}.

The goal of this section is to prove the following lemma. Combined with Theorem 1, it ensures the existence of small coresets in metrics with small doubling dimension.

Lemma 20. Let M = (X, dist) be a metric space with doubling dimension d, let P ⊆ X, let k and z be positive integers, and let ε > 0. Further, let A be a c_A-approximate solution with at most k centers. There exists an A-approximate centroid set for P of size |P| · (z/ε)^{O(d)}.

A direct corollary of this lemma is the existence of a small coreset in doubling metrics, as it is enough to show the mere existence of a small centroid set to apply Corollary 2.

Corollary 4. Let M = (X, dist) be a metric space with doubling dimension d, and let k and z be two positive integers. Then there exists an ε-coreset for (k, z)-clustering on any P ⊆ X of size O(Γ · k(d + log k)), with Γ as in the bullets following Theorem 1.

Proof. If log k > d then O(log(kd)) = O(log k). If d > log k then O(kd + k log(kd)) = O(kd). Hence the claimed bound follows.

Proof of Lemma 20. For each point p ∈ P, let c be the center to which p is assigned in A. Let β(p, (8z/ε) · dist(p, c)) be the metric ball centered at p with radius (8z/ε) · dist(p, c), and let N_p be an (ε/(4z)) · dist(p, A)-net of that ball. Due to Lemma 19, N_p has size (z/ε)^{O(d)}. Additionally, let s_f be a point not in any β(p, (10z/ε) · dist(p, A)), if such a point exists. Let N := {s_f} ∪ ∪_{p ∈ P} N_p. We claim that N is the desired approximate centroid set.
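Before proving the claim, here is a minimal sketch of the net construction just described (our own illustration; input format and names are ours). It builds a γ-net greedily, by keeping any point farther than γ from all points kept so far; Lemma 19 bounds the size of the result in doubling metrics:

```python
def gamma_net(points, gamma, dist):
    """Greedy gamma-net: kept points are pairwise > gamma apart, and every
    input point is within gamma of some kept point."""
    net = []
    for p in points:
        if all(dist(p, q) > gamma for q in net):
            net.append(p)
    return net

def centroid_candidates(P, dist_to_A, z, eps, dist):
    """Candidate centers as in Lemma 20: for each client p, a net of scale
    (eps/4z) * dist(p, A) over the ball of radius (8z/eps) * dist(p, A)."""
    C = []
    for p in P:
        radius = (8 * z / eps) * dist_to_A[p]
        ball = [q for q in P if dist(p, q) <= radius]  # finite proxy for the ball
        C.extend(gamma_net(ball, (eps / (4 * z)) * dist_to_A[p], dist))
    return C
```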
For a candidate solution S, first apply Lemma 18, so that we can assume that for any center s ∈ S, with q = argmin_{p : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, A) + dist(p, s)), there is no problematic point with respect to q and s. Let S̃ be the solution obtained by replacing every center s ∈ S by s̃ ∈ C as follows: let q = argmin_{p : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, A) + dist(p, s)), and pick s̃ to be the closest point to s in N_q. If such a q does not exist, pick s̃ = s_f.

Now, let p be a point such that cost(p, S) ≤ (8z/ε)^z · cost(p, A), let s be any center in S, and let q be defined as previously. Then, by construction of S̃, there is a center s̃ with dist(s, s̃) ≤ (ε/(4z)) · dist(q, A), and therefore, using that p is not problematic:

cost(p, S̃) ≤ (1 + ε) · cost(p, S) + ε · cost(p, A).

To show the other direction, for any point of S̃ there is a center s with dist(s, s̃) ≤ (ε/(4z)) · dist(q, A). Hence the previous equations apply as well, and we can conclude: for a point p such that cost(p, S) ≤ (8z/ε)^z · cost(p, A),

|cost(p, S) − cost(p, S̃)| ≤ ε · (cost(p, S) + cost(p, A)).

Rescaling ε concludes the lemma: there is an A-approximate centroid set with size |P| · (z/ε)^{O(d)}.

Graphs with Bounded Treewidth

In this section, we show that for graphs with treewidth t, there exists a small approximate centroid set. Hence, the main framework provides an algorithm computing a small coreset. We first define the treewidth of a graph:

Definition 6. A tree decomposition of a graph G = (V, E) is a tree T where each node b (called a bag) is a subset of V and the following conditions hold:
• the union of the bags is V,
• for all v ∈ V, the nodes containing v in T form a connected subtree of T, and
• for every edge (u, v) ∈ E, there is one bag containing both u and v.
The treewidth of a graph G is the smallest integer t such that there exists a tree decomposition with maximum bag size t + 1.

Lemma 21. Let G = (V, E) be a graph with treewidth t, X ⊆ V and k, z > 0. Furthermore, let A be a solution to (k, z)-clustering for X. Then, there exists an A-approximate centroid set for X of size poly(|X|) · (z/ε)^{O(t)}.

Applying this lemma with X yields the direct corollary:

Corollary 5. Let G = (V, E) be a graph with treewidth t, X ⊆ V, and k, z > 0. There exists an algorithm running in time Õ(nk) that constructs an ε-coreset for (k, z)-clustering on X, with the size given by Theorem 1.

Proof. Let X ⊆ V. We start by computing a (k, ε)-coreset X_1 of size O(poly(k, 1/ε, t)), using the algorithm from [BBH+20]. We now apply our framework to X_1. Computing an approximation on X_1 takes time Õ(|X_1| · k), using the algorithm from Mettu and Plaxton [MP04]. Lemma 21 ensures the existence of an approximate centroid set for X_1 with size poly(|X_1|) · (z/ε)^{O(t)}. Using that |X_1| = O(poly(k, 1/ε, t)) yields a coreset of the claimed size.

Instead of using [BBH+20], one could apply our algorithm repeatedly, as in Theorem 3.1 of [BJKW21], to iteratively reduce the number of distinct points considered and eventually obtain the same coreset size. The number of repetitions needed to achieve that size bound is O(log* n), where log*(x) is the number of times log must be applied to x before the result is at most 1; formally, log*(x) = 0 for x ≤ 1, and log*(x) = 1 + log*(log x) for x > 1. The complexity of this repeated procedure is therefore Õ(nk), and the success probability is 1 − π, as proven in [BJKW21].

For the proof of Lemma 21, we rely on the following structural lemma.

Lemma 22 (Lemma 3.7 of [BBH+20]). Given a graph G = (V, E) of treewidth t, and X ⊆ V, there exists a collection T of subsets of V, with |T| = poly(|X|), such that each part A ∈ T contains O(1) points of X and is separated from V \ A by a set P_A of at most t vertices.

Our construction relies on the following simple observation.
Let s be a possible center, and let p be a vertex such that cost(p, s) ≤ (4z/ε)^z · cost(p, A). Let A ∈ T be such that p ∈ A. Then, either s ∈ A, or the path connecting p to s has to go through P_A.

We use this observation as follows: it would be enough to replace a center s from solution S by one that has approximately the same distance to all points of P_A. The main question is: how should we round the distances to P_A? The goal is to classify the potential centers into few classes, such that taking one representative per class gives an approximate centroid set. The previous observation indicates that classifying the centers according to their distances to points of P_A is enough. However, this yields too many different classes: instead, we round those distances. Ideally, this rounding would ensure that for any point p and any center s, all centers in s's class have the same distance to p, up to an additive error ε · (cost(p, s) + cost(p, A)). This would mean rounding the distance from s to any point in P_A by that amount: for instance, rounding to the closest multiple of ε · (cost(p, s) + cost(p, A)). Nonetheless, this way of rounding depends on each point p: a rounding according to p may not be suited for another point q. To cope with that, we quite naturally round distances according to the point p that minimizes cost(p, s) + cost(p, A). Additionally, to ensure that the number of classes stays bounded, it is not enough to round to the closest multiple of ε · (cost(p, s) + cost(p, A)): we also show that distances bigger than (1/ε) · (cost(p, s) + cost(p, A)) can be trimmed down to (1/ε) · (cost(p, s) + cost(p, A)). That way, for each point of P_A there are only 1/ε^2 many possible rounded distances. Hence, a class is defined by a certain point p, a part A, and |P_A| = t many rounded distances: in total, that makes poly(|X|) · ε^{−O(t)} many classes. The approximate centroid set contains one representative of each class: this proves Lemma 21. We now make the argument formal, in particular to show that the error incurred by the trimming is affordable.

Proof of Lemma 21. Given a point s ∈ V and a set A ∈ T, we call the distance tuple of s to A

d_A(s) := (dist(s, x) : x ∈ X ∩ A) ∪ (dist(s, y) : y ∈ P_A).

Let q ∈ X: the rounded distance tuple of s with respect to q is d̃_{A,q}(s), defined as follows:
1. For x ∈ X ∩ A, d̃(s, x) is the multiple of ε · dist(x, A) closest to dist(s, x) and smaller than (1/ε) · dist(x, A).
2. For y ∈ P_A, d̃(s, y) is the multiple of ε^3/(8z^3) · dist(q, A) smaller than (200z^3/ε^3) · dist(q, A) closest to dist(s, y).

Now, for every A ∈ T, every q ∈ X and every rounded distance tuple T to A with respect to q such that ∃s : T = d̃_{A,q}(s), C contains one point s ∈ A having that rounded distance tuple.

Bounding the size of C. Fix some A ∈ T and q ∈ X. A rounded distance tuple to A is made of O(t) many distances. Each of them takes its value among poly(z/ε) possible numbers, due to the rounding. Hence, there are at most (z/ε)^{O(t)} possible rounded distance tuples to A, and so at most that many points in C. Since there are poly(|X|) different choices for A and q, the total size of C is poly(|X|) · (z/ε)^{O(t)}.

Bounding the error. We now bound the error induced by approximating a solution S by a solution S̃ ⊆ C. First, by applying Lemma 18, we can assume that for any center s ∈ S, with q = argmin_{p : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, A) + dist(p, s)), there is no problematic point with respect to q and s. Let A ∈ T be such that s ∈ A, and let q = argmin_{p : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, A) + dist(p, s)). The center s̃ is chosen to have the same rounded distance tuple to A with respect to q as s, and S̃ is the solution made of all such s̃, for s ∈ S.
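To make the rounding step concrete, here is a minimal Python sketch of the classification by rounded distance tuples. It is an illustration under assumptions, not the paper's code: the function names are hypothetical, `dist`, `candidates`, `boundary` (the separator P_A) and `q_dist_A` (the value dist(q, A)) are assumed inputs, and the granularity and cap follow the proof's choices of ε^3/(8z^3)·dist(q, A) and (200z^3/ε^3)·dist(q, A).

```python
def round_distance(d, granularity, cap):
    """Round d to the nearest multiple of `granularity`, first trimming
    anything above `cap` down to cap."""
    return granularity * round(min(d, cap) / granularity)

def rounded_tuple(s, boundary, q_dist_A, dist, eps, z):
    """Rounded distances from candidate center s to the separator
    vertices in `boundary`, following the proof of Lemma 21."""
    g = (eps ** 3 / (8 * z ** 3)) * q_dist_A
    cap = (200 * z ** 3 / eps ** 3) * q_dist_A
    return tuple(round_distance(dist(s, y), g, cap) for y in boundary)

def representatives(candidates, boundary, q_dist_A, dist, eps, z):
    """Keep one representative center per class of rounded tuples."""
    reps = {}
    for s in candidates:
        key = rounded_tuple(s, boundary, q_dist_A, dist, eps, z)
        reps.setdefault(key, s)
    return list(reps.values())
```

Since each coordinate of the tuple takes only cap/granularity = poly(z/ε) many values and there are at most t coordinates, the dictionary has (z/ε)^{O(t)} keys, matching the size bound in the proof.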
Planar Graphs

The goal of this section is to prove the existence of small centroid sets for planar graphs, analogously to the treewidth case. This is the following lemma:

Lemma 23. Let G = (V, E) be an edge-weighted planar graph, X ⊆ V a set, and k and z two positive integers. Furthermore, let A be a solution of (k, z)-clustering of X. Then there exists an A-approximate centroid set for X of size poly(|X|) · exp(O(z^3 ε^{−3} log(z/ε))).

As for treewidth, this lemma implies the following corollary:

Corollary 6. Let G = (V, E) be an edge-weighted planar graph, X ⊆ V a set, and k and z two positive integers. There exists an algorithm with running time Õ(nk) that constructs an ε-coreset for (k, z)-clustering on X, with the size given by Theorem 1.

The big picture is the same as for treewidth. As in the treewidth case, a planar graph can be broken into poly(|X|) pieces, each containing at most 2 vertices of X. The main difference is in the nature of the separators: while bounded-treewidth graphs admit small vertex separators, the regions in the planar decomposition are instead bounded by a small number of shortest paths. This makes the previous argument void: we cannot round distances to all vertices on the boundary of a region. We show how to bypass this, using the fact that the separators are shortest paths: it is enough to round distances to a well-chosen subset of the paths, as we will argue in the proof. Formally, the decomposition is as follows:

Lemma 24 (Lemma 4.5 of [BJKW21], see also [EKM14]). For every edge-weighted planar graph G = (V, E) and subset X ⊆ V, there exists a collection of subsets of V, Π := {V_i}, with |Π| = poly(|X|) and ⋃ V_i = V such that, for every V_i ∈ Π:
• |V_i ∩ X| = O(1), and
• there exists a collection of shortest paths P_i with |P_i| = O(1) such that removing the vertices of all paths of P_i disconnects V_i from V \ V_i.

As for treewidth, we proceed as follows: given the decomposition of Lemma 24, for any center s ∈ V_i, we identify a point q and round the distances from s to P_i according to dist(q, A). C contains one point s̃ with the same rounded distances as s, and we will argue that s̃ can replace s. As mentioned, we cannot round distances to the whole shortest paths of P_i. Instead, we show that it is enough to round distances from s to points on the boundary of V_i that are close to q: since the boundary consists of shortest paths, it is possible to discretize that set.

Proof of Lemma 23. Let Π = {V_i} be the decomposition given by Lemma 24. For any V_i and any q ∈ X, we define a set of landmarks L_{i,q} as follows: for any P ∈ P_i, let L_{i,q,P} be an (ε/z) · dist(q, A)-net of P ∩ B(q, (90z^2/ε^2) · dist(q, A)). Note that since P is a shortest path, the total length of P ∩ B(q, (90z^2/ε^2) · dist(q, A)) is at most (180z^2/ε^2) · dist(q, A), and so the net has size at most 180z^3/ε^3. We define L_{i,q} := ⋃_{P ∈ P_i} L_{i,q,P}.

Rounding the distances to L_{i,q}. We now describe how we round distances to landmarks, and define C such that for each possible distance tuple, C contains a point having that distance tuple. Formally, given a point s ∈ V_i and a point q ∈ X, the distance tuple d_q(s) of s is defined as

d_q(s) = (dist(s, x) : x ∈ X ∩ V_i) ∪ (dist(s, y) : y ∈ L_{i,q}, ∀i).

The rounded distance tuple d̃_q(s) of s is defined as follows:
• For y ∈ L_{i,q}, d̃(s, y) is the multiple of (ε/z) · dist(q, A) smaller than (90z^2/ε^2) · dist(q, A) closest to dist(s, y).
• For x ∈ X ∩ V_i, d̃(s, x) is the multiple of ε · dist(x, A) closest to dist(s, x) and smaller than (1/ε) · dist(x, A).

The set C is constructed as follows: for every V_i and every q, and for every rounded distance tuple {d̃_q(p)}, add to C a point that realizes this rounded distance tuple (if such a point exists).
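The discretization of a separator path is one-dimensional: since the path is a shortest path, its portion inside a ball of radius R has length at most 2R, so an η-net along it has roughly 2R/η points. The following Python sketch illustrates this; `path_landmarks`, `edge_len` and `dist` are hypothetical names assumed for this example, and the path is given as a list of consecutive vertices.

```python
def path_landmarks(path, q, R, eta, dist, edge_len):
    """eta-net of the portion of a shortest path `path` lying in the
    ball B(q, R): keep a vertex when it is in the ball and the walked
    distance along the path since the last kept vertex is >= eta."""
    landmarks, since_last = [], float("inf")
    for i, u in enumerate(path):
        if dist(q, u) <= R and since_last >= eta:
            landmarks.append(u)
            since_last = 0.0
        if i + 1 < len(path):
            since_last += edge_len(u, path[i + 1])
    return landmarks
```

With R = (90z^2/ε^2)·dist(q, A) and η = (ε/z)·dist(q, A), this keeps at most about 180z^3/ε^3 landmarks, matching the bound in the proof.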
It remains to show both that C has size poly(|X|) · exp(O(z^3 ε^{−3} log(z/ε))), and that C contains a good approximation of each center of any given solution. Hence, the total size of C is at most poly(|X|) · exp(O(z^3 ε^{−3} log(z/ε))).

Error analysis. We now show that for any solution S, every center can be approximated by a point of C. First, by applying Lemma 18, we can assume that for any center s ∈ S, with q = argmin_{p : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, A) + dist(p, s)), there is no problematic point with respect to q and s.

Consider some cluster of S, with center s. As in Lemmas 20 and 21, we aim at showing how to find s̃ ∈ C such that, for every p ∈ X in that cluster with dist(p, S) ≤ (10z/ε) · dist(p, A), we have

|cost(p, s) − cost(p, s̃)| ≤ 3ε · (cost(p, s) + cost(p, A)).

For this, let V_i be a part of Π containing s, and let P_i be the paths given by Lemma 24. We let q := argmin_{p ∈ X : dist(p,s) ≤ (10z/ε)·dist(p,A)} (dist(p, s) + dist(p, A)). We define s̃ to be the point of C that has the same rounded distance tuple to L_{i,q} as s. Let S̃ be the solution constructed from S in this way. We now show that S̃ has the required properties.

First, if p ∉ V_i, then we show how to use the fact that s and s̃ have the same rounded distances to L_{i,q}.
• If dist(p, s) > (21z^2/ε^2) · dist(q, A), we argue that dist(s, s̃) is negligible. The argument is exactly like the one from Lemma 21; we repeat it for completeness.

Finally, in the case where p ∈ V_i, we get either |dist(p, s̃) − dist(p, s)| ≤ (ε/z) · dist(p, A), and we are done; or both dist(p, s̃) and dist(p, s) are bigger than (8z/ε) · dist(p, A). We can now conclude, exactly as in the treewidth case: in all possible cases, it holds that either dist(p, s) and dist(p, s̃) are both bigger than (10z/ε) · dist(p, A), or:

cost(p, s̃) ≤ (1 + ε) · cost(p, s) + ε · cost(p, A). (31)

To extend that result to the full solutions S and S̃ instead of a particular center, we note that since p is interesting, dist(p, S) ≤ (8z/ε) · dist(p, A). Hence, we can apply Eq. (31) with s being the closest point to p in S:

cost(p, S̃) ≤ (1 + ε) · cost(p, S) + ε · cost(p, A).

In particular, this implies that dist(p, s̃) ≤ (10z/ε) · dist(p, A). Choose now s̃ to be the closest point to p in S̃, and s its corresponding center in S. Using Eq. (31) therefore gives the converse bound. Rescaling ε and combining the two inequalities concludes.

Minor-Excluded Graphs

A graph H is a minor of a graph G if it can be obtained from G by deleting edges and vertices and contracting edges. We are interested here in families of graphs excluding a fixed minor H, i.e., none of the graphs in the family contains H as a minor. The graphs are weighted: we assume that for each edge, its weight is equal to the shortest-path distance between its two endpoints. The goal of this section is to prove the following lemma, analogous to Lemma 23.

Lemma 25. Let G = (V, E) be an edge-weighted graph that excludes a minor of fixed size, X ⊆ V a set, and k and z two positive integers. Furthermore, let A be a solution of (k, z)-clustering of X.

As for the bounded treewidth and planar cases, this lemma implies the following corollary:

Corollary 7. Let G = (V, E) be an edge-weighted graph that excludes a fixed minor, and let k and z be two positive integers. There exists an algorithm with running time Õ(nk) that constructs an ε-coreset for (k, z)-clustering on V with size

O( log^5(1/ε) · 2^{O(z log z)} / min(ε^2, ε^z) · (k log^2 k · log(1/ε) + k log k / ε^4 + log(1/π)) ).

The big picture is the same as for planar graphs. Minor-free graphs have somewhat nice separators that we can use to select centers. However, those separators are not shortest paths in the original graph, as described in the next structural lemma.
Lemma 26 (Lemma 4.12 in [BJKW21], from Theorem 1 in [AG06]). For every edge-weighted graph G = (V, E) excluding some fixed minor, and subset X ⊆ V, there exists a collection of subsets of V, Υ := {Π_i}, with |Υ| = poly(|X|) and ⋃ Π_i = V such that, for every Π_i ∈ Υ:
• |Π_i ∩ X| = O(1), and
• there exist groups of paths {P^i_j} with |⋃_j P^i_j| = O(log |X|) such that removing the vertices of all these paths disconnects Π_i from V \ Π_i, and such that the paths in P^i_j are shortest paths in the graph G^i_j := G \ ⋃_{j′ < j} P^i_{j′}.

The general sketch of the proof is as follows: we consider the boundary B of a region Π_i, and enumerate all possible tuples of distances from a point inside the region to the boundary. For each tuple, we include in C a point realizing it. Of course, this would lead to a set C that is way too big: the boundary of each region consists of too many points, and there are too many possible distances. For that, we show how to discretize the boundary, and how to round distances from a point to the boundary.

Discretizing the boundary is not as easy as in the planar case, as the separating paths are not shortest paths in the original graph G. A separating path P ∈ P^i_j, however, is a shortest path in the graph G^i_j := G \ ⋃_{j′ < j} P^i_{j′}. As in the planar case, we therefore start from the point q closest to s in the graph G^i_j. Note that here we cannot infer much about the distances in the original graph G: for this reason, we are not able to apply Lemma 18, and we need to present an entirely different argument.

We will assume that we know D = dist_j(q, s), where dist_j is the distance in the graph G^i_j. In that case, we can simply take an εD-net of P ∩ B_j(q, D), where B_j(q, D) is the ball centered at q and of radius D in G^i_j. This net has size O(1/ε^2), as P is a shortest path in G^i_j. Then, if s̃ has the same distances to this net as s, we are able to show, as in the previous cases, that for any point p separated from s by P, dist(p, s̃) ≈ dist(p, s); and for any point p separated from s̃ by P, dist(p, s) ≈ dist(p, s̃).

To estimate dist_i(q, s), we proceed as follows: either dist_i(q, s) ≈ dist_i(q, q_2) for some q_2 ∈ X, or not. In the first case, we can pick such a q_2. In the second case, we will need to ensure that when p is such that dist_i(p, q) ≫ dist_i(q, s), then s̃ stays close to q. When p is such that dist_i(q, p) ≪ dist_i(q, s), then p and q are essentially located at the same spot, and we ensure that s̃ stays far from q.

Construction of the centroid set. From Lemma 26, we have a decomposition into regions Υ = {Π_i}. In this argument, we fix a region Π_j ∈ Υ. Π_j is bounded by O(log |X|) paths P_1, ..., P_m, and each P_i is a shortest path in some graph G_i, a subgraph of G: if P_i ∈ P^j_l, then G_i := G^j_l. We change the indexing for simplicity, and let Π = Π_j. We let dist_i denote the distances in the graph G_i.

We consider two ways of rounding the distances. The first starts from a point q_1 ∈ X, and is useful when there is q_2 ∈ X such that ε · dist_i(q_1, s) ≤ dist_i(q_1, q_2) ≤ (1/ε) · dist_i(q_1, s). Along each path, we designate portals as follows. Consider a path P_i. For any pair of vertices q_1, q_2 ∈ X, let D = dist_i(q_1, q_2) + dist(q_2, A), and let N_{i,q_1,q_2} be an ε^2·D-net of P_i ∩ B_i(q_1, D/ε^2), where B_i(q, D/ε^2) is the ball centered at q and of radius D/ε^2 in G_i. For each possible q_1, q_2 and any point s ∈ Π, we consider the following distance tuple:

(dist_i(s, n) : n ∈ N_{i,q_1,q_2}) ∪ (dist_i(s, q_1)) ∪ (dist_i(x, s) : x ∈ Π ∩ X).
We define the rounded distance tuple d̃_1 as follows:
• d̃_1(s, q_1) is the multiple of εD closest to dist_i(s, q_1) and smaller than 3D/ε;
• for any n ∈ N_{i,q_1,q_2}, d̃_1(s, n) is the multiple of εD closest to dist_i(s, n) and smaller than 3D/ε;
• for any x ∈ Π ∩ X, d̃_1(x, s) is the multiple of ε · dist(x, A) closest to dist_i(x, s) and smaller than (1/ε) · dist(x, A).

We also consider another rounding, which will be helpful when, for all points q ∈ X, dist(q, A) + dist_i(q, q_1) ∉ [ε · dist_i(q_1, s), (1/ε) · dist_i(q_1, s)]. For any q_1, q_3, and q_4 in X, d̃_2(q_1, q_3, q_4) = ⊤ when (1/ε) · (dist_i(q_1, q_4) + dist(q_4, A)) < dist_i(q_1, s) < ε · (dist_i(q_1, q_3) + dist(q_3, A)), and d̃_2(q_1, q_3, q_4) = ⊥ otherwise. q_3 or q_4 may be unspecified; in that case, the corresponding part of the inequality is dropped.

To construct C, we proceed as follows: for any region Π ∈ Υ given by Lemma 26, and for any path P_i in the boundary of Π, select a rounding d̃_1 or d̃_2. If there is any, pick one point s achieving all those rounded distances, and add s to C. We will show Lemma 25 using this centroid set. For that, we break the proof into two parts: first, the size of C is the desired one; then, C is indeed an approximate centroid set.

Proof. Fix a region Π, a path P_i on Π's boundary, and points q_1, q_2. There are O(1/ε^4) points in the net N_{i,q_1,q_2}, and O(1) in Π ∩ X. For each of those points, there are at most 3/ε^4 many choices of distances. For a fixed region Π, path P_i on Π's boundary, and points q_1, q_3, q_4, there are only 2 possible different values of d̃_2(q_1, q_3, q_4). Now, there are poly(|X|) many regions Π, and for each of them O(log |X|) many paths P_i. For each path, there are at most |X|^3 choices of the q_j's, so in total |X|^{O(log |X|)} possible choices. Each choice gives rise to O(log |X|) · O(1/ε^4) many net points, each having at most 3/ε^4 many choices of distances.

Lemma 28 gives exactly the same guarantee as Eq. (30): hence, as in the proof for treewidth, we can conclude from that inequality that for any solution S and any interesting point p, |cost(p, S) − cost(p, S̃)| ≤ ε · (cost(p, S) + cost(p, A)). Combining the guarantees from Lemma 28 and Lemma 27 concludes the proof of Lemma 25.

A Note on Euclidean Spaces

Lastly, we briefly want to survey the state-of-the-art results for eliminating the dependency on the dimension in Euclidean spaces. In a nutshell, the frameworks by both Feldman and Langberg [FL11] and us only yield coresets of size O(kd · poly(log k, ε^{−1})). To eliminate the dependency on the dimension, one typically has to use some form of dimension reduction. In a landmark paper, [FSS20] showed that one can replace the dependency on d with a dependency on k/ε^2 for the k-means problem; see also [CEM+15] for further improvements on this idea. Subsequently, Sohler and Woodruff [SW18] gave a construction for arbitrary k-clustering objectives which led to the first existence proof of dimension-independent coresets for these problems. Unfortunately, there were a few caveats, most notably a running time exponential in k. Huang and Vishnoi [HV20] showed that the mere existence of the Sohler-Woodruff construction was enough to compute coresets of size poly(k/ε). Recently, the Sohler-Woodruff result was made constructive in the work of Feng, Kacham and Woodruff [FKW19]. Having obtained a poly(k/ε)-sized coreset, one can now use a terminal embedding to replace the dependency on d by a dependency on ε^{−2} log(k/ε). Terminal embeddings are defined as follows:

Definition 7 (Terminal Embeddings). Let ε ∈ (0, 1) and let A ⊂ R^d be an arbitrary set of size n > 1.
Define the Euclidean norm of a d-dimensional vector x as ‖x‖ = (Σ_{i=1}^d x_i^2)^{1/2}. Then a mapping f : R^d → R^m is a terminal embedding if

∀x ∈ A, ∀y ∈ R^d: (1 − ε) · ‖x − y‖ ≤ ‖f(x) − f(y)‖ ≤ (1 + ε) · ‖x − y‖.

Terminal embeddings were studied by [EFN17, MMMR18, NN19], with Narayanan and Nelson [NN19] achieving an optimal target dimension of O(ε^{−2} log n), where n is the number of points (see the paper by Larsen and Nelson [LN17] for a matching lower bound). It was first observed by Becchetti et al. [BBC+19] how terminal embeddings can be combined with the Feldman-Langberg [FL11] (or indeed our) framework. Specifically, given the existence of a poly(k/ε)-sized coreset, applying a terminal embedding with n being the number of distinct points in the coreset now allows us to further reduce the dimension. At the time, the only problem with such a coreset bound was k-means. The generalization to arbitrary k-clustering objectives is now immediate following the results by Huang and Vishnoi [HV20] and Feng et al. [FKW19]. It should be noted that more conventional Johnson-Lindenstrauss-type embeddings proposed in [BBC+19, CEM+15, MMR19] do not (obviously) imply the same guarantee as terminal embeddings. We appended a short proof showing that terminal embeddings are sufficient at the end of this section: combining Equations 32 and 33,

| Σ_{p∈A} w_p · cost(p, S) − Σ_{q∈P} w_q · cost(q, S) |
≤ ε · Σ_{p∈A} w_p · cost(f(p), f(S)) + ε · Σ_{q∈P} w_q · cost(f(q), f(S)) + | Σ_{p∈A} w_p · cost(f(p), f(S)) − Σ_{q∈P} w_q · cost(f(q), f(S)) |
≤ 2ε · Σ_{p∈A} w_p · cost(f(p), f(S)) + ε · Σ_{q∈P} w_q · cost(f(q), f(S))
≤ (3 + ε) · ε · …

In this section, we show how to trade a factor ε^{−z} for a factor k in the coreset size.

Lemma 29. Let (X, dist) be a metric space, P a set of points, k and z two positive integers, and A a set of O(k) centers such that, for each cluster with center c induced by A, all points of the cluster are at distance between (ε/z)^2 · Δ_C and (z/ε)^2 · Δ_C from c, for some Δ_C. Suppose there exists an A-approximate centroid set C for P.

Suppose we initially computed a set of k centers A. Our aim is to define a sampling distribution that approximates the cost of any solution S with high probability. While the basic idea is related to importance sampling (i.e., sampling proportionately to cost(p, A)), we add a few modifications that are crucial. Compared to the framework described in the main body, we slightly change the definition of a ring. The algorithm is as follows: from every ring R_{i,j}, sample δ points uniformly at random (if |R_{i,j}| ≤ δ, simply add the whole of R_{i,j}).

Next, we consider the interesting cases. The main observation here is that there are only O(log 1/ε) many rings per cluster, hence a coarser estimation using Bernstein's inequality is actually sufficient to bound the cost.

Proof. We start by bounding |R_{i,j}| · (ε · 2^j)^z in terms of cost(R_{i,j}, S) + cost(R_{i,j}, A). Furthermore, by the same reasoning and again using Equation 34, we have the upper bound M on the (weighted) cost in S of every sampled point in every ring:

M ≤ (ε · 2^{j+1})^z · |R_{i,j}| ≤ (cost(R_{i,j}, S) + cost(R_{i,j}, A)) · 8^z.

Applying Bernstein's inequality and Equations 35 and 36, we now have
Development and Validation of School Crisis Prevention/Preparedness and Management (SCPP&M) Scale

This research study aimed to develop a valid and reliable school crisis prevention/preparedness and management scale in the context of managing educational crises. A mixed-methods research approach was utilized to accomplish the study's goals. As part of the background to this study, the educational context of Punjab Province is considered. Situational crisis communication theory (SCCT), which has a solid basis with regard to its applications and qualities, was consistent in providing the rationale for this study. A five-point Likert scale was used for this purpose. The scale therefore explored the perceptions of 278 sampled primary school teachers of Punjab, responsible for the management of educational crises, against the School Crisis Prevention/Preparedness and Management (SCPP&M) Scale. There were seven major elements (Crisis Identification, Challenge, Communication, Reduction, Reconstruction, Sustainability and Evaluation), and each construct was measured through exploratory factor analysis. The results of the scale development comprised four factors (Prevention, Preparedness, Response and Recovery). This study is based on the claim that it is one of the pioneering efforts in developing a school crisis management strategy at all educational levels.

Introduction

Education is essential for all children, yet it is particularly urgent for the many children affected by crises, be they man-made or natural disasters. However, for children affected by crises and emergencies, their right to education remains an unfulfilled promise. Besides the natural disasters occurring in Pakistan, there are also conflict-affected zones where schools, teachers and students face disruption of their education. The education system exists within the realities of its socio-political environment. Improvements in the sector cannot be sustained (or even initiated) without triggering a change in the perspective of the key stakeholders.

The Quran stresses the importance of reading, studying, reflecting and investigating, and this is a commandment prescribed to all Muslims. All Muslims, both male and female, have the right to seek knowledge because it is a sacred duty: "Read! In the name of your Lord who created (all that exists)." From an Islamic perspective, the purpose of this Surah is to reflect on the importance of education in the development process. The first word, "Iqra", was revealed to the Prophet Muhammad (Peace Be Upon Him) from Allah Taala (SWT). The word means: read, educate yourselves, seek knowledge and be educated. According to the Prophet Muhammad (PBUH): "Seeking knowledge is mandatory for every Muslim."

Mitroff and Anagnos (2002) suggest that "a crisis is an event that affects or has the potential to affect the whole of an organization." If something affects only a small and isolated part of an institution, it may not be a notable crisis for the organization. Some disasters would create a crisis in any school because their impact would overwhelm even the most proficient staff. However, other incidents may never occur at all in light of practical prevention.
According to Herman (2015), a school crisis is a temporary event or condition that affects a school, causing people to experience fear, helplessness, shock and horror; a school crisis requires extraordinary actions to restore a sense of psychological and physical security. The source of a crisis need not be school-based; outside events and conditions can likewise create a crisis for a school. This definition includes the term condition to highlight the possibility that a crisis may extend over time (such as in the case of unresolved, repeated bomb threats or a natural disaster with long-term effects).

A crisis is a hazard characterized by time-related stress triggered by an event. The triggering events can be varied, including a bomb hoax, a student abduction, a gas leak from a neighboring property, or a child-abuse media allegation against a staff member. Being poorly prepared for, or mishandling, the stress of the event can sometimes be more damaging than the underlying event itself. Crisis management involves responding expertly to this time-critical stress in a way that addresses the essential needs of the moment while calming, rather than intensifying, the stress experienced by the various participants (Farmer, 2018). Crisis management is greatly aided by preparedness that ensures the required resources are readily available, for instance student records and parental contact details being accessible off-site. Responding to a crisis requires school leaders to act immediately, but with the support of careful planning. School leaders' skills and confidence are greatly enhanced by crisis simulation and testing. The objective of organizational crisis management is to make timely decisions based on the best available facts and clear reasoning when working under extraordinary conditions (Pearson, 2002). If one has a thorough understanding of the basic fundamentals of crisis management, the impacts of all crises can be reduced.

Fagerli and Bjorn (2003) assert that in the discourse of crisis management, terms such as identifying, analyzing, sensing, diagnosing and assessing abound in the various descriptions. Successful execution of these activities enables organizations perhaps not to avoid crises, but certainly to be proactive, in that they can prepare for and possibly anticipate them. They further contend that what organizations need to emphasize is perhaps not crisis management but rather crisis planning.

The aim of this study is to develop a school crisis prevention/preparedness and management scale for educational organizations, especially for the Punjab Government's primary schools, to withstand these educational crises. It is important to undertake research that addresses the questions of what really needs to be re-examined regarding the emerging requirements and challenges of the phenomenon.

Theoretical Framework

The framework of the study rests on the theoretical underpinning of situational crisis communication theory (SCCT). This theory was formalized by Coombs in 2007, a professor of communication studies at Eastern Illinois University, where he teaches crisis management, corporate communication and public relations. His situational crisis communication theory (SCCT) is a theory-based and empirically tested method for selecting crisis response strategies.
Coombs introduced SCCT in 1995 as a symbolic approach to crisis communication; over the following 13 years he tested and refined it, shaping it into an increasingly coherent theory. Figure 1 shows the theoretical framework of the study, which is based on SCCT. The theory comprises three core elements: (1) the crisis situation, (2) crisis response strategies, and (3) a system for matching crisis situations with crisis response strategies (Coombs, 2006). The premise is that the effectiveness of communication strategies depends on the characteristics of the crisis situation. By understanding the crisis situation, a crisis manager can choose the most appropriate response. SCCT is an attempt to understand, to explain, and to provide prescriptive actions for crisis communication (Heath & Coombs, 2006). In this framework, the three elements together comprise seven factors: the crisis situation and the matching process consist of two factors, whereas the crisis response strategies consist of five factors. All factors contributed to scale development and to the formulation of a school crisis management strategy.

Research Methodology

A questionnaire was used as the data collection technique since it could be administered to a larger sample. Drawing on the literature (Kerr, 2019; Hajer, Thayaparan & Kulatunga, 2016; Liou, 2014; McCarty, 2012; Thompson, 2012), the researcher's own experience, and the concepts noted in the theoretical framework, the questionnaire was designed to explore the perceptions of school stakeholders regarding school crisis management. The questionnaire contained a set of scales to explore perceptions of seven dimensions/factors of school crisis prevention/preparedness and management; this part of the questionnaire measured every dimension on a five-point Likert scale. These seven dimensions of crisis management were derived from the theoretical framework. The data were analyzed and interpreted in the ways specified in the data analysis section.

Development of School Crisis Prevention/Preparedness & Management Scale (SCPP&M)

In the present study, the scale therefore explored the perceptions of school stakeholders responsible for the management of educational crises against the School Crisis Prevention/Preparedness and Management (SCPP&M) Scale. There were already seven major elements (Crisis Identification, Challenge, Communication, Reduction, Reconstruction, Sustainability and Evaluation) given by Coombs (2007) in his theory, and each element further comprised six to eight factors. In Table 1, six factors were given for each of the elements Crisis Identification, Challenge, Communication, Reduction, Reconstruction and Sustainability, and eight factors for Evaluation. Overall, therefore, SCCT contained seven elements or dimensions, which together comprised 44 sub-factors. For the development of an appropriate school crisis prevention/preparedness and management scale (SCPP&M), after a critical review of the theory, a pool of 120 items was generated covering each element of the theory. Items were carefully generated after a thorough review of SCCT. A second draft of the scale (SCPP&M) was then articulated, consisting of seven dimensions with 65 items. Finally, the third draft of the scale comprised 44 items/factors.
The response format of the SCPP&M scale was decided to be a five-point Likert scale: Strongly Agree (SA) = 5, Agree (A) = 4, Undecided (UD) = 3, Disagree (DA) = 2 and Strongly Disagree (SDA) = 1, allowing clear ratings.

Exploratory Factor Analysis (EFA)

To assess the SCPP&M scale's validity and reliability, factor analysis was implemented, on the grounds that the SCPP&M scale of this study was directly developed from the 44 sub-elements of the seven elements of SCCT. Before the factor analysis, the SCPP&M questionnaire comprised 44 sub-factors produced from the seven variables of situational crisis communication theory. Accordingly, it was decided to apply EFA to this scale because it is one of the most compelling techniques of factor analysis. In this analysis, EFA was performed just once, through SPSS. Overall, after factor analysis, the seven variables converged into four elements (Prevention, Preparedness, Response and Recovery), while the forty-four sub-components were reduced to thirty-three factors in order to improve the reliability value. Table 2 gives the final retention of items, as well as the reduction in the number of subscales, after factor analysis.

Construct Validity of the Scale

The fundamental goal of using EFA was to find out the structure of the construct and to analyze the reliability of its factors. It is a data-driven procedure. Consequently, the forty-four items of the School Crisis Prevention/Preparedness & Management scale were analyzed through EFA with data from 278 respondents. To evaluate whether the data were appropriate for factor analysis, the underlying assumptions were empirically tested, and several criteria were followed for testing sampling adequacy. The sample for performing EFA was chosen keeping in view the criteria given by Field (2009). The sample comprised 278 respondents, and its adequacy was additionally tested empirically through the KMO and Bartlett tests: KMO = .758, which is fairly sufficient to perform factor analysis. In this study, the component matrix was interpreted, as most researchers interpret the pattern matrix; it contains information about the unique contribution of a variable to a factor. The oblique (oblimin) rotation method was used because the factors were theoretically expected to be correlated. According to Field (2009), factor loadings are a gauge of the substantive importance of a given variable to a given factor. Commonly, loadings with an absolute value of more than 0.3 are taken to be important. However, the significance of a factor loading depends on the sample size: for a sample of 200, it should be greater than 0.364. This value depends on an alpha level. Tables 3.7, 3.8, 3.9 and 3.10 present the loadings of the sub-factors of the four components of the SCPP&M scale.

Internal consistency of SCPP&M scale

Factor analysis was run to understand the factorial validity of the questionnaire. In order to find out the internal consistency of the overall scale as well as of the sub-factors, reliability analysis was run, and internal consistency was investigated on the administered test on which the school crisis prevention/preparedness Likert scale (SCPP&M) was built (n = 278). The internal consistency of every item of all factors was determined, and the Cronbach's alpha values for all data-driven factors are given in Tables 3.8, 3.9, 3.10 and 3.11.
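For readers who wish to reproduce this kind of reliability check, the following is a minimal Python sketch of Cronbach's alpha. The study itself used SPSS; the function name, array shape and illustrative data below are assumptions for the example, not the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustration on fabricated 5-point Likert responses (not the study's data):
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(278, 11))  # 278 respondents, 11 items
print(round(cronbach_alpha(responses), 3))
```

Because the fabricated responses are random, the printed alpha will be near zero; correlated real item responses, such as those reported for the Prevention factor (α = .873), push the statistic toward 1.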
Results of EFA on Each Factor of the SCPP&M Scale

In this analysis, exploratory factor analysis (EFA) was conducted through the Statistical Package for the Social Sciences (SPSS) 25, as a comprehensive statistical framework of factor analysis, to examine the perceptions of school stakeholders regarding school crisis management. After the pilot study, the seven primary components of the SCPP&M scale (Crisis Identification, Challenge, Communication, Reduction, Reconstruction, Sustainability and Evaluation), with forty-four items, were chosen. Overall, the results of EFA revealed that out of forty-four items, thirty-three were retained and eleven items (2, 7, 10, 14, 18, 21, 22, 28, 30, 32, 36) were discarded, as each of these items loaded in isolation, presenting a single factor; it was therefore decided not to include them in the final body of items. Accordingly, the seven components were merged into four elements, and their names were contextualized (Prevention, Preparedness, Response and Recovery) in light of the discarded items (Table 3). Thus, a scale holding thirty-three items across the four dimensions was finalized. An assessment of the items appearing in the four factors demonstrated that they were moderately correlated with one another, while within each SCPP&M scale element they showed a strong positive relationship. A detailed record of the EFA results on each factor, for the construct validity of the SCPP&M scale, is given in the accompanying tables.

Before factor analysis, the first factor contained six sub-factors under the label of Crisis Identification (1, 2, 3, 4, 5, 6), and the maximum number of sub-factors loaded on factor 1. After the EFA on this dimension (Table 4), eleven items (1, 5, 6, 8, 11, 25, 33, 34, 35, 37, 42) were finally retained, with the exception of one item (2), which reported a high loading on factor three in isolation. This item had a loading of < .5, while the standard value for a factor loading is > .5, so it was discarded on the basis of the results and excluded from the final scale. Moreover, eight items shifted to this factor (8 and 11 from the 2nd factor, 25 from the 5th factor, 33, 34 and 35 from the 6th factor, and 37 and 42 from the 7th factor). These items typically addressed crisis identification/prevention, identification of problems, and causes/reasons of crises, ranging from students' academic and social needs to people being treated with respect; hence factor 1, Crisis Identification, was renamed Prevention, as indicated by the factor loadings of all items in this dimension. In this domain, the sub-factor loadings range from .805 to .528, the item-total correlations range from .517 to .725, the Cronbach's alpha is .873, and 19.49% of the variance is accounted for by factor 1.

Before factor analysis, the second factor contained six sub-factors under the label of Challenges (7, 8, 9, 10, 11, 12). After the EFA on this dimension (Table 5), seven items (4, 17, 20, 26, 27, 39, 44) were finally retained, with the exception of one item (7), which reported a high loading on factor seven in isolation. The item was discarded because it had a loading of < .5, while the standard value for a factor loading is > .5, so it was excluded from the final scale. Moreover, seven items shifted to this factor (4 from factor 1, 17 from factor 3, 20 from factor 4, 26 and 27 from factor 5, and 39 and 44 from factor 7).
These items typically addressed challenges/preparedness regarding awareness programs, safety actions, risk and the availability of a crisis team and chain of command; hence factor 2, Challenges, was renamed Preparedness, as per the factor loadings of all items in this dimension. In this area, the sub-factor loadings range from .664 to .551, the item-total correlations range from .576 to .717, the Cronbach's alpha is .675, and 19.79% of the variance is accounted for by factor 2.

Before factor analysis, the third factor contained six sub-factors under the label of Communication (13, 14, 15, 16, 17, 18). After the EFA on this dimension (Table 6), five items (12, 16, 19, 31, 43) were finally retained, with the exception of two items (14, 18), which reported high loadings on factor 2 and on factor 1, respectively, in isolation.

Table 4. Factor loadings and item-total correlations on Prevention and its sub-factors (FL = factor loading, I.T.C = item-total correlation, α = Cronbach's alpha).

These items were discarded on the basis of the results because they had loadings of < .5, while the standard value for a factor loading is > .5; they were therefore excluded from the final scale. Moreover, four items shifted to this factor (12 from factor 2, 19 from factor 4, 31 from factor 6, and 43 from factor 7). These items typically addressed communication/response regarding decision-making power, accountability, and a crisis networking system for parents; hence factor 3, Communication, was renamed Response, as per the factor loadings of all items in this dimension. In this domain, the sub-factor loadings range from .667 to .518, the item-total correlations range from .710 to .923, the Cronbach's alpha is .620, and 21.45% of the variance is accounted for by factor 3.

The last four elements were merged into one factor due to the low number of sub-factors in each. These items typically addressed the recovery process, regarding ongoing evaluation, effective communication, continuity of routine performance, and evaluation of the employment backgrounds of staff in order to make better plans for the future; hence factors 4, 5, 6 and 7 were merged into one factor, which was named Recovery, as indicated by the factor loadings of all items in this dimension. In this area, the sub-factor loadings range from .686 to .510, the item-total correlations range from .590 to .952, the Cronbach's alpha is .636, and 20.07% of the variance is accounted for by factor 4.

Conclusion

After factor analysis of the forty-four items of the School Crisis Prevention/Preparedness and Management (SCPP&M) scale, based on data from 278 respondents and using the oblimin rotation method, the factorial validity of the scale was established on empirical, rational and theoretical grounds. The final scale was developed with thirty-three data-driven items and four well-defined elements. These crisis management factors play an important role in the development of a crisis management strategy, and the findings support the school crisis preparedness, prevention and management scale. This SCPP&M scale, based on educational crisis elements, can be used as an effective tool, in Punjab and elsewhere, by educational stakeholders, administrators and policy makers for assessing the crisis management systems of educational institutions.
Effects of mechanical and thermal load cycling on microtensile bond strength of Clearfil SE Bond to superficial dentin

Background: Several studies have been conducted on the effects of mechanical and thermal load cycling on the microtensile bond strength (microTBS) of composites to dentin, but the results have differed. The authors therefore decided to evaluate these effects on the bonding of Clearfil SE Bond to superficial dentin.

Materials and Methods: Flat dentinal surfaces of 42 molar teeth were bonded to Filtek Z250 resin composite with Clearfil SE Bond. The teeth were randomly divided into 7 groups and exposed to different mechanical and thermal load cycling regimens. Thermocycling was performed at 5-55°C, and mechanical load cycling was applied with a force of 125 N at 0.5 Hz. The teeth were then sectioned, shaped into an hour-glass form, and subjected to microTBS testing at a speed of 0.5 mm/min. The results were statistically analyzed with three-way analysis of variance and t-tests at a significance level of P < 0.05. To evaluate the location and mode of failure, the specimens were observed under a stereomicroscope; one specimen in each group was then evaluated under scanning electron microscopy (SEM) for mode of failure.

Results: All of the study groups had a significantly lower microTBS compared to the control group (P < 0.001). There was no statistically significant difference between mechanical cycling with 50K (kilo = 1000) cycles and 50K mechanical cycles plus 1K thermal cycles. Most of the fractures in the control group were of the adhesive type, and this type of fracture increased after exposure to mechanical and thermal load cycling.

Conclusion: Thermal and mechanical load cycling had significant negative effects on microTBS, and the effects of mechanical load cycling became significant at 100K cycles.

INTRODUCTION

Light-cured resin composites have long been used in restorative dentistry to restore dental structure and to correct the color and contour of teeth. Studies on the adhesion of resin composites to teeth started with adhesion to enamel, followed by adhesion to dentin. [1] To date, seven generations of dentin-bonding agents have been introduced. The sixth generation of bonding agents (self-etch) needs fewer steps and is easy to apply. These bonding agents are based on the simultaneous use of a conditioner and primer on enamel and/or dentin, with the help of non-rinse acidic monomers. Most of these bonding agents consist of two phases: in the first stage a conditioner-primer and in the second stage an adhesive is applied to the dentinal surface. [2] The bonding agent used in the present study belongs to this category of adhesive systems.

In order to evaluate the characteristics of bonding, most researchers have performed in-vitro studies, simulating thermal and mechanical load cycling to resemble the oral environment. [3] Nowadays, microtensile bond strength (microTBS) is used to evaluate the bond strength of resin composites to dental hard tissues in the extra-oral environment. Nikaido evaluated the microTBS and the mode of fracture of resin composite restorations after application of thermal and mechanical load cycling on 24 molar teeth. In the first group, flat dentinal surfaces were prepared and, following the use of an adhesive, the crowns of the teeth were built up with resin composite. In the second group, class I cavities were prepared and then restored using two types of adhesives and resin composite.
The samples of both groups were divided into 4 subgroups and were exposed to mechanical load cycling for 0, 1K, 5K, and 10K cycles with a 50 N (Newton) load, and to thermocycling for 0, 125, 625, and 1250 cycles. The samples were then immersed in water for 1 week and subjected to microTBS testing. The location of bond failure of the resin composite in each sample was examined by SEM. The results showed that the mean microTBS in the first group was approximately 40 MPa, and mechanical and thermal load cycling did not affect the microTBS. In the second group, the mean microTBS of the control subgroup was 21 MPa and decreased significantly in the other subgroups as the number of thermal and mechanical load cycles increased. [4]

Mitsui studied the effects of mechanical and thermal load cycling on the microTBS of total-etch and self-etch adhesive systems. Class II cavities were prepared in 168 bovine incisor teeth and restored with resin composite using self-etch and total-etch dentin bonding agents. The teeth were then divided into 7 equal groups, and various thermal and mechanical load cycling regimens were applied. For the thermocycling, water baths at 5°C and 55°C with 60 s of dwell time were used, and mechanical cycling was performed with 80 N at 2 cycles/s. The samples were then sectioned and trimmed to obtain a surface of 0.81-1 mm² and tested for microTBS. The results showed that the total-etch adhesive had a significantly higher microTBS than the self-etch adhesive, and that the bond strength decreased as the number of load cycles increased; however, at 100K load cycles, there was no significant difference in bond strength compared to the control group. [5]

Xie evaluated the effect of thermocycling on the microTBS of one- and two-step self-etching adhesives. Clearfil S3 Bond (S3) and Clearfil SE Bond (SE) were applied on cervical lesions in human premolars, which were restored using Clearfil AP-X resin composite. The teeth were then sectioned into 0.7 × 0.7 mm composite-dentin beams and aged with 0, 5K, or 10K thermocycles. The beams were subsequently subjected to microTBS testing at a crosshead speed of 1 mm/min, and statistical analyses were computed. The results showed statistically significant effects on bonding effectiveness of the adhesive system, of thermocycling, and of combinations of adhesive system and thermocycling (P < 0.05). Regardless of the lesion type, the microTBS for S3 decreased significantly after 5K or 10K thermocycles, while the microTBS for SE showed a significant decrease only after 10K thermocycles. The results suggested that thermocycling had a significant negative effect on the bond strength of the two self-etching adhesives tested. [6]

Therefore, considering the results of the above-mentioned studies, the aim of the present study was to evaluate the effects of mechanical and thermal load cycling on the microTBS of a self-etch dentin bonding agent (Clearfil SE Bond) to superficial dentin, and also to observe the modes of fracture.

MATERIALS AND METHODS

Forty-two extracted sound maxillary molar teeth without caries or developmental defects were collected over a period of 1 month. The teeth were stored in normal saline at room temperature. [7] Debridement was then performed to remove the adherent periodontal tissues. For infection control, the teeth were disinfected in 0.5% chloramine-T solution for 24 h prior to the study. [8]
Then, a diamond bur (SS White, USA) in a high-speed handpiece with water spray was used to remove the enamel and expose the underlying dentin; after every five tooth preparations, a new bur was used. Before application of the dentin-bonding system, the dentin surfaces were polished with 320-grit silicon carbide abrasive paper to create a standard smear layer on each tooth surface, [9] and the teeth were then washed under tap water and the excess water was removed.

After preparation of the tooth surfaces, the adhesive system used in this study, Clearfil SE Bond (Kuraray Co., Osaka, Japan), was applied to the dentin surfaces according to the manufacturer's instructions. The bonding primer of Clearfil SE Bond (Kuraray, Japan) was applied to the prepared dentinal surface with a microbrush according to the manufacturer's instructions and left in place for 20 s before being spread with a gentle air stream. The primer was then light-cured using an Astralis light-curing unit (Vivadent, Liechtenstein) at an intensity of 500 mW/cm² for 10 s. The intensity of the light had been confirmed with a radiometer (Dentamerica, Taiwan). [7] Then, Filtek Z250 resin composite (3M Dental Products, St Paul, MN, USA) was applied to the bonded area in two layers of 1.5 mm. Each layer was irradiated separately with the Astralis curing unit on four sides for 40 s. The distance between the tip of the light source and the resin composite was kept at the minimum, and the head of the light-curing unit was held perpendicular to the surface of the composite restoration; this distance was maintained for all samples.

The teeth were then randomly divided into 7 groups (G1-G7) of 6, and the study proceeded as follows. Samples were mounted in self-curing acrylic resin (Flash Acrylic, Yates Motloid, Chicago, IL, USA) to a level 1 mm below the CEJ of every tooth; mechanical load cycling of 0, 50K, 100K, and 500K cycles was then applied across groups G1 through G7, and thermocycling of 1K cycles was applied to groups G5 through G7. For mechanical load cycling, the teeth were mounted in the mold of the load-cycling machine (Vafaii Corp., Iran). The distance between the loading tip of the machine and each tooth was adjusted, and the force was then applied. During mechanical load cycling, the teeth were immersed in normal saline. The magnitude of the force applied to the teeth in the mechanical cycles was 125 N, at a frequency of 0.5 Hz. [3] Specimens of groups G5 through G7 were thermocycled between 5°C and 55°C with transfer and dwell times of 15 s and 60 s, respectively. [5]

The teeth were then placed in the appropriate position in a mold filled with self-curing acrylic resin (Flash Acrylic, Yates Motloid, Chicago, IL, USA). A 0.3-mm diamond disk (Ham Co. Machines, Inc., Rochester, USA) was used to cut the teeth in the mesiodistal direction, parallel to the horizontal plane of the teeth, under running water, to prepare 1-mm-thick slabs. A total of 12 samples were made in each group. Using a diamond fissure bur (SS White, USA), the sectioned samples were thinned at the bonding area to create an hour-glass shape with an interface area of 0.8-1 mm². The samples were then subjected to microTBS testing with a universal testing machine (Bisco Corp., USA) at a crosshead speed of 0.5 mm/min until fracture, and the applied force was recorded. The results were analyzed by analysis of variance (ANOVA) and t-tests using the SPSS 16 software program.
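As an illustration of this analysis pipeline, the following Python sketch runs a one-way ANOVA across the seven groups and Dunnett's post hoc comparisons of each test group against the control, analogous to what the study performed in SPSS. The group means and standard deviation below are placeholders, not the study's data; `scipy.stats.dunnett` requires SciPy 1.11 or newer.

```python
import numpy as np
from scipy import stats

# Placeholder microTBS values (MPa) for groups G1..G7; 12 beams per group.
rng = np.random.default_rng(1)
means = (35.4, 30.0, 26.0, 20.0, 28.0, 22.0, 12.7)
groups = [rng.normal(m, 4.0, size=12) for m in means]

# One-way ANOVA across all seven groups
F, p = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4g}")

# Dunnett's post hoc test: each test group G2..G7 vs. the control G1
dunnett = stats.dunnett(*groups[1:], control=groups[0])
print(dunnett.pvalue)  # one adjusted p-value per test group
```

Dunnett's procedure is the natural choice here because all hypotheses share a single control group, which keeps the family-wise error rate controlled without the power loss of all-pairs corrections.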
In order to determine the mode of fracture, the samples were examined with a stereomicroscope (Zeiss Stemi SV11, Germany) at ×20 magnification, and one sample in each group was evaluated by SEM (Philips XL20, Netherlands).

RESULTS

The results showed that the highest mean microTBS was in G1 (35.4 MPa), while the lowest was in G7 (12.71 MPa). The difference between all groups was significant (P < 0.001) [Table 1]. The difference between the test and control groups according to Dunnett's post hoc test was statistically significant (P < 0.001). In the present study, with an increase in the number of mechanical load cycles, with and without thermocycling, microTBS values decreased significantly [Table 2]. The results of the two-way analysis of variance showed that both mechanical and thermal load cycling had an effect on the microTBS, and that there was also an interaction between the effects of mechanical and thermal load cycling [Table 3].

In the present study, all of the fracture sites were studied with a stereomicroscope at ×20 magnification. The evaluation of fracture modes showed that the largest share of fractures was of the adhesive type (64.28%) and the smallest was of the mixed type (9.52%) [Table 4]. Moreover, the type of fracture in one sample of each group was studied by SEM.

DISCUSSION

The present study was conducted with the aim of studying the effects of mechanical and thermal load cycling on the microTBS of Clearfil SE Bond to superficial dentin. The results determined that simultaneous application of mechanical and thermal cycling leads to a decrease in microTBS, which is consistent with the studies by Bedran de Castro et al., [10,11] Toledano et al., [12] Mitsui et al., [5] Abdalla et al., [3] and Kasraei and Khamverdi. [8] The results of the study were not consistent with those of Nikaido et al., [4] who showed that after 50K mechanical load cycles and 2K thermal cycles, there was no significant difference in microTBS values between the study and control groups. This may be explained by the fact that the pressure applied to the resin composite in their study might have been eccentric. Also, the resin composite could have acted as a shock absorber, distributing the force during mechanical loading. Other factors, including the type of tooth, the adhesive agent, the time elapsed since extraction, environmental circumstances, and the intensity and direction of the force applied to the samples, could have played a role in the outcomes of that study.

In the present study, the effect of 1K thermal cycles became significant at mechanical cycles of more than 50K, whereas in the study by Nakata et al., [13] the effect of 1K thermal cycles was not significantly different from that of the control group. The results of most clinical studies are consistent with most in-vitro studies, but due to certain limitations, it is not possible to fully simulate the oral environment in a laboratory. Therefore, many studies have used methods like mechanical load cycling and thermocycling in order to achieve conditions similar to the oral environment. [14] In the present study, simultaneous application of mechanical and thermal load cycling was likewise used to mimic chewing conditions. Thermocycling is a common method to simulate the oral environment in the laboratory. On the basis of the International Organization for Standardization (ISO) TR 11450 standard, 500 cycles must be carried out for thermocycling. [15]
[15] Based on a review article, 1K thermal cycles correspond to approximately one year of service in the oral environment, so the 500 cycles proposed by the ISO standard are very minimal for mimicking long-term use. [15] Other studies, however, have used different numbers of thermal cycles to mimic the aging of dental materials. These studies placed teeth or restorations at temperatures comparable to the oral cavity and applied stress to the bonded area, [16] a process that helps in understanding the stress generated in a restoration by aging and thermal changes. [17] Thermal cycling accelerates the hydrolysis of unprotected collagen fibers by high-temperature water and removes resin oligomers that are not properly polymerized. [18][19][20] Also, because restorative materials have higher rates of thermal expansion than dental structures, repeated contraction and expansion leads to the formation of a gap between tooth and restoration. Changes in the size of this gap can cause pathological fluid movement and microleakage, which is most severe at the bonded area; [21,22] from a clinical point of view, these are the most vulnerable margins. [15] The repetitive contraction/expansion of thermocycling generates stresses similar to clinical conditions, and more stress is created when the C-factor is high. [23] It remains unclear whether thermocycling affects bond strength; one meta-analysis found that it has no significant effect. [24] Intraoral restorations are continuously exposed to stresses from the opposing teeth, amounting to about one million mechanical strokes per year. [14] Over a long period, these strokes act on the bonds at the tooth-restoration interface and can weaken the restoration to the point of failure. [4] In the present study, the mechanical loads applied to the teeth were 50K, 100K, and 500K cycles. Different studies have used different numbers of mechanical cycles: for example, Nikaido et al. [4] used 10K, 50K, and 100K; Bedran de Castro et al. [10,11] and Kasraei and Khamverdi [8] used 100K; Toledano et al. [12] and Mazzitelli et al. [25] used 5K; and Abdalla et al. [3] applied 4K cycles. The present study differs from them in the range of mechanical cycle counts used. Over the last two decades, one of the most important topics in restorative dentistry has been the determination of appropriate methods to bond resin composite to dental hard tissues. Good bonding depends on the chemical, physical, and mechanical properties of both the adhesive resin and the dental substrate. Once an attachment between tooth and resin composite has formed, resistance to fracture depends on the extent of defects at the interface between bonding agent and tooth surface, where cracks can form, widen, and ultimately break the attachment; this is also related to the overall characteristics of the substrate, the adhesive resin, and the age of the bond. [1] This adhesion is based on micromechanical retention, which is well established for enamel but still in question for dentin. In the present study, sound dentin was used as the substrate. Previously, shear bond strength was the most common method for examining bond strength; in that method, a cutting force is applied by the tip of a blade to the interface between resin and dentin.
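Taking the conversion factors quoted above at face value (about 1K thermal cycles ≈ 1 year and about one million mechanical strokes ≈ 1 year), a small helper can translate the cycle counts used here into rough equivalents of clinical service time. This is only back-of-the-envelope arithmetic, not a validated aging model.

# Rough clinical-time equivalents from the conversion factors cited in the text.
THERMAL_CYCLES_PER_YEAR = 1_000          # ~1K thermal cycles ~ 1 year in service
MECHANICAL_STROKES_PER_YEAR = 1_000_000  # ~1M chewing strokes ~ 1 year

def years_equivalent(mech_cycles: int, thermal_cycles: int) -> dict:
    return {
        "mechanical_years": mech_cycles / MECHANICAL_STROKES_PER_YEAR,
        "thermal_years": thermal_cycles / THERMAL_CYCLES_PER_YEAR,
    }

# The heaviest regimen in this study: 500K mechanical + 1K thermal cycles
print(years_equivalent(500_000, 1_000))
# -> {'mechanical_years': 0.5, 'thermal_years': 1.0}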
The main problem with that method was the little attention paid to cavity geometry and polymerization shrinkage. [27] MicroTBS is a relatively new method for evaluating bond strength, introduced by Sano in 1994. [28] In this test, the bonded surface is reduced to approximately 1 mm2. By reducing the bonding area, defects are reduced to a minimum and the measured bond strength approaches the actual value, which is higher than that obtained in conventional tensile bond strength tests. A surface area of about 1 mm2 is considered critical in these tests: with larger bonded areas, values lower than the actual bond strength are recorded, while smaller areas yield values higher than the actual strength. [29] A bonded area of about 1 mm2, as observed in this study, is therefore one of the standard criteria for these tests, and the test is said to give a more realistic value of bond strength. Although the method involves several difficulties, from gathering the specimens through performing the test, it requires fewer samples than other tests and is one of the best methods for comparing different types of bonding. It also makes it possible to recruit specimens from different families of teeth, and the test can be performed on dental surfaces with different clinical traits, such as dentinal caries, sclerotic dentin, and the cervical region of root or enamel. [30] Samples with small defects may be excluded in order to obtain more realistic bond-strength values, [29] and the bond strength can be studied in different areas of a single tooth. [31] Specimens are also better suited to the microTBS test than to the tensile bond strength test, which needs more samples, because the resin composite and surrounding dental tissue can protect the interface between tooth surface and composite from thermal changes. [15] For the microTBS test, samples are usually prepared in beam or hour-glass shapes; in this study, hour-glass-shaped samples were used. The distance between the pulp and the dentin-composite interface was 3 mm or less. In addition, the upper and lower parts of the hour-glass-shaped samples were attached to the arms of the testing machine over a relatively large surface area, which reduced the risk of early separation of the sample from the arm. Nikaido et al., [4] Bedran de Castro et al., [10,11] Mitsui et al., [5] Abdalla et al., [3] and Kasraei and Khamverdi [8] used beam-shaped samples in their studies, whereas Osorio et al. [32] and Toledano et al. [12] used hour-glass-shaped samples. The microTBS technique has its advantages and gives higher measurement accuracy than other methods, but it also has problems: preparing 1-mm-thick slices and, ultimately, hour-glass- or beam-shaped samples is highly technique-sensitive. Placing the samples in the testing machine and fixing them is another sensitive step; if stability is inadequate, the microtensile force is altered and the results lose the required accuracy. There is therefore a risk of losing samples at every stage of the study, and a number of extra samples have to be prepared as backups at the outset. In the present study, diamond fissure burs were used to remove enamel and expose the dentinal surface.
In the study by Ogata et al., [33] the effect of bur type on microTBS was evaluated: samples were prepared with various burs, and the results showed that the method of obtaining a smooth dentin surface had no significant effect on microTBS, whereas the type of adhesive resin determined the bond-strength value. In the present study, as in the study by Kasraei et al., [8] the teeth were kept in normal saline, and 0.5% chloramine-T solution was used to prevent cross-contamination. Zheng et al. [34] studied the effect of storage media on adhesive bond strength, comparing distilled water at 4 °C, 0.02% thymol, 10% formalin, 1% chloramine, and freezing of the teeth at −20 °C. They measured the bond strength of a single adhesive resin to stored teeth, compared the results with recently extracted teeth, and concluded that the storage medium has a significant effect on bond strength; if a recently extracted tooth is not available, the best methods for storing teeth are 1% chloramine solution or freezing at −20 °C. [34] In the present study, the maximum storage period before the study was one month. Miranda et al. [35] studied the effect of storage duration on bond strength and concluded that the time spent in the storage medium before bonding has no significant effect, so teeth can be kept for long periods in appropriate preservative media. In the present study, the microTBS value was measured at a crosshead speed of 0.5 mm/min. Reis et al. [36] evaluated various crosshead speeds and showed that the differences among cutting speeds of 0.5, 1, 2, and 4 mm/min were not significant for microTBS; nevertheless, speeds of 0.5 and 1 mm/min have been used in most studies. In the present study, the mode of fracture was evaluated with a stereomicroscope at ×20 magnification. Adhesive fracture was the most prevalent failure mode in both the control group and the test groups after mechanical and thermal load cycling, consistent with Bedran de Castro's studies. [10,11] In the study by Mitsui et al., [5] however, the most common failure mode was mixed, and the rate of this mode increased with increasing thermal and mechanical load cycling. The significant differences in reported fracture locations or types across studies stem from variation in fracture classification. Some studies have reported that large numbers of cohesive fractures in dentin or adhesive resin are detectable at low stereomicroscope magnification, whereas adhesive and mixed fractures are detectable only at high magnification. The high rates of cohesive fracture reported in certain studies may be due to errors in aligning the samples in the testing machine, or to small cracks formed during cutting that are mistakenly classified as cohesive fractures. [37] In the present study, the mean microTBS decreased as mechanical load cycling increased; mechanical loading beyond 50K cycles reduced the microTBS value, and simultaneous mechanical and thermal load cycling reduced it further.
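To make the fracture-mode bookkeeping concrete, the short sketch below tallies stereomicroscope classifications into percentages of the kind reported in Table 4. The mode labels and counts are illustrative placeholders chosen only to sum to a plausible specimen total, not the study's actual records.

# Minimal sketch: summarize fracture-mode classifications as percentages.
from collections import Counter

# Hypothetical per-specimen classifications from stereomicroscopy
modes = ["adhesive"] * 27 + ["cohesive_dentin"] * 6 + \
        ["cohesive_resin"] * 5 + ["mixed"] * 4

counts = Counter(modes)
total = sum(counts.values())
for mode, n in counts.most_common():
    print(f"{mode}: {n} ({100 * n / total:.2f}%)")
# adhesive: 27 (64.29%), ..., mixed: 4 (9.52%)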
The variation in results across studies shows that several factors limit the generalization of experimental outcomes to clinical settings. These factors include the type of teeth, the storage medium, the infection-control method, the substrate and surrounding moisture content, the presence or absence of thermal and/or mechanical load cycling, the depth and location of the substrate selected for the test, the mechanical properties of the restorative material, the type of test (shear, micro-shear, microtensile, or tensile), the speed and magnitude of crosshead loading, and the design and dimensions of the final sample. [29]

CONCLUSION

Within the limitations of the present experimental study, it can be concluded that increasing mechanical load cycling decreases the microTBS value, with 100K cycles being the minimum mechanical load needed to produce significant changes. Simultaneous application of thermal and mechanical load cycling also decreases the microTBS value, and most of the fractures are of the adhesive type.
Rad5 HIRAN domain: Structural insights into its interaction with ssDNA through molecular modeling approaches

Abstract

The Rad5 protein is an SWI/SNF-family ubiquitin ligase that contains an N-terminal HIRAN domain and a RING C3HC4 motif. The HIRAN domain is critical for recognition of the stalled replication fork during the replication process and acts as a sensor to initiate the damaged-DNA checkpoint. It is a conserved domain, widely distributed in eukaryotic organisms and present in several DNA-binding proteins from all kingdoms. Here we show that distant species differ in key residues that affect affinity for ssDNA. Based on these findings, we hypothesized that different HIRAN domains might affect fork reversal and translesion synthesis through different metabolic processes. To address this question, we predicted the tertiary structure of both the yeast and human HIRAN domains using molecular modeling. Structural dynamics experiments showed that the yeast HIRAN domain exhibited greater structural denaturation than its human homolog, although both domains became stable in the presence of ssDNA. Analysis of atomic contacts revealed that a greater number of the interactions between the ssDNA nucleotides and the Rad5 domain are electrostatic. Taken together, these results provide new insights into the molecular mechanism of the Rad5 HIRAN domain and may guide further work on ancient eukaryotic HIRAN sequences and their DNA affinity.

Introduction

Rad5 is a member of the switching defective/sucrose non-fermentable (SWI/SNF) family, ATP-dependent chromatin remodelers with DNA translocase and DNA-binding functions built around a large subunit complex called SWI2/SNF2 and carrying a RING C3HC4 motif characteristic of ubiquitin ligases (Blastyák et al., 2007, 2010; Hayashi et al., 2018; Xu et al., 2016). During the replication process, Rad5 forms complexes with other proteins that recognize the stalled replication fork, caused by DNA errors, and starts one of two signaling pathways: translesion synthesis (TLS) or fork reversal (FR) (Blastyák et al., 2007; Elserafy et al., 2018; Hargreaves & Crabtree, 2011; Shin et al., 2018). The presence of the HIP116 Rad5p N-terminal (HIRAN) domain in the Rad5 structure is crucial to the initial formation of the four-way junction in the FR DNA repair pathway (Achar et al., 2015; Chavez et al., 2018; Kile et al., 2015; Korzhnev et al., 2016). The HIRAN domain is commonly found in the N-terminal regions of proteins that contain the motifs of the SWI2/SNF2 ATPase subunit (Iyer et al., 2006). The NMR structure of the HIRAN domain from the HLTF protein shows six beta-sheets (β1-β6) and two alpha-helices (α1, α2) organized as a diagonally twisted barrel, with the two alpha-helices in an oligonucleotide/oligosaccharide-binding (OB-like) arrangement (Korzhnev et al., 2016) (Figure 1A). Residues Y72 and Y93 mediate crucial interactions within the DNA interaction pocket through π-π stacking with two nucleotides (Kile et al., 2015; Korzhnev et al., 2016). Residues N91, R71, H110, and K113 stabilize the ssDNA through electrostatic interactions, and residue F142 interacts with a third nucleotide, also through π-π stacking, outside the pocket (Figure 1B). The HLTF HIRAN domain interacts with ssDNA independently of nucleotide composition (Hishiki et al., 2020).
The HIRAN domain is described in several DNA-binding proteins from all kingdoms, being a conserved domain well distributed in eukaryotic organisms (Iyer et al., 2006). Elucidating the molecular interactions of this ancient tertiary structure is an initial step toward understanding the evolution of the complex protein functions involved in both the FR and TLS DNA repair pathways. In this work, we built a multiple sequence alignment (MSA) of 48 sequences presenting a better profile of the evolutionarily conserved residues of the HIRAN domain. Because of the lack of experimental structures in databases, we built a putative tertiary structure of the Rad5 HIRAN domain by comparative modeling and used it to gain insights into the molecular mechanism of its ssDNA interaction. Studies of structural variation using molecular dynamics (MD) and molecular docking allowed us to infer the structural behavior of the yeast and human HIRAN domains and to understand their affinity for ssDNA. Here we report the structural dynamics of the yeast HIRAN domain, which is related to the HLTF HIRAN domain, and the role of key residues in ligand stabilization.

Primary structure analysis

The Rad5 HIRAN domain sequence (GenBank ID: CAA97556.1) was obtained by extending the N- and C-termini of the primary structure deposited in the Pfam database (Finn et al., 2014; accession PF08797) and performing a MAFFT (Katoh, 2002) global alignment for maximum score; this sequence was also predicted by Korzhnev and colleagues (Korzhnev et al., 2016). The HLTF HIRAN sequence (GenBank ID: XP_011511393.1) was taken from the tertiary structure determined by nuclear magnetic resonance (NMR) (PDB ID 5K5F) (Korzhnev et al., 2016). BLASTp (Altschul et al., 1990) was used to acquire all 48 HIRAN sequences from other species with an e-value cutoff of 10^-6, using the Rad5 and HLTF HIRAN domains as references. The resulting MSA was visually inspected using the MEGAx v10.1.8 software (Kumar et al., 2018). The logo representation and calculation were performed with the ggseqlogo R package (Wagih, 2017) using the FASTA alignment as input. All domain family regions of Rad5 and HLTF were extracted from the Pfam database.

Comparative modeling

The MODELLER v9.22 program (Sali & Blundell, 1993) was used to build the tertiary structure of the Saccharomyces cerevisiae Rad5 HIRAN domain (scRad5) by comparative modeling. BLASTp (Altschul et al., 1990) was used to acquire homologous sequences for global alignment against the PDB database. The alignment of the scRad5 sequence with the HLTF HIRAN domain model (PDB ID: 5K5F) (Korzhnev et al., 2016) was performed with the salign function, and the calculations with the automodel function, as described in the software's user manual. A total of 25,000 structures were calculated using the 5K5F structure as a template, and the structure with the lowest DOPE score (Shen & Sali, 2006) was selected. The global energy of this structure was minimized in CHIMERA v1.13.1 (Pettersen et al., 2004) with the AMBER ff99SB force field, using 100 steepest-descent steps and ten conjugate-gradient steps, each with a 0.02 Å step size. Subsequently, the structure was submitted to the SAVES v5.0 platform for structural validation with PROCHECK (Laskowski et al., 1993) and ERRAT (Colovos & Yeates, 1993), and selected for use in the molecular docking and dynamics steps.
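A minimal MODELLER script along the lines described above might look like the sketch below; the alignment file name is a placeholder, the model count is reduced from the paper's 25,000 for illustration, and the lowest-DOPE selection mirrors the criterion the authors state.

# Minimal sketch of the comparative-modeling step with MODELLER's automodel,
# assuming an alignment file 'scRad5-5k5f.ali' pairing the target with 5K5F.
from modeller import environ
from modeller.automodel import automodel, assess

env = environ()
a = automodel(env,
              alnfile='scRad5-5k5f.ali',   # placeholder alignment file
              knowns='5k5f',               # template code in the alignment
              sequence='scRad5',           # target code in the alignment
              assess_methods=(assess.DOPE,))
a.starting_model, a.ending_model = 1, 50   # the paper built 25,000 models
a.make()

# Pick the model with the lowest DOPE score, as in the paper
ok = [m for m in a.outputs if m['failure'] is None]
best = min(ok, key=lambda m: m['DOPE score'])
print(best['name'], best['DOPE score'])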
Molecular docking

DOCK v6.9 (Allen et al., 2015) was chosen to perform the molecular docking, under a non-commercial academic license. For the grid calculations, a cubic box with 4 nm edges was created around the DNA interaction site. The molecular surface of the receptor was generated from the structure without hydrogen atoms using the CHIMERA Write DMS tool (Sanner et al., 1996). Spheres were then generated with the sphgen tool of the DOCK6 package from the DMS file, with a maximum of 4 Å and a minimum of 1.4 Å per sphere established as the contact cutoff. The center of the box has the same position vector as the center of the selected sphere group. The grid calculation was then performed with the grid tool of the DOCK6 package (Allen et al., 2015) with default parameters. Structural poses were calculated with the DOCK6 tool, treating the ligand as flexible and the receptor as rigid, resulting in 100 conformations of the ssDNA; only the top 10 poses with the lowest overall energy were evaluated. The ssDNA strand used in the docking assays with the HIRAN domain was taken from chain G of PDB ID 4S0N; this strand consists of four thymine nucleotides (poly-T) and represents a stable conformation of the ssDNA within the HLTF interaction pocket.

Molecular dynamics simulations

The GROMACS v2018.3 software (Abraham et al., 2015) was used to perform the molecular dynamics simulations. The HIRAN domain structures with a poly-T ssDNA were obtained from the molecular docking runs. All systems were created and run under the same conditions so that their characteristics could be compared; Table 1 summarizes and describes all systems used in this work. Replicates were performed with random initial velocities to sample a large number of structures. The AMBER99SB-ILDN force field (Hornak et al., 2006) was used for all systems without ssDNA and the AMBER14SB force field (Maier et al., 2015) for all systems with ssDNA. AMBER force fields are commonly used in protein and DNA systems because of their accuracy in predicting the structural behavior of these molecules; AMBER14SB differs from AMBER99SB-ILDN only in including the ff99bsc0 parameters for nucleotide simulation (Galindo-Murillo et al., 2016; Petrović et al., 2018; Weber & Uversky, 2017). The simulation box was solvated with the TIP3P water model (transferable intermolecular potential, three-point; Jorgensen et al., 1983), with sodium and chloride ions added to neutralize the electrostatic charge of the system. Periodic boundary conditions (PBC) were applied in three dimensions, and coulombic interactions were treated with the PME algorithm (Darden et al., 1993) with a 1 nm cutoff for short-range (electrostatic and van der Waals) interactions. Minimization was performed with the steepest-descent algorithm and stopped when the maximum force in the system fell below 1000.0 kJ/mol/nm. The system was equilibrated with 300 ps each of NVT and NPT ensembles. Temperature was controlled with the V-rescale thermostat (Bussi et al., 2007) at a reference temperature of 300 K, and pressure with the Parrinello-Rahman barostat (Parrinello & Rahman, 1981) with isotropic coupling, a reference pressure of 1 bar, and the isothermal compressibility of water. The production stage used the leap-frog integrator with a 2 fs time step, recording atomic trajectories, velocities, and energies every 50 ps.
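The equilibration-then-production pipeline described above is typically driven by a sequence of gmx calls; the sketch below chains them from Python purely for illustration. The file names (.mdp, .gro, .top) are placeholders, and the parameter choices described in the text live in the .mdp files rather than in this script.

# Minimal sketch of the GROMACS run sequence (minimization -> NVT -> NPT -> MD),
# assuming gmx is on PATH and the listed input files exist.
import subprocess

def gmx(*args):
    subprocess.run(["gmx", *args], check=True)

steps = [
    ("minim.mdp", "em"),    # steepest descent, stop at Fmax < 1000 kJ/mol/nm
    ("nvt.mdp",   "nvt"),   # 300 ps, V-rescale at 300 K
    ("npt.mdp",   "npt"),   # 300 ps, Parrinello-Rahman at 1 bar
    ("md.mdp",    "md"),    # production, 2 fs step, output every 50 ps
]

prev = "solvated.gro"  # placeholder solvated and neutralized system
for mdp, name in steps:
    gmx("grompp", "-f", mdp, "-c", prev, "-p", "topol.top", "-o", f"{name}.tpr")
    gmx("mdrun", "-deffnm", name)
    prev = f"{name}.gro"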
The trajectory analyses were done with GROMACS package commands (rms, trjconv, gyrate, rmsf, and make_ndx), and the graphical representations were plotted with the ggplot2 R package (Wickham, 2016). The Y-axis values of the RMSF plots were converted into B-factors by GROMACS using the -oq flag of the rmsf command. These values were color-rendered onto the initial structure of each model with the Render by Attribute tool of CHIMERA (Pettersen et al., 2004); terminal residues with high RMSF values were excluded from this rendering. The GROMOS algorithm (Daura et al., 1999) was used to cluster representative structures with the cluster tool. All cutoff values and the corresponding numbers of selected structures can be found in Supplementary Table S1; a specific cutoff value was used for each system to find a minimal set of structures representing the conformations adopted throughout the production stage. Contact analyses were performed with the CPPTRAJ tool (Roe & Cheatham, 2013) of the AMBER package. The nativecontacts command was used to extract native and non-native contacts, with maxdist setting a maximum distance cutoff of 4 Å. Native contacts are those initially present in the system, while non-native contacts are acquired during the simulation (Case et al., 2018). This analysis counts the frames in which the interatomic distance was below the cutoff relative to the total number of frames, so that each interaction can be expressed as a percentage of the total simulation time. All calculations were performed using NMRBox (Maciejewski et al., 2017).

Results and discussion

Yeast and human HIRAN domains share key residues in the ssDNA interaction

HIRAN is identified as a beta-sheet-rich domain fused to other catalytic domains in the N-terminal regions of SWI2/SNF2 proteins (Iyer et al., 2006); in prokaryotes it is found as a standalone protein. In previous work, alignment of the HIRAN domain with sequences from a BLASTp search using the HLTF primary structure as a reference revealed a conservation pattern and the key residues for the ssDNA interaction (Kile et al., 2015). Here, we used both the HLTF and Rad5 sequences as references for BLASTp searches and multiple sequence alignments, to cover species across a broader range of evolutionary distance. We analyzed the primary structures of the Rad5 and HLTF proteins, and of their respective HIRAN domains, using the Pfam database. These analyses revealed a similar domain architecture, with SNF2 and RING domains in similar regions of the sequences (Figure 2A). The Rad5 and HLTF proteins share a global identity of 30.20% according to a pairwise alignment; HLTF can therefore be considered the closest human homolog of Rad5. The Pfam seed sequences used to create the HIRAN HMM profile contain a 1-phosphatidylinositol 4-kinase (UniProt entry: Q5AQZ8), a protein not associated with DNA interaction or recognition. In addition, the region of the HLTF HIRAN domain used does not include its alpha-helix region (α2), which may explain the identification of primary structures from bacteria (UniProt entries: Q5LCS0 and Q47IS2) that have few conserved residues compared with the eukaryotic sequences. This also raises another question about the standalone HIRAN domain in prokaryotes, as this alpha-helix region is important for protein stability, as shown later in this work.
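As an illustration of the contact bookkeeping described above, the sketch below computes, for each frame, the fraction of reference protein-DNA contacts retained within a hard 4 Å cutoff, using MDAnalysis rather than CPPTRAJ; the topology/trajectory file names are placeholders and the selection strings are simplifications.

# Minimal sketch: fraction of native protein-ssDNA contacts (4 A hard cutoff)
# per frame, analogous to cpptraj's nativecontacts with maxdist 4.0.
import MDAnalysis as mda
from MDAnalysis.analysis import contacts

u = mda.Universe("hiran_ssdna.pdb", "production.xtc")  # placeholder files

sel_prot = "protein and not name H*"
sel_dna = "nucleic and not name H*"
ref_prot = u.select_atoms(sel_prot)
ref_dna = u.select_atoms(sel_dna)

nc = contacts.Contacts(u,
                       select=(sel_prot, sel_dna),
                       refgroup=(ref_prot, ref_dna),  # frame-0 pairs = "native"
                       radius=4.0,
                       method="hard_cut")
nc.run()

# Column 0: frame time; column 1: fraction of native contacts retained
print(nc.results.timeseries[:5])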
This domain database entry, deposited by Iyer and Aravind in 2006, predicted the HIRAN domain to be an eight-beta-sheet domain. The prediction was made years before the first HIRAN tertiary structure was deposited in the PDB in 2015 (PDB ID: 4XZF) (Hishiki et al., 2015), underscoring the relevance of new studies of this domain. To better understand the HMM profile and the amino acid conservation of HIRAN sequences among distant species, we performed and curated a multiple sequence alignment of 48 sequences obtained by BLASTp search. Residue conservation relevant to the ssDNA and dsDNA interactions is evident in the LOGO plot (Supplementary Figure S1). Lysine/arginine and tyrosine residues are conserved at positions 30 and 31, respectively (Figure 2B); these correspond to R71 and Y72 of the HLTF HIRAN domain. The asparagine and lysine residues at position 53 are located near a tyrosine residue at position 55 (N91 and Y93, respectively, in the HLTF HIRAN domain). These two regions provide the side chains responsible for forming the π-π stacking interactions with the nitrogenous bases of the ssDNA nucleotides, which is important for stabilizing the strand within the pocket. In fact, most HIRAN domains probably pair with ssDNA via these residues in the loop regions (Hishiki et al., 2020; Kile et al., 2015; Korzhnev et al., 2016). The most conserved region of the HIRAN primary structure is the putative β6 region (Supplementary Figure S2). HIRAN sequences obtained from BLASTp searches using HLTF as the query show a QVGHL pattern (Supplementary Figure S3), whereas sequences obtained with Rad5 as the query show an EIGRI pattern (Supplementary Figure S4). Despite only the glycine being strictly conserved, this region shares common physicochemical properties of the side-chain amino acids, indicating a key region in the primary structures of HIRAN sequences. F142 of the HLTF HIRAN domain is a key residue at the binding site only when duplex DNA sits in the minor groove of the genetic material (Hishiki et al., 2020), and the sequences acquired with HLTF as reference have a phenylalanine/tyrosine residue at position 138 (equivalent to F142 of the HLTF HIRAN domain). These results suggest that primary structures conserved relative to the Rad5 HIRAN domain may lack this residue in this specific region. [Table 1 caption: the "Model" column gives the name of the system; "Description" the elements in each system; "Components" the main components, followed by the number of replicates, the simulation time, and the force field used.] The HIRAN domain of the Arabidopsis thaliana Rad5A protein (atRad5A) interacts with branched DNA with higher affinity than with ssDNA or dsDNA, even though the conserved ssDNA-interaction residues are absent (Kobbe et al., 2016). According to our multiple sequence alignment, compared with the HLTF HIRAN domain, Rad5A has a serine at the L1 tyrosine position 31 and glycine-phenylalanine at the L2 tyrosine-aspartate positions 55 and 56, respectively. This difference in affinity for single-stranded and double-stranded DNA may be reflected in the FR or TLS signaling pathway, as the HIRAN domain is critical for stabilizing the protein complex at the stalled fork (Chavez et al., 2018).
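A per-column conservation scan of the curated alignment, of the kind that underlies the logo in Supplementary Figure S1, can be sketched as below; the alignment file name is a placeholder, and the simple majority-fraction score stands in for the information-content measure a logo actually plots.

# Minimal sketch: per-column conservation of an MSA, to spot columns such as
# the conserved K/R and Y at alignment positions 30-31 discussed in the text.
from collections import Counter
from Bio import AlignIO  # Biopython

aln = AlignIO.read("hiran_48seqs.fasta", "fasta")  # placeholder MSA file

for col in range(aln.get_alignment_length()):
    residues = [rec.seq[col] for rec in aln if rec.seq[col] != "-"]
    if not residues:
        continue
    aa, n = Counter(residues).most_common(1)[0]
    frac = n / len(aln)
    if frac >= 0.9:  # arbitrary threshold for "highly conserved"
        print(f"column {col + 1}: {aa} in {frac:.0%} of sequences")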
Arabidopsis thaliana also has an HLTF-like protein with a HIRAN domain closely related to the human HLTF domain, with conserved DNA-interaction residues, and further studies are needed to understand the entire DNA repair pathway in plants. An evolutionary understanding of HIRAN domain functions, together with species distant from humans, is required to infer the impacts on the FR and TLS DNA repair signaling pathways. Based on these results, we propose that the human and yeast HIRAN domains have different affinities for DNA. To evaluate this, we focused on predicting the tertiary structure of the Rad5 HIRAN domain and its molecular interaction with ssDNA, using HLTF as the reference protein (Hishiki et al., 2020; Kile et al., 2015).

Modeling and assessment of the Rad5 HIRAN domain tertiary structure

The tertiary structure of S. cerevisiae Rad5 and its domains cannot be found in structural databases, which may reflect the difficulty of obtaining the recombinant protein for experimental studies; an alternative is to obtain the structure through computational methods. Other studies in the literature have used computational approaches to predict the structures of DNA-interacting proteins, such as the UvrB protein involved in nucleotide excision repair (Bavi et al., 2016), the UvrC protein (Parulekar et al., 2013), and NAD+-dependent DNA ligases (Shrivastava et al., 2015). First, we constructed the Rad5 HIRAN domain using two different approaches, threading and comparative modeling, with three programs: ROBETTA (Song et al., 2013), I-TASSER (Yang et al., 2015), and MODELLER. The ROBETTA and I-TASSER runs were performed with default parameters, with and without a template, whereas a single template-based run was performed with MODELLER. The quality of all models was checked by applying statistical approaches, such as the Ramachandran plot and ERRAT scores, before and after energy minimization using CHIMERA and the 3DRefine server (Bhattacharya et al., 2016) (Supplementary Figures S5 to S17). The MODELLER method yielded the best models, with all residues in allowed regions of the Ramachandran plot and ERRAT scores below the error lines (Supplementary Figure S5). Without a template, ROBETTA built models with poor secondary-structure assignments (Supplementary Figure S15); when the HLTF HIRAN domain was used as a template, ROBETTA generated structures similar to those from MODELLER, although one residue fell in a disallowed region (Supplementary Figure S12). The I-TASSER method predicted models with more residues in disallowed regions, even after the minimization steps. [Figure 2. A) Domain architecture and motifs of the SNF2/SWI2-family proteins involved in FR and TLS DNA repair signaling; Rad5 carries the HIRAN domain, domains of the SNF2 family, and a RING finger domain, a structure similar to HLTF, its human homolog. B) The regions L1, L2, β4, and F142, responsible for the ssDNA and dsDNA interaction, extracted from the logo representation; residues are colored by physicochemical group to aid visual comparison of side chains with similar properties.]
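Backbone dihedral checks like the Ramachandran analysis used here can be reproduced programmatically; the sketch below extracts phi/psi angles from a model PDB with Biopython, leaving the favored/allowed-region classification to dedicated tools. The file name is a placeholder.

# Minimal sketch: extract phi/psi backbone dihedrals from a model,
# the raw input of a Ramachandran-style assessment.
import math
from Bio.PDB import PDBParser, PPBuilder

structure = PDBParser(QUIET=True).get_structure("scRad5", "scRad5_model.pdb")

for pp in PPBuilder().build_peptides(structure):
    for residue, (phi, psi) in zip(pp, pp.get_phi_psi_list()):
        if phi is None or psi is None:  # chain termini lack one angle
            continue
        print(residue.get_resname(), residue.id[1],
              round(math.degrees(phi), 1), round(math.degrees(psi), 1))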
Therefore, the I-TASSER models were not considered for further calculations (Supplementary Figures S6 to S11). The ROBETTA and MODELLER models were then submitted to 100 ns MD simulations after CHIMERA energy minimization, to assess the stability of their tertiary structures and to ensure selection of the best-predicted model of the Rad5 HIRAN domain. The DSSP data (Kabsch & Sander, 1983) showed unfolding of the α2-helix in both models; however, the MODELLER model had more structured residues and, together with the statistical analyses using ERRAT and PROCHECK, was considered the best structure to represent this domain (Supplementary Figure S18). [Figure 4. RMSF data converted into B-factors for residue-movement analysis, normalized after removal of regions with high RMSF values. A-C show the hsHLTFapo model and D-F the scRad5 model, replicas 1-3 respectively; white indicates the lowest B-factor and red the highest, and the correlation between B-factor and residue flexibility is positive and proportional.] The HIRAN domain of HLTF (PDB ID: 5K5F) (Korzhnev et al., 2016) (Figure 3A), referred to here as hsHLTFapo, was selected as the template for the Rad5 HIRAN domain, named scRad5. This model is a structure in saline solution and represents the domain in its apo conformation, without ligand, ensuring that we could comparatively analyze the domain structures without the influence of any other molecule. The pairwise alignment (Figure 3B) shows 12.50% identity and 51.47% similarity between the HIRAN sequences of the HLTF and Rad5 proteins. Despite the low identity, the degree of similarity can support construction of a three-dimensional model, since inter-residue interactions between side chains control protein folding and stabilization (Newberry & Raines, 2019). Proteins acquire genetic variation through evolution by natural selection, but those that share a similar microenvironment for molecular action tend to have similar structures and functions because of environmental pressures and restrictions (Worth et al., 2009). The tertiary structure calculated for the scRad5 model (Figure 3C) shows six beta-sheets and two alpha-helices, a high number of residues with positive electrostatic potential at the putative ssDNA pocket (Supplementary Figure S19), similar to the HLTF HIRAN tertiary structure, and two small alpha-helices in the N-terminal and L1 regions. The structure also shows regions with more extended loops, suggesting that these residues may have high mobility, because their side chains have no spatial restrictions from neighboring residues and their molecular surfaces are exposed to solvent. Dihedral-angle analysis (Supplementary Figure S5) showed 99 residues (79.8%) in the most favored regions, 23 residues (18.5%) in additional allowed regions, two residues (1.6%) in generously allowed regions, and no residues in unfavorable regions. The two residues found in generously allowed regions lie in loops of the tertiary structure; residues in these regions have more degrees of freedom because they have fewer atomic interactions with neighboring residues and their side chains are externalized into the solvent.
The ERRAT scores (Colovos & Yeates, 1993; Supplementary Figure S5) show that only residues in loop positions have high values exceeding the warning region. These results, based on statistical and comparative calculations against protein structure databases, support a structure that can represent the conformation of the Rad5 HIRAN domain. Because we found differences between the tertiary structures of the scRad5 and hsHLTFapo models, MD simulations were performed to evaluate them, since even small structural variations can mean different structural dynamics and different residues in ssDNA interactions.

The HLTF HIRAN domain has less secondary-structure unfolding

Molecular dynamics simulations allow in-silico observation of biomolecules at the atomic level, improving the understanding of biological processes such as time-dependent structural and conformational changes and protein-ligand interaction (Ganesan et al., 2017). MD simulation is a reliable method for understanding molecular mechanisms involving amino acids and nucleotides, and it has been applied to ssDNA (Oprzeska-Zingrebe & Smiatek, 2018) and dsDNA systems (Hamed, 2018; Kamaraj & Bogaerts, 2015; Mary et al., 2017; Pradhan et al., 2018), to elucidate the role of hydrophobic interactions in protein inhibition (Verma et al., 2016), and to a protein-DNA complex system (Pitta & Krishnan, 2018). Converting the RMSF data to B-factors lets us trace a positive fluctuation correlation: the higher a residue's RMSF value, the greater its B-factor (Figure 4). All loops showed greater spatial motion than the secondary structures in both the scRad5 and hsHLTFapo models. [Figure 5. Importance of residues R71/Y72 and K194/Y195 for formation of the ssDNA interaction pocket. In A, B, and C, the amino acids responsible for stabilizing the ssDNA are shown in red in the hsHLTFapo, hsHLTFholo, and scRad5 models, respectively, with their molecular surfaces in gray.] In hsHLTFapo, the residues with the largest fluctuations were Y72 and Y93, which are responsible for the ssDNA interaction (Kile et al., 2015). In scRad5, the α2 alpha-helix region has higher spatial mobility than the loop regions because its secondary structure unfolds. The scRad5 residues with high fluctuation were K197, Y198, G199, and Y217, all located in loop regions. These structural behaviors, in both cases involving loop regions, may be essential for ssDNA recognition and stabilization. The RMSD and RMSF data (Supplementary Figure S20) show that the N- and C-terminal regions exhibit high mobility. Because neighboring domains may help shape the HIRAN domain termini, this high mobility seen in our MD simulations may not occur in the cellular environment. The folding/unfolding of secondary structures and their quantification along the MD trajectory are shown in the DSSP plot (Supplementary Figure S21). The hsHLTFapo DSSP data show that the secondary structures remained stable, with a more homogeneous number of residues within them (Supplementary Figure S22), and that the N-terminal alpha-helix and the helix located at L1 are transient, confirming statements from other studies (Kile et al., 2015; Korzhnev et al., 2016).
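For reference, the RMSF-to-B-factor conversion applied by the GROMACS rmsf -oq option is the standard isotropic relation between the crystallographic temperature factor and the mean-square atomic displacement; writing the RMSF as the root-mean-square fluctuation u, it reads, in LaTeX:

B = \frac{8\pi^{2}}{3}\,\langle u^{2} \rangle

so that, for example, an RMSF of 1 Å corresponds to B ≈ 26.3 Å².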
The scRad5 model showed unfolding of several secondary structures in all replicate simulations, with fewer residues within them (Supplementary Figure S22), mostly at the α2 alpha-helix and the β1 and β2 beta-sheets, in addition to an alpha-helix structure at L1. The number of residues within secondary structures can be correlated with the stability of the model, as heterogeneous variation in this number suggests that the structure undergoes sharper conformational changes that alter its patterns of hydrogen bonding and inter-residue interaction. These analyses suggest that hsHLTFapo has greater structural stability than scRad5, and this instability may explain the lack of experimental structural work on the S. cerevisiae Rad5 HIRAN domain. The data also suggest that the stability of the Rad5 HIRAN domain may depend on interactions with neighboring domains. A previous molecular modeling study of the complete Rad5 protein, supported by small-angle X-ray scattering (SAXS), showed no such denaturation of the secondary structure (Gildenberg & Washington, 2019). Gildenberg and Washington (2019) also performed a coarse-grained MD simulation over a five ms period and reported that the Rad5 HIRAN domain makes molecular interactions with the helicase domains and is energetically more favorable near them.

The ssDNA interaction pocket of the HIRAN domain has two conformational states

Structural clustering algorithms were used to extract conformational samples from the large number of structures generated by the MD simulations and to help understand the dynamics of the tertiary structures over simulation time (Coskuner & Uversky, 2017). The GROMOS clustering algorithm (Daura et al., 1999) was used to obtain representative structures from the hsHLTFapo and scRad5 trajectories (Supplementary Figure S23). [Figure 6. Residues of the hsHLTFholo/ssDNA model that interact with ssDNA throughout the MD simulation time. Atomic-contact plot from the cpptraj nativecontacts analysis: A) atoms of the residues interacting with their respective nucleotide (NT) atoms; B) tertiary structure of the model; C) those residues with their molecular surface. Residues in contact for more than 75% of the simulation time are shown in red; those between 25% and 75% in orange.] Together with full visual analysis of all frames calculated in the production stage, these representative structures increased our understanding of the behavior of the amino acid residues in the DNA interaction pocket. Based on the analysis of the representative structures, the tertiary structure of the hsHLTFapo domain behaves more uniformly than that of scRad5. In both models without the ssDNA strand, the interaction pocket was not fully formed in any frame of the simulation, suggesting that this pocket does not spontaneously acquire its functional conformation and becomes structured only in the presence of a ligand. The loop regions and their residues responsible for ssDNA interaction exhibit high spatial dynamics in the simulations.
For conformational induction to occur, the region requires a complementary fit between the pair and many attempted collisions and interactions between protein and ligand, justifying the need for a highly mobile pocket in the free form (Du et al., 2016). Conformational induction by the ligand allows the protein to interact with different substrate conformations, tolerating mutations and variation in the ligand or binding site and thereby providing an evolutionary advantage (Du et al., 2016). PDB ID 4S0N (Kile et al., 2015), herein referred to as hsHLTFholo, is the structure of the HLTF HIRAN domain in its holo conformation, complexed with ssDNA, and was used as a reference to analyze the spatial positions of the amino acids in the pocket. During model preparation, all nucleotide atoms were removed so that a single HIRAN domain chain could be simulated without the ligand; this model was used to test whether the pocket, once fully structured and stabilized with ssDNA, would revert to the conformation found in the apo form. Comparison of the hsHLTFapo and hsHLTFholo interaction pockets revealed the importance of residues R71 and Y72 for pocket formation (Figures 5A and 5B). In hsHLTFapo, R71 is internalized in the pocket, its side chain restricting the ssDNA interaction, whereas in hsHLTFholo R71 is externalized. This outward position of R71, in turn, internalizes Y72 and makes possible the π-π stacking with the nucleotides together with Y93. Notably, the transition between the hsHLTFapo and hsHLTFholo conformations, in either direction, was not observed in any frame of the production stage. The putative pocket of the Rad5 HIRAN domain resembles the hsHLTFapo pocket: although the key ssDNA-interacting residues differ, residues with similar side-chain physicochemical properties occupy similar positions. All the pockets have high spatial dynamics, and the side-chain conformation of K194, the counterpart of R71, sterically restricts the entry of ssDNA when internalized in the pocket (Figure 5C). The RMSD values of hsHLTFholo show that this structure has lower structural mobility than the hsHLTFapo model (Supplementary Table S2). [Figure 7. Atomic-contact plot from the cpptraj nativecontacts analysis of the hsHLTFapo/ssDNA model: A) atoms of the residues interacting with their respective nucleotide (NT) atoms; B) tertiary structure of the model; C) those residues with their molecular surface. Residues in contact for more than 75% of the simulation time are shown in red, those between 25% and 75% in orange, and those below 25% in pink.] This may be because the original structure was obtained by crystallization and is therefore a situational conformation that is more rigid, or because HIRAN is more stable in its holo conformation with ssDNA (Dauter & Wlodawer, 2016; Su et al., 2015). The representative structures of the hsHLTFholo model (Supplementary Figure S24) show a homogeneous profile without major structural deviations. This analysis suggests that observing the transition between the holo and apo pocket structures of the HLTF HIRAN domain would require a longer simulation time or another MD simulation method.
The scRad5 ssDNA model has more electrostatic side-chain residues in the interaction pocket and is stabilized by the ssDNA

The molecular dynamics simulation of the hsHLTFholo model with ssDNA, herein referred to as hsHLTFholo/ssDNA, was used as the reference for simulations involving HIRAN and its ligand. [Figure 8. Atomic-contact analysis of the scRad5 ssDNA model: A) atoms of the residues interacting with their respective nucleotide (NT) atoms; B) tertiary structure of the model; C) those residues with their molecular surface. Residues in contact for more than 75% of the simulation time are shown in red, those between 25% and 75% in orange, and those below 25% in pink. Figure 9. A, B) Residues responsible for ssDNA stabilization, extracted from the native-contacts data, shown in red in the hsHLTFapo/ssDNA and scRad5 ssDNA models, respectively. C) Superposition of the structures in A and B, with the hsHLTFapo/ssDNA residues in blue and light blue and the scRad5 ssDNA residues in red and pink.] We performed molecular docking to obtain the initial structures used in the dynamics with protein-ssDNA components (Supplementary Figures S25 and S26). The RMSD data of the hsHLTFholo/ssDNA model (Supplementary Figure S27 and Supplementary Table S2) showed a more stable structure with less motility than the models without ssDNA, and the RMSF data revealed that the loop regions involved in the ssDNA interaction had less spatial mobility than in hsHLTFholo without the ligand. Together, these results suggest that the presence of the strand affects the stability and structural flexibility of the HLTF HIRAN domain. The analyses of native and non-native contacts between the HIRAN domain and ssDNA in our simulations agree with the results obtained by Kile and colleagues (Kile et al., 2015), except for F142, which interacted with NT1 for 25% of the simulation time (Figure 6A); this reflects the exposure of the model's initial nucleotides to the solvent and may not occur biologically. The π-π stacking of tyrosines Y72 and Y93 (Figure 6B) with the nitrogenous bases of the ssDNA, and the electrostatic interactions of N91 and H110 (Figure 6B) with the phosphate backbone, persisted throughout the simulation, showing the importance of these residues for stabilizing the strand within the pocket. Contacts between residues R71 and K113 and the ssDNA persisted for 25% to 75% of the simulation time (Figure 6), indicating that they also helped stabilize the nucleotides. The RMSD data of scRad5 ssDNA and hsHLTFapo/ssDNA (Supplementary Figure S28 and Supplementary Table S2) showed the systems in equilibrium without large variation in backbone atom distances, indicating that the DNA strand contributed to the stabilization of both HIRAN tertiary structures. The RMSF data showed that residues in loop regions did not decrease their mobility, suggesting that neither structure found a stable, minimum-energy conformation with the ssDNA of the kind seen in the hsHLTFholo/ssDNA model. Analysis of the hsHLTFapo/ssDNA RMSF plot indicates that the loop regions still have high motility, unlike the hsHLTFholo/ssDNA system, and formation of an interaction pocket similar to the holo conformation was not observed.
The atomic-contact data of hsHLTFapo/ssDNA (Figure 7A) showed that Y93 atoms interact with the nitrogenous-base atoms of NT4 for around 30% of the simulation time on average (Supplementary Figure S29). In addition, stabilization of the strand close to the interaction pocket was achieved mainly by electrostatic interactions of residues R71, N91, and H110 (Figure 7B). These results allow us to hypothesize an initial stage of the pocket's conformational change in which the tyrosine pairs with nucleotides NT4 and NT3. Unlike in the HIRAN domain model in its holo conformation, R71 had a longer interaction time with the ssDNA because it was positioned in the interior of the pocket, while Y72 was externalized and made only transient contacts with NT3 (Figure 7). The sharp decrease in atomic contacts from hsHLTFholo/ssDNA to hsHLTFapo/ssDNA can be attributed to the conformation of the interaction pocket: as shown in Figure 5, the hsHLTFapo model has a different spatial conformation of R71 and Y72 that hinders access to the DNA strand. This difference may cause steric constraints on the internal residues at the interaction site and prevent π-π stacking of the nucleotides with Y72 and Y93. Visual analysis of all production-stage frames of hsHLTFapo/ssDNA, together with the native and non-native contacts data, showed that residue Y93, when pointing into the interior of the pocket and free of spatial restriction, can pair and interact with one of the ssDNA nucleotides. Y93 may thus be the initial residue of interaction with the ssDNA, inducing pocket formation that allows Y72 to interact with another nucleotide through the conformational change involving R71, and consequently the stabilization of the strand by the electrostatic interactions of residues N91, K113, and H110. Substituting alanine for residues Y72 and Y93 prevents the HLTF HIRAN domain from interacting and complexing with ssDNA (Kile et al., 2015). In the scRad5 ssDNA model, although Y214 is exposed to the pocket, its predominant contact was with the sugar moiety of NT3 (O5' and O4' atoms) (Figure 8). This residue showed some interactions with N1 and C2 of NT2, reflecting the mobility of the ssDNA strand (Figure 8A); these interactions cannot be characterized as π-π stacking, since they involved specific atoms and the hydroxyl group of the side chain. Despite the high motility and the different conformation, the phosphate backbone became more exposed and more stable in the scRad5 ssDNA system. The non-internalized Y195 in the yeast model made only momentary contacts with NT1, again showing the motility of the solvent-exposed residues and the ssDNA (Figure 8A). The ssDNA remained stable near the Rad5 HIRAN domain because of several electrostatic interactions in the interaction pocket region: residues S243, R241, N191, and K221 had atoms in contact with nucleotide atoms for over 75% of the simulation time, R229, R219, Q248, and E244 for 25% to 75%, and K194 for approximately 10% (Figure 8). The scRad5 model has several amino acids with positively charged side chains near the DNA interaction pocket, and the simulation data suggest that strand stabilization was due to coulombic interactions. The hsHLTFholo/ssDNA and hsHLTFapo/ssDNA models have only five residues that interact electrostatically with the ssDNA (R71, N91, K113, H110, and Y92).
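The per-residue persistence classes used in Figures 6-8 (contacts in more than 75%, 25-75%, or less than 25% of frames) can be computed directly from a trajectory; below is a sketch with MDAnalysis, using placeholder file names, a 4 Å heavy-atom cutoff as in the paper's nativecontacts analysis, and simplified selection strings.

# Minimal sketch: fraction of frames in which each protein residue has any
# heavy atom within 4 A of the ssDNA, binned into the paper's classes.
import MDAnalysis as mda
from MDAnalysis.analysis import distances

u = mda.Universe("scrad5_ssdna.pdb", "production.xtc")  # placeholder files
dna = u.select_atoms("nucleic and not name H*")
protein = u.select_atoms("protein and not name H*")

hits = {}
n_frames = 0
for ts in u.trajectory:
    n_frames += 1
    d = distances.distance_array(protein.positions, dna.positions)
    touching = protein[(d < 4.0).any(axis=1)]  # atoms within 4 A of any DNA atom
    for res in touching.residues:
        key = (res.resname, res.resid)
        hits[key] = hits.get(key, 0) + 1

for (resname, resid), n in sorted(hits.items(), key=lambda kv: -kv[1]):
    frac = n / n_frames
    cls = ">75%" if frac > 0.75 else ("25-75%" if frac >= 0.25 else "<25%")
    print(f"{resname}{resid}: {frac:.0%} of frames ({cls})")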
In comparison, the scRad5 ssDNA model presented nine residues in contact: N191, K194, R219, K221, R229, R241, S243, E244, and Q248 (Figure 9). The DSSP analysis (Supplementary Figure S30) suggests that scRad5 ssDNA is more stable than scRad5, maintaining its secondary structures and increasing their residue count; the spatial restriction imposed by the ssDNA strand increased the inter-residue interactions and prevented unfolding of the α2 alpha-helix. The average number of residues in secondary structures was higher than in the replicas of the scRad5 model (Supplementary Figure S31 and Supplementary Table S3). Together with the greater homogeneity of the clustering of representative structures, this lets us hypothesize that the Rad5 HIRAN domain has greater stability in the presence of ssDNA.

Conclusions

Our work suggests that the tertiary structure of the Rad5 HIRAN domain has a greater degree of freedom than the HLTF HIRAN domain in its apo conformation. The three-dimensional instability, with constant unfolding of secondary structures in the scRad5 models, suggests the need for neighboring domains for its structural stabilization. Structural analysis of the ssDNA interaction pocket revealed an open and a closed state of the pocket, and transitions between them were not observed in our independent molecular dynamics trajectories. These results suggest a possible mechanism of conformational induction by ssDNA during the transition from the closed state of the HIRAN domain pocket to its open state, as this region exhibits great structural flexibility. In the HLTF HIRAN domain, this initial recognition can be attributed to Y93, whereas in Rad5 it was not possible to draw clear conclusions. Based on our findings, and as inferred from the RMSF and RMSD, observing the full mechanism of pocket formation from the apo to the holo conformation may require microsecond-scale simulations or another molecular dynamics simulation method. Analysis of the atomic contacts of the HIRAN domain with ssDNA shows that the positively charged face of the scRad5 tertiary structure is critical for stabilizing the ssDNA in the pocket and provides more residues in contact with the nucleotides. Together with the previously published data on the HIRAN domain and our analysis of the key residues, our results provide new molecular insights into the mechanism of action of the HIRAN domain in the recognition of the stalled replication fork. We showed that residues crucial for π-π stacking and electrostatic interactions with ssDNA, such as Y72, Y93, N91, and H110, are conserved among distant species, and our data demonstrate the importance of these residues for strand stabilization within the tertiary structure of the pocket. In the Rad5 HIRAN domain, stability is greater in the presence of ssDNA and may depend on interaction with neighboring domains.
Site index models with density effect for hybrid aspen (Populus tremula L. × P. tremuloides Michx.) plantations in southern Finland

Introduction

Hybrid aspen, a hybrid between the European aspen and the North American trembling aspen (Populus tremula L. × P. tremuloides Michx.), was introduced in Finland at the beginning of the 1950s in order to supply raw material for the matchwood industry. From the start of the breeding activities, the genetic variation and its effects have been studied using different hybrid progenies and clones (Beuker, 2000). In addition, experiments were established in southern Finland to study growth and yields (Oskarsson, 1962; Saloniemi, 1965; Hagman, 1971; Kallio, 1972). However, breeding and research activities with hybrid aspen decreased in the 1980s due to the decline of the matchwood industry (Tullus et al., 2012). Hybrid aspen received renewed attention during the 1990s, this time from the pulp and paper industry, because of its specific fiber characteristics and the predominant growth rate demonstrated earlier (Beuker, 2000).

Besides paper production, hybrid aspen also provides suitable raw material for plywood and veneer (Heräjärvi and Junkkonen, 2006). Because of its high growth rate and the resulting short rotation period, hybrid aspen may also be considered suitable for bioenergy (Rytter and Stener, 2005). The ability of hybrid aspen to regrow from root suckers after harvesting the primary stand results in even higher growth rates during the second and subsequent rotations (Hytönen, 2018). Because most of Finland's forested area is covered with Norway spruce and Scots pine, increasing the area of other (broadleaved) species would increase forest biodiversity. Hybrid aspen could be recommended as an alternative hardwood species for southern Finland.

To provide decision-making support for the establishment and management of hybrid aspen plantations in Finland, growth and yield models are needed to show the wood-production potential of the species. The site index is a widely applied predictor of site productivity and is included in the majority of growth and yield models (e.g., Clutter et al., 1983; Vanclay, 1994; Burkhart and Tomé, 2012). The site index is usually represented by the dominant height of a stand at a given age, based on a growth model for dominant height. Dominant height is commonly assumed to be independent of stand density, as presented in many textbooks and based on numerous studies (Hiley, 1959; Sjolte-Jørgensen, 1967; Dahms, 1973; Schmidt et al., 1976; Clutter et al., 1983; Seidel, 1984; Lanner, 1985; Pienaar and Shiver, 1984; Smith et al., 1997; Avery and Burkhart, 2002; Harrington et al., 2009).

However, there are also studies reporting an effect of initial spacing on stand arithmetic mean height or dominant height, particularly for hybrid species grown in short-rotation plantations. Conclusions on the effect of density on height growth have differed among studies, with both negative and positive effects reported (Knowe and Hibbs, 1996; Sharma et al., 2002; Harrington et al., 2009). In some studies, the effect of stand density has been included in site index models (MacFarlane et al., 2002; Sharma et al., 2002; Antón-Fernández et al., 2011).
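As a concrete illustration of what a dominant-height growth model underlying a site index looks like, a commonly used functional form (not necessarily the one fitted in this study) is the Chapman-Richards curve, in which dominant height H_dom develops with stand age t; in LaTeX:

H_{dom}(t) = a\left(1 - e^{-bt}\right)^{c}

where a is the asymptotic height and b and c control the growth rate and curve shape. The site index is then read off as H_dom at a chosen reference age, and density-sensitive variants let one or more parameters depend on stand density.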
In the Nordic countries, site indices have been presented only for the major tree species, such as Scots pine, Norway spruce and silver birch. There are only a few studies addressing height growth modelling for hybrid aspen (Johansson, 2013), and no growth and yield models for hybrid aspen in Finland have been published so far. No results on the effects of the initial planting density on dominant height growth for hybrid aspen in northern Europe have been published.

The objectives of this study were to examine the dominant height growth of clonal hybrid aspen plantations. The factors affecting dominant height growth were analysed, including the effect of the initial planting density. Dominant height growth models for site index assessment were developed. The predictability of the models was verified by comparing them to the results of previous studies.

Experimental stands

During the mid-1990s, superior individual trees were selected from stands and experiments with hybrid aspen progenies in southern Finland that had been established during the 1950s and 1960s. The selections were made based on growth performance and form; additionally, there had to be no signs of any biotic or abiotic damage. From these selected genotypes, only those that showed good vegetative propagation ability were included for further testing in field experiments.

The experimental stands used in this study were established using three clones (Table 1). At the time of stand establishment, only a very limited number of clones with sufficient planting material were available. The clones were reported to be superior to the common European aspen in terms of height growth at an early age (Hynynen et al., 2002, 2004).

The experimental sites were located in Lohja, Lapinjärvi, and Pornainen in southern Finland (Table 2, Fig. 1). This region has a relatively mild climate for Finland, with a temperature sum of 1300-1400 degree days (T ≥ +5 °C) and 600-700 mm of annual precipitation (Finnish Meteorological Institute, 2020). Experiments 2 and 3 were planted on a herb-rich heath forest (Oxalis-Myrtillus) site type (Cajander, 1949), while experiments 1 and 4 were planted on former agricultural fields. Experiments 1, 2 and 3 were located on fertile sites, which are favourable for aspen. Experiment 4 was established on a clay-rich soil, which is not considered the most suitable for hybrid aspen.

The original objective of the trials was to study the growth and yield of hybrid aspen clones under different site conditions and varying spacing. Each stand comprised one to three blocks, each of which was planted using one or two different clones (Table 2). Each clone was initially planted with four different target densities: 2.5 m × 2.5 m (1600 trees ha⁻¹), 3.0 m × 3.0 m (1200 trees ha⁻¹), 3.5 m × 3.5 m (800 trees ha⁻¹), and 5.0 m × 5.0 m (400 trees ha⁻¹). The actual number of trees per ha varied slightly between experiments, because on forest sites it was not possible to plant in straight lines because of stumps or rocks. The experiments were established with a randomised block design for the clone and initial spacing (Table 2, Fig. 1). The plot size was 25 m × 40 m (0.1 ha), with a 5 m buffer zone to offset random effects from adjacent plots.
The experiments were planted in 1997-1999 using one-year-old plants. Before planting, experiment 1 was ploughed during the previous autumn and harrowed during the spring just before planting. Patch scarification was carried out in experiment 2 and mounding in experiment 3 (Hynynen et al., 2002). There was no mechanical site preparation in experiment 4, but chemical weed control was conducted during the autumn prior to planting. After planting, the seedlings were protected from rodents and hares with 60 cm high Tubex tubes. In addition, experiments 1 and 3 were fenced against moose. Experiments 2 and 4 were not fenced because they were situated near a major road or in an agricultural area, where the risk of moose damage was low.

The first inventory of the experiments was made during the first autumn after planting. All measurements inside each plot were recorded at the single-tree level. All experiments were assessed annually from year 1 to year 4, measuring height with an accuracy of 1 cm. Thereafter, from age 5, they were measured every 2-4 years, including height measurements at an accuracy of 10 cm and measurements of diameter at breast height (dbh, at 1.3 m from the ground) with an accuracy of 1 mm. Single-tree data were repeatedly collected 7-12 times from each experiment from the year of establishment until 2015. This resulted in a total of 485 plot-level measurement instances. The summary statistics and information about the experiments and measurements are provided in Table 2. In addition, supplementary information on stand density trends and the size-density relationship is provided in Appendix A.

Statistical analysis and modelling approach

In the analysis, measurement data for all three clones were pooled together, because the experimental design did not allow the use of balanced data for each clone. Because the site index model was developed using dominant height over age, the dominant height has to be defined (Pienaar and Shiver, 1984). In this study, the dominant height was calculated as the average height of the 100 trees with the thickest dbh per hectare, which is the commonly used definition in northern Europe (Rantala, 2011).

In the measurements taken less than five years after planting, the dbh was not measured and is thus not included in the data. For these measurement data, the 100 tallest trees per hectare (the 10 tallest trees per plot) were used to calculate the stand dominant height. All data points from ages 1 and 2 were excluded from the modelling data in order to avoid the effect of varying initial seedling height at the time of planting on the height growth modelling. The total number of observations eventually applied for model development amounted to 389 data points from 48 plots in 4 experiments with a total of 9 blocks, covering a range of 3-20 years for age, 1.5-31.0 m for dominant height, and 400-1600 trees ha⁻¹ for the initial planting density (Table 2). Due to the experimental design, the data had a hierarchical structure (multiple sample plots on each site); therefore, a mixed-effects model with random site effects was applied in the analysis. To examine the growth characteristics of hybrid aspen, a correlation analysis between stand age and dominant height, and an analysis of covariance between stand age, dominant height, and initial spacing, were carried out using the PROC MIXED procedure in the SAS 9.4 statistical analysis software prior to model development (SAS Institute Inc., 2015).
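The dominant height definition above is straightforward to compute from a plot tree list. The following is a minimal sketch in Python, assuming a hypothetical data layout of (dbh, height) pairs per 0.1 ha plot; it illustrates the definition only and is not the authors' SAS code.

```python
# Minimal sketch (assumed data layout): each plot is a list of
# (dbh_mm, height_m) tuples for the live trees on a 0.1 ha plot.
# Dominant height = mean height of the 100 thickest-dbh trees per ha,
# i.e. the 10 thickest trees on a 0.1 ha plot; before dbh was measured
# (ages < 5), the 10 tallest trees are used instead, as in the text.

def dominant_height(trees, plot_area_ha=0.1, use_height=False):
    n_top = round(100 * plot_area_ha)          # 10 trees on a 0.1 ha plot
    key = (lambda t: t[1]) if use_height else (lambda t: t[0])
    top = sorted(trees, key=key, reverse=True)[:n_top]
    return sum(h for _, h in top) / len(top)

plot = [(182, 16.4), (201, 17.9), (155, 14.8), (176, 16.1)]  # toy data
print(dominant_height(plot))
```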
To develop the dominant height growth model, in the early analysis we considered several representative growth functions from forest biometrics, such as the Schumacher, Chapman-Richards, Hossfeld, and Gompertz functions. By comparing the growth patterns and fit statistics, the Chapman-Richards growth function was found to be the most suitable base equation for developing the site index model in the main results of the present study (Bertalanffy, 1957; Richards, 1959; Chapman, 1961). The function has been widely used, especially for height growth modelling of plantation forests (e.g., Cao, 1993; Amaro et al., 1998; Palahí et al., 2004; Nord-Larsen, 2006; Huuskonen and Miina, 2007; Weiskittel et al., 2009; Johansson, 2013; Lee et al., 2015).

To study effects in addition to stand age, the parameters of the Chapman-Richards function have been expressed as functions of other stand characteristics, such as soil and climate factors, or assessed by comparing the significance of modified parameter terms in candidate models with an F-test (full model vs. reduced model) (Huuskonen and Miina, 2007; Smith et al., 2014). The effect of initial stand density has been studied with the help of modified parameters of the Chapman-Richards function (e.g., Pienaar and Shiver, 1984; Knowe and Hibbs, 1996; Sharma et al., 2002; Antón-Fernández et al., 2011).

In this study, the effect of initial density on the growth pattern was analysed by adding the density effect to the parameter terms of the Chapman-Richards function: asymptote, growth rate, shape, and all their combinations. Then, the predicted growth patterns and fit statistics were compared among every possible combination of the density-sensitive candidate models.

Model parameters were estimated using the PROC NLMIXED procedure in SAS 9.4 (SAS Institute Inc., 2015). The suitability of the models was checked using fit statistics: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the coefficient of determination (R²) and the root mean squared error (RMSE). Residual plots were diagnosed using all the independent variables as well as the predicted over the observed values. After the verification process, the site index curves were plotted as anamorphic equations, with a base age of 20 years, by transforming the developed dominant height growth model. A base age of 20 years was chosen based on previous studies (Johansson, 2013) and by taking into account the expected final harvest age and prospective yield models for Finland. Furthermore, the parameters and site index curves were compared with the results of earlier studies for northern Europe that used the same base growth function (Johansson, 2013).

Dominant height growth by initial stand density

The relationship between stand age and dominant height was essentially linear up to the age of 20. The correlation coefficient between dominant height and age was 0.98 (P < 0.0001). Thus, there was no obvious sign of an asymptote of height growth until the age of 20 years in the studied clonal hybrid aspen plantations. This strong linearity was the basis for the selection of the Chapman-Richards function for site index development in the later part of the analysis. An analysis of covariance was applied to examine the overall significance of the initial spacing on dominant height development. In the analysis of covariance, the dominant height of a stand at the time of each measurement instance was used as the dependent variable (Eq. (1)):
H_ij = μ + τ_i + γ(A_ij − Ā) + u + ε_ij  (1)

where H is dominant height, μ is the global mean, τ_i is the effect of the ith initial spacing class, γ is the slope coefficient of the covariate, A_ij is the jth observation of the covariate, stand age (A), in the ith initial spacing class, Ā is the global mean of the covariate, u is the random effect for the experiment, and ε_ij is the error term.

The dominant height was significantly different between the initial spacings, which were categorised into four classes: 2.5 m × 2.5 m, 3.0 m × 3.0 m, 3.5 m × 3.5 m, and 5.0 m × 5.0 m (Table 3). Wider initial spacing resulted in slower dominant height growth. This significant result was valid regardless of the definition of dominant height (Appendix B).

Model fitting and validation

The growth models were developed by applying the Chapman-Richards function, taking into account the strong correlation with age and the significant effect of the initial planting density on the dominant height. For the model with a density effect, every possible combination of candidates was examined using fit statistics to choose the best density-sensitive model (Appendix C). Two final model variants, a density-free model (Eq. (2)) and a density-sensitive model (Eq. (3)), were fitted to the data:

H = (a + u1)(1 − e^(−bA))^(c + u2) + ε  (2)

H = (a + u1)(1 − e^(−bA))^(c0 + c1·D + u2) + ε  (3)

where H is the dominant height (m); D is the initial planting density (trees ha⁻¹) divided by 10,000; and e is the base of the natural logarithm. In both models, a and b are parameters referring respectively to the asymptote and the growth rate of the original Chapman-Richards function. In Eq. (2), the shape of the function is expressed as a single parameter c, but in the density-sensitive model (Eq. (3)) the shape is affected by the initial planting density (c0 + c1 × D); u1 and u2 are random effects; ε is the random error term. Note that the growth rate parameter (b) is estimated as a fixed effect, and only the asymptote and shape parameters vary with random effects, due to convergence problems when applying a random effect to the growth rate term (cf. Lappi and Bailey, 1988; Hall and Bailey, 2001; Huuskonen and Miina, 2007).

Table 4. Parameter estimates and fit statistics of the dominant height growth models, depending on the application of the initial density effect, for hybrid aspen; a nonlinear mixed-effects model based on the Chapman-Richards function (Eq. (2), density-free; Eq. (3), density-sensitive). All fixed-effect parameters are significant (P-values in parentheses). AIC is the Akaike information criterion, BIC the Bayesian information criterion, R² the coefficient of determination, and RMSE the root mean square error.

All the fixed-effect parameters were significant in both models (Table 4). In both models, the parameters a and b, referring to the asymptote and growth rate, respectively, were quite similar. However, the standard errors of the parameters were lower in the density-sensitive model. In the density-sensitive model, the effect of the initial planting density (parameter c1) was included in the shape parameter. According to the density-sensitive model (Eq. (3)) fitted to the hybrid aspen data, the value of the shape parameter of the Chapman-Richards model varied from 1.6378 with 400 trees ha⁻¹ to 1.4775 with 1600 trees ha⁻¹. Thus, for hybrid aspen stands with a low stand density, early growth was slower and the increment curve reached the inflection point later than for stands with a high stand density.
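As an illustration of how Eq. (3) behaves, the sketch below evaluates the density-sensitive Chapman-Richards curve at the fixed-effect level. The asymptote and growth-rate values are hypothetical placeholders (the fitted estimates live in Table 4, which is not reproduced here); c0 and c1 are back-solved from the two shape values quoted above (1.6378 at 400 trees ha⁻¹ and 1.4775 at 1600 trees ha⁻¹).

```python
import math

# Minimal sketch of the density-sensitive Chapman-Richards prediction
# (Eq. (3)) at the fixed-effect level. ASYMPTOTE_A and RATE_B are
# hypothetical placeholders, not the fitted values from Table 4.
ASYMPTOTE_A = 35.0   # hypothetical asymptote (m)
RATE_B = 0.07        # hypothetical growth-rate parameter
C1 = (1.4775 - 1.6378) / (0.16 - 0.04)   # D = trees/ha divided by 10,000
C0 = 1.6378 - C1 * 0.04                  # back-solved from quoted shapes

def dominant_height(age, trees_per_ha):
    d = trees_per_ha / 10_000.0
    shape = C0 + C1 * d
    return ASYMPTOTE_A * (1.0 - math.exp(-RATE_B * age)) ** shape

for dens in (400, 800, 1200, 1600):
    print(dens, round(dominant_height(14, dens), 1))
```

With the shape decreasing as density increases, early growth is faster in denser stands, matching the inflection-point behaviour described above.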
The model performance was also evaluated using residuals, AIC, BIC, R², and RMSE. All indices were better for the density-sensitive model than for the density-free model. In terms of residuals and RMSE, the density-sensitive model performed distinctly better than the density-free model. Residual plots were checked to verify model behaviour (Fig. 2). When comparing the observed and predicted values, the residuals of the density-free model showed a stepped, discrete distribution because age was the only predictor (Fig. 2, plots a1 and a2). On the other hand, the residuals of the density-sensitive model were more dispersed, with varying predictions even for the same age due to the variation in initial density, which was reflected in the predicted values of the density-sensitive model (Fig. 2, plots b1 and b2). The residual variation was slightly smaller in the density-sensitive model than in the density-free model, which implies a better fit to the data. The same pattern was also observed in the residuals over age (Fig. 2, plots a3 and b3). Neither of the models showed abnormal trends or biases in the residuals. However, an obvious distinction between the two models was detected in the scatterplot of residuals against initial density (Fig. 2, plots a4 and b4). In the density-free model, the mean of the residuals increased from a negative to a positive value with increasing initial density, showing a biased prediction with respect to the initial density. The residuals for the initial density thus implied a better fit of the density-sensitive model to the dominant height growth of hybrid aspen.

Exploratory growth description and site index application

In order to assess the growth patterns and the effect of initial density, the models were used to simulate dominant height development at different stand densities (Fig. 3). The dominant height predictions varied between 2.0-2.7 m at age 3 and 22.9-24.5 m at age 20 for the various initial densities. The initial density influenced only the early dominant height growth rate, resulting in an increasing dominant height differentiation during the first 14 years. The largest difference in dominant height growth was observed at age 14, when the dominant height was 17.7 m for a density of 1600 trees ha⁻¹ and 16.0 m for a density of 400 trees ha⁻¹. Thereafter, the dominant height differences decreased, and the difference was predicted to be 0.7 m between 1600 trees ha⁻¹ and 400 trees ha⁻¹ at age 40. The predicted height curve of the density-free model remains in the middle of the range of the density-sensitive model predictions (Fig. 3).

Furthermore, the periodic annual increment (PAI) of dominant height in the density-sensitive model was studied by calculating the growth difference according to the initial density over age, to describe the general incremental pattern and the age of the maximum PAI (Fig. 4). The annual increment of the dominant height was higher for high density and lower for low density, as shown for the dominant height itself. The PAI increased annually, up to 1.34 m year⁻¹ at age 9 for a density of 400 trees ha⁻¹ and up to 1.42 m year⁻¹ at age 7 for a density of 1600 trees ha⁻¹.
Fig. 3. Dominant height growth curves linked to the initial planting density, based on the density-free and density-sensitive (400-1600 trees ha⁻¹) models for hybrid aspen. Regression lines via Eqs. (2) and (3) are displayed using the fixed effects provided in Table 4. The age range fitted for model development was from age 3 to age 20; thereafter the predicted curves were extrapolated. From the entire prediction (plot a), height curves over a certain range (plot b) are magnified to clarify the growth differences.

Fig. 5. The density-free site index model (Eq. (4)), with the data points used for model development (grey circles), compared to the curve (dashed line) of a study from Sweden (SW) reported by Johansson (2013), which also used the Chapman-Richards model. The site index (SI) in both studies indicates the dominant height at a base age of 20 (H20).

The annual increment decreased after that, and subsequently the growth differences due to the initial density started to decrease. After age 14, the PAI pattern reversed with respect to density, and the annual increment of the dominant height was lower for initially high-density stands than for initially low-density stands. Still, the difference in the annual increment between initial densities was insignificant compared to the situation before the reversal.

Site index equations based on the developed density-free and density-sensitive dominant height growth models can be expressed, respectively, as the density-free (Eq. (4)) and density-sensitive (Eq. (5)) site index models of the anamorphic curve, using the estimated parameters; for the density-free model,

S = H × [(1 − e^(−b·A0)) / (1 − e^(−b·A))]^1.5441  (4)

where S is the site index (m); A0 is the base age of 20 years; and the other terms are as defined earlier. The density-sensitive site index model (Eq. (5)) takes the same anamorphic form, with the exponent replaced by the density-dependent shape term (c0 + c1 × D).

In the density-sensitive site index model (Eq. (5)), unlike conventional site index models, the dominant height prediction at a given age varies according to the initial density. For instance, if Eqs. (4) and (5) are applied to predict the dominant height of a 15-year-old stand with a site index (H20) of 27 m, the prediction from the density-free site index model (Eq. (4)) is 20.7 m. The prediction of the density-sensitive site index model (Eq. (5)) is 20.3 m for a stand with an initial density of 400 trees ha⁻¹ and 20.9 m for 1600 trees ha⁻¹.

So far, the only published site index model for hybrid aspen in the Nordic countries was developed in Sweden by Johansson (2013). It also used the Chapman-Richards function, but without a stand density effect. The density-free site index model (Eq. (4)) was compared with Johansson's model for a range of site indices with a base age of 20 years (Fig. 5). The dominant height development of stands with site indices from 18 to 30 m, at 3 m intervals, was predicted from age 5 to age 30. In general, the predicted development of the dominant height was slightly higher than in the model by Johansson (2013). However, the site index curves of the two studies were practically identical from age 13 to 22. Before and after this range, the site index curve from the present study was above the predicted dominant height of Johansson's model, the difference being 0.8-1.3 m at age 5 and 0.7-1.2 m at age 30.
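A small sketch of the anamorphic transformation behind Eq. (4): given a site index H20, the dominant height at another age follows from the ratio of the Chapman-Richards age terms. The growth-rate parameter below is a hypothetical placeholder, so the printed value will not exactly reproduce the paper's 20.7 m worked example.

```python
import math

# Minimal sketch of the anamorphic site-index transformation (Eq. (4)),
# rearranged to predict dominant height from a site index. RATE_B is a
# hypothetical placeholder; SHAPE_C = 1.5441 is taken from Eq. (4).
RATE_B = 0.07      # hypothetical growth-rate parameter
SHAPE_C = 1.5441   # density-free shape parameter from Eq. (4)
BASE_AGE = 20

def height_from_site_index(site_index, age, b=RATE_B, c=SHAPE_C):
    ratio = (1.0 - math.exp(-b * age)) / (1.0 - math.exp(-b * BASE_AGE))
    return site_index * ratio ** c

# Text's worked example: a 15-year-old stand with H20 = 27 m is predicted
# at about 20.7 m by the density-free model (the exact value depends on
# the fitted b, which is not reproduced here).
print(round(height_from_site_index(27.0, 15), 1))
```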
Discussion

Effect of initial planting density on height growth

Site index models for hybrid aspen were developed based on data from repeatedly measured clonal plantations located in southern Finland. In this study, the characteristics of dominant height growth linked to the initial density of the stand were studied, and models were developed considering these characteristics. The general dominant height growth patterns observed in our study were similar to the findings of earlier studies in the Nordic and Baltic countries (Rytter and Stener, 2005; Heräjärvi and Junkkonen, 2006; Johansson, 2013; Zeps et al., 2016; Stener and Westin, 2017; Fahlvik et al., 2019). In the present study, nonetheless, it was shown for the first time that the dominant height growth of hybrid aspen is affected by the initial stand spacing (Table 3). A dependence of height growth on spacing has been found for other tree species, including several studies on hybrid poplar plantations (DeBell and Harrington, 1997; Johnstone, 2008; Benomar et al., 2012; Ghezehei et al., 2016). However, different studies report both negative and positive effects of spacing on height growth. Benomar et al. (2012) reported that, depending on the clone, the mean height growth of hybrid poplar trees increased with initial density.

A positive correlation between mean height growth and initial planting density was also found in a study on ash by Kerr (2003). He proposed three hypotheses for the higher growth at closer spacing: an improved microclimate, reduced interspecific competition, especially from weeds, and altered red/far-red light reflected from foliage. If no weed control is carried out, competition from weeds on former agricultural land is strong (Hytönen and Jylhä, 2005, 2013), and competition from weeds can have a significant effect on the survival and growth of Populus seedlings (Böhlenius and Övergaard, 2015). Although the effect of weed competition on height growth could not be verified in our study, it could be an explanation for the difference in dominant height growth linked to the initial density, and for why the effect was strongest during the early years of stand development.

Model evaluation with stand density effect

In this study, the dominant height growth was modelled by applying the widely used Chapman-Richards function, including age and initial planting density as predictors. A similar approach was tested by Pienaar and Shiver (1984) for slash pine plantations. They reported no significant effect of the initial stand density on the parameters of the Chapman-Richards function and concluded by using a model without an initial density effect. However, they also stated that the initial density may have had an effect on the dominant height earlier than the range of stand ages included in their data, which is consistent with our result that a significant distinction in dominant height growth was observed during the early ages only (Fig. 3).
Fig. 6. Growth comparison between hybrid aspen and other major species in Finland on a herb-rich site (Oxalis-Myrtillus forest type) (Cajander, 1949; Tonteri et al., 1990). The density-free model of the present study was applied for hybrid aspen in the dominant height growth curve (plot a, solid line) via Eq. (2) and in the site index curves (plot b, solid line) via Eq. (4). The site index (SI) for hybrid aspen indicates the dominant height at a base age of 20 (H20). The data for Norway spruce and Scots pine were provided by the MOTTI simulator (Natural Resources Institute Finland, 2015). The dominant height model for silver birch was taken from Oikarinen (1983).

In a study by Knowe and Hibbs (1996), the initial density effect was included in the growth rate parameter for red alder stands, covering ages up to 7 years. Those results match our finding that the annual height increment linked to the initial density reversed near the peak of its growth (Fig. 4).

In a model for loblolly pine by Antón-Fernández et al. (2011), the asymptote, growth rate, and shape parameters were all estimated using the initial spacing as a variable, and the effect proved to be significant for all three parameters. However, the dominant height growth model of our study gave the best fit using only a modified shape parameter, among every possible combination of candidates (Eq. (3), Table 4, Appendix C). Because we did not modify the asymptote and growth rate parameters in our model, it does not directly contradict the concept that dominant height growth may not be affected by stand density. The growth rate (parameter b) of hybrid aspen in our models (Eqs. (2) and (3)) was not affected by the initial planting density. However, the dominant height was strongly linear over age within the measured range, and our models did not indicate any obvious asymptote (parameter a). Therefore, the interpretation should be treated with care, especially when extrapolating beyond the age of 20 (Table 4, Fig. 3). For higher ages, additional field measurements and analysis are needed.
Practicability and applicability of the final developed models

The final models fitted well when using the initial planting density as a predictor, but one should be cautious when applying these results. Hybrid aspen grows much faster than the native European aspen in Finland (Hynynen et al., 2002, 2004). For this study, the modelling data were collected from clonal plantations of hybrid aspen. Hence, the models should not be applied to hybrid aspen plantations established with seedlings or to second-rotation stands regenerated from root suckers, because their growth characteristics clearly differ (Hytönen, 2018; Fahlvik et al., 2019). The clones originated from superior individual trees that were selected from progenies of controlled hybrid crossings, which is why, in general, clonal plantations grow faster than plantations from seedlings. In this study, variation in growth between the clones was not acknowledged, which resulted in one single model for all clones. Clonal trials with hybrid aspen were conducted at the same time as the planting density trials, as part of the Finnish national tree breeding programme. They showed that at age 12 there is a significant difference in height between clones. All three clones used here performed above the average of the total of 25 clones tested (unpublished data). However, in this study the clone effect might be confounded with a possible site effect, because the different clones were grown at different sites (Table 2).

The investigated initial planting density ranged from 400 trees ha⁻¹ to 1600 trees ha⁻¹, which is common for fast-growing tree species (such as poplars) in plantations but wider than normally used in Finland for commercial tree species. Thus, the model should be applied with caution to stands with initial densities outside this range. In cases where the initial density is not known, the density-free model can be used, but only when a small bias is acceptable in comparison with the density-sensitive model (Fig. 2, plots a4 and b4). The spatial coverage was confined to the region of southern Finland, but our models may be extended to neighbouring countries such as southern Sweden and Estonia, where the geographical environment is similar, because the general growth pattern is quite similar to studies in those countries (Rytter and Stener, 2005; Zeps et al., 2016; Stener and Westin, 2017). Nonetheless, the models are not recommended for application in regions where the climate, soil, and/or topography are considerably different. In Finland, the models should be used only in the southern part of the country.

Comparison of growth and models to earlier findings

The models developed in the present study were similar to the dominant height growth and site index curves developed in earlier studies (Johansson, 2013; Fahlvik et al., 2019). In particular, the site index curves by Johansson (2013) were almost identical to the density-free site index model of our study for ages 13-22 (Fig. 5). Some differences between the two studies were detected outside this age range. This could be because the site quality of our experiments is expected to be more productive. In addition, due to progress in tree breeding, the present clones are expected to be more productive than those from the 1940s-1950s (Johnsson, 1953; Johansson, 2013). Fahlvik et al. (2019) reported dominant height growth similar to our study beyond age 20. Hence, our models could be verified against, and applied in, southern Sweden.
The developed models for the dominant height and site index were compared to those of other major tree species in Finland (Cajander, 1949; Oikarinen, 1983; Tonteri et al., 1990; Natural Resources Institute Finland, 2015). Hybrid aspen was remarkably taller in dominant height than silver birch, Scots pine, or Norway spruce (Fig. 6). Populus species are in general fast-growing, and for species hybrids such as hybrid aspen, heterosis in combination with intensive clonal selection increases this vigorous growth even further. This indicates the need for models developed specifically for the species, also considering a shorter rotation age. Our models are expected to be used in studies evaluating site productivity and in developing further growth and yield models for hybrid aspen.

Conclusion

Dominant height growth and site index models were developed using data from clonal hybrid aspen plantations in the age range of 3 to 20 years in southern Finland. Dominant height growth was significantly affected by the initial planting density within the range of 400-1600 trees ha⁻¹: the higher the initial density, the greater the dominant height. Considering the effect of the initial density, dominant height growth models were developed using the Chapman-Richards function with modified parameters: a density-free model and a density-sensitive model. The density-sensitive model provided the best fit when only the shape parameter of the Chapman-Richards function was modified, estimating the parameters in a form with a clear biometrical interpretation and projecting a greater dominant height with increasing initial density.

Anamorphic site index curves were described properly by the density-sensitive model as well as by the density-free model. The developed models can be used for hybrid aspen plantations regardless of the clone used. However, they should not be used for European aspen or for second-rotation hybrid aspen stands grown from root suckers. The applicable geographical range should be limited to regions with environmental conditions similar to those of southern Finland. One should be cautious when extrapolating to ages over 20 years and/or to initial densities outside the range of 400-1600 trees ha⁻¹. The developed dominant height and site index models, including an initial density variable, are intended for clonal hybrid aspen plantations in southern Finland.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Still, one may question the current dominant height definition, calculated using the top 6.25% (100/1600), 8.33% (100/1200), 12.50% (100/800), 25.00% (100/400) sample trees per ha for different initial placing, respectively.One could doubt that the significant difference was not due to the initial spacing but because of the specific definition resulting in the selection of a higher dominant height in denser plots.In order to show that superior dominant height growth in denser spacing was not because of the definition of the dominant height (or specific dominant height selection method), an analysis of covariance (ANCOVA) was carried out for the most representative several definitions (Table B1).The result shows that, regardless of the definition, the dominant height growth was significantly different for initial stand densities.Therefore, the most common definition in northern Europe, Criterion B.1, could be chosen. Table C2 Parameter estimates and fit statistics for the candidate models provided in Table C1. Fig. 1 . Fig. 1.Locations of the hybrid aspen density field trials and the design of blocks 1 and 2 in experiment 1 as an example. D .Lee et al. Fig. A1.Stand density trend with age (a) and the size-density relationship with the dominant height (b), quadratic mean diameter (c), and the basal area weighted mean diameter (d), which is commonly used in Northern Europe. Table 2 Description of the field trials and summary statistics of the measurements included in this study. a No. of plots = no. of clones × no. of blocks × no. of treatments.b OMT is Oxalis-Myrtillus (a herb-rich heath forest) site type Table 1 Information on hybrid aspen clones included in the study.BC = British Columbia, CA = Canada, FI = Finland. D.Lee et al. Table 3 Analysis of covariance (ANCOVA) and parameter estimates of fixed-effects for stand age and initial spacing. Estimates by candidate model no.described in Appendix C
Association of serum liver enzyme Alanine Aminotransferase (ALT) in patients with type 2 diabetes

Objectives: To assess the association of raised serum liver enzyme (ALT) with type 2 diabetic subjects.

Methods: This retrospective data was accessed at Baqai Institute of Diabetology and Endocrinology (BIDE) from January 2005 to May 2016. A total of 1966 subjects with type 2 diabetes were included in the study. Subjects were divided into two groups: in Group-A, 1284 subjects had ALT within the normal range (ALT ≤ 35 IU/L), and in Group-B, 682 subjects had elevated ALT (ALT > 35 IU/L). Details of demographics, anthropometric measurements and baseline biochemical results were extracted from the health management system of BIDE. Data analysis was conducted with the Statistical Package for Social Sciences (SPSS) version 20.

Results: Out of 1966 type 2 diabetic subjects, 1284 (65.4%) were observed with a normal ALT value (≤35) and 682 (34.6%) with elevated ALT (>35). The overall mean age of subjects was 54.66 ± 10.98 years and the mean BMI was 27.34 ± 5.99 kg/m². Significant differences were observed between the groups in age (higher if ALT > 35), gender (more likely to be male) and triglycerides (higher if ALT > 35), whereas no significant difference was found between the groups in HbA1c, cholesterol, HDL and LDL.

Conclusion: The high frequency of elevated ALT suggests an association of liver disease with type 2 diabetes. Type 2 diabetic subjects need to be routinely screened, and further studies assessing the possible associations with NAFLD and insulin resistance are required to further clarify the disease process.

INTRODUCTION

Type 2 Diabetes Mellitus (T2DM) is a metabolic disorder that is also associated with liver disease and raised liver enzymes.1 The pathophysiology in the liver of diabetic subjects is similar to that of alcoholic liver disease. The relationship between Non-Alcoholic Fatty Liver Disease (NAFLD) and T2DM is well known, as insulin resistance is the precursor pathophysiological mechanism in both conditions.2 The occurrence of raised liver enzymes along with liver diseases in T2DM has received more attention because a large part of the population is at risk, and because of the long-term consequences and high estimated costs for national health services.3 Circulating concentrations of liver transaminases have been used as surrogate measures of liver function and NAFLD.4 Previously, it has been reported that an elevated serum Gamma-Glutamyl Transferase (GGT) level is an essential risk factor in the development of Impaired Fasting Glucose (IFG), T2DM, cardiovascular disease and the metabolic syndrome, although Aspartate Aminotransferase (AST) and Alanine Aminotransferase (ALT) also show associations with the metabolic syndrome and T2DM. Similarly, AST, ALT, and GGT have been examined together, but only GGT showed a relationship with T2DM. A recent meta-analysis suggested that elevated levels of ALT and GGT are associated with an increased risk of T2DM, and that GGT is a more substantial risk factor than ALT.5 Moreover, obesity, type 2 diabetes, dyslipidemia, hypertension and insulin resistance are strongly associated with NAFLD.6 This disease has been reported in up to 75% of subjects with T2DM. In about 35% of the general population of Western countries it has become the most prevalent liver disease, while in specific groups of obese and diabetic subjects 75%-90% of individuals are affected.
NAFLD is among the leading causes of chronic liver disease and is also associated with obesity and the metabolic syndrome.7 Compensatory hyperinsulinemia, as a result of insulin resistance, leads to pancreatic enzyme dysfunction in T2DM, along with defective lipid metabolism and hepatic triglyceride accumulation. It has also been reported that the microvascular and macrovascular complications of diabetes are strongly related to NAFLD.8 Fracanzani et al. also explained that T2DM and insulin resistance were closely associated with the severity of liver disease in subjects with normal liver enzymes.6 In Pakistan, very few studies are available on the exact prevalence of this phenomenon. We carried out an observational retrospective study to determine the association of elevated serum ALT with type 2 diabetes at a tertiary care unit in Karachi, Pakistan.

METHODS

This retrospective study assessed the data records of type 2 diabetic subjects attending the outpatient department of Baqai Institute of Diabetology and Endocrinology (BIDE), Baqai Medical University, Karachi, Pakistan, from January 2005 to May 2016. Ethical approval was obtained from the Institutional Review Board (IRB) of BIDE. The inclusion criterion was subjects with type 2 diabetes who had the liver enzyme ALT measured during routine visits. Subjects with hepatitis, active alcoholism or a current history of liver disease were not included. Subjects with active malignancy or other severe diseases (congestive heart failure NYHA > 2, chronic obstructive pulmonary disease GOLD > 2, chronic kidney disease requiring dialysis, previous organ transplantation, and severe neurological diseases) were also excluded. A total of 1966 subjects with type 2 diabetes were included. Subjects were categorised into two groups: in Group-A, subjects had ALT within the normal range (ALT ≤ 35 IU/L), and in Group-B, subjects had elevated ALT (ALT > 35 IU/L).9 Details of demographics, anthropometric measurements (age, gender, BMI) and baseline biochemical parameters (HbA1c, cholesterol, HDL, LDL, and triglycerides) were extracted from the health management system (HMS) of BIDE.

Blood samples were collected for biochemical analysis. ALT was analysed using a fully automated analyzer. Plasma triglycerides and serum total cholesterol were determined by the GOD-PAP and CHOD-PAP methods, respectively, on the Selectra ProS. A homogeneous enzymatic colorimetric method was used for High Density Lipoprotein (HDL) cholesterol measurement, and a direct method was used for Low Density Lipoprotein (LDL) cholesterol measurement. HbA1c was measured by the HPLC method on a Bio-Rad D-10.10 Height was measured to the nearest 0.1 cm with the individual standing in an erect posture, and weight was measured with a portable weighing scale to the nearest 0.1 kilogram (kg). Body Mass Index (BMI) was calculated as the ratio of weight (kg) to height squared (m²).

Statistical Analysis: Data were analysed using the Statistical Package for Social Sciences (SPSS) version 20. Variables with a normal distribution (age, BMI, HbA1c, total cholesterol and HDL cholesterol) were compared using Student's t-test, triglycerides were compared using the Mann-Whitney U-test, and sex was compared using the Chi-squared test. A P-value < 0.05 was considered statistically significant.
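The comparisons described in the statistical analysis can be reproduced in outline with any standard statistics package. Below is a minimal sketch using Python's SciPy on synthetic stand-in arrays (the real values come from the BIDE HMS and are not reproduced here): a t-test for normally distributed variables, a Mann-Whitney U-test for triglycerides, and a chi-squared test on the 2 × 2 group-by-gender table.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data; group sizes match the paper (1284 vs. 682),
# the distribution parameters and the gender split are hypothetical.
rng = np.random.default_rng(0)
age_normal_alt = rng.normal(53, 11, 1284)    # Group-A: ALT <= 35 IU/L
age_high_alt = rng.normal(56, 11, 682)       # Group-B: ALT > 35 IU/L
tg_normal_alt = rng.lognormal(5.0, 0.4, 1284)
tg_high_alt = rng.lognormal(5.2, 0.4, 682)
sex_table = [[600, 684],   # [male, female] in Group-A (hypothetical)
             [400, 282]]   # [male, female] in Group-B (hypothetical)

print(stats.ttest_ind(age_normal_alt, age_high_alt))      # Student's t-test
print(stats.mannwhitneyu(tg_normal_alt, tg_high_alt))     # Mann-Whitney U
print(stats.chi2_contingency(sex_table)[:2])               # chi2, p-value
```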
RESULTS

The mean values of the metabolic parameters and the mean differences between the ALT ≤ 35 and ALT > 35 groups are shown in Table-I. A significant difference was found between the groups in age (if ALT > 35), gender (more likely to be male) and triglycerides (higher if ALT > 35), as shown in Fig. 1A, 1B and 1C, whereas no significant difference was observed between the groups in glycemic control, cholesterol, HDL and LDL (Table-I).

DISCUSSION

This study found a high rate of elevated liver enzymes (ALT) in type 2 diabetic subjects. Similar to other studies, elevated ALT was used as a surrogate marker of NAFLD, which is considered a common condition in subjects with T2DM.9 The other main findings of this study are the significant differences between the groups in age (if ALT > 35), gender (more likely to be male) and triglycerides (higher if ALT > 35), but not in HbA1c or obesity. That ALT levels differ by gender, with higher values in men than in women, has also been reported previously.11 Similarly, elevated values of ALT and TG are useful markers for the screening of NAFLD.12 An elevated ALT level has previously been reported to be significantly associated with obesity and low HDL cholesterol levels, but not with glycemic control.7 In the current study, however, no significant association was observed with obesity or low HDL. In terms of biological mechanism, the relationships between liver markers and glucose metabolism are not yet clarified, but there are a few potential candidates, and numerous studies have described an association between liver markers and diabetes.5 Ohlson et al. (1988) reported, in a cohort of 766 Swedish males followed for 13.5 years, that baseline ALT was a predictor of the incidence of type 2 diabetes, with a significant fourfold increased risk for males in the upper quintile compared to the lowest quintile.13 In Pakistan, a previous study reported that elevated ALT levels are an independent significant risk factor for T2DM among women. Increased serum ALT and GGT and a decreased AST/ALT ratio are involved in hepatic steatosis or visceral obesity. Biochemically, AST, ALT and GGT levels are used to signify hepatic inflammation, and the AST/ALT ratio denotes an alcoholic etiology in fatty liver.14 Furthermore, a previous study used the highest or lowest AST/ALT quartiles within their respective normal ranges as surrogate biomarkers for T2DM; in this study, however, only the ALT enzyme was significantly higher in type 2 diabetic subjects older than 35 years. In subjects with NAFLD, reduced insulin sensitivity was observed not at the muscle level but at the level of the liver and adipose tissue.15 This study found elevated aminotransferases with higher BMI values, which supports the results of Shahid et al., who reported raised serum ALT in subjects with type 2 diabetes, about 80% of whom were obese with a higher BMI.16

Limitations: The retrospective and observational nature of this study limits its implications, but the findings are worth reporting due to the paucity of data on these subjects. As representative of a resource-constrained setting, the physicians usually request ALT only instead of a complete liver function profile. Moreover, the non-availability of ultrasound imaging data is another limitation of this study.

CONCLUSION

The high frequency of elevated ALT suggests an association of liver disease with type 2 diabetes.
Type 2 diabetic subjects need to be routinely screened, and further studies assessing the possible associations with NAFLD and insulin resistance are required to further clarify the disease process.
Odds Ratios Estimation of Rare Event in Binomial Distribution

We introduce a new estimator of odds ratios for rare events using the Empirical Bayes method in two independent binomial distributions. We compare the proposed estimator of the odds ratio with two existing estimators, the modified maximum likelihood estimator (MMLE) and the modified median unbiased estimator (MMUE), using the Estimated Relative Error (ERE) as the criterion of comparison. The new estimator is found to be more efficient than the other methods.

Introduction

The odds ratio is a measure of association between two independent groups on a categorical response with two possible outcomes, success and failure. The two independent groups can be two treatment groups, or a treatment and a control group. The odds ratio is widely used in many fields of medical and social science research. It is most commonly used in epidemiology to express the results of clinical trials, such as case-control studies.

The number of subjects in each group falling in each category can be summarized in a two-way contingency table. The total numbers of subjects in group 1 and group 2 are n1 and n2, which are assumed to be fixed. The numbers of successes in group 1 and group 2 are y1 and y2, which are considered independent binomial random variables. Let p1 and p2 be the probabilities of success in group 1 and group 2, respectively. The odds of success in group 1 are defined to be odds1 = p1/(1 − p1), and similarly for group 2. The usual maximum likelihood estimator of the odds ratio is defined as

OR^ = [y1/(n1 − y1)] / [y2/(n2 − y2)] = y1(n2 − y2) / [y2(n1 − y1)].  (1)

The odds ratio is a nonnegative real value. When success is equally likely in both groups, the odds ratio equals 1, meaning that the groups are independent of the response. When the odds of a positive response are higher in group 1 than in group 2, the odds ratio is greater than 1, and vice versa for values less than 1. The farther the odds ratio is from 1 in a given direction, the stronger the association. In addition, its sampling distribution is highly skewed; the natural logarithm of the sample odds ratio, which is less skewed, is often used for inference. However, the sample odds ratio can be zero (if a zero cell count appears in the numerator of (1)), infinity (if a zero cell count is in the denominator of (1)), or undefined (if there are zero cell counts in both the numerator and denominator of (1)). Haldane [1] and Gart and Zweifel [2] suggested adding a correction term of 0.5 to each cell when a zero cell count occurs, which gives the modified maximum likelihood estimator (MMLE) as

OR^_MMLE = (y1 + 0.5)(n2 − y2 + 0.5) / [(y2 + 0.5)(n1 − y1 + 0.5)].  (2)

Even though OR^_MMLE still lies between 0 and infinity, some investigators have discouraged adding 0.5 to each cell because of the appearance of adding "fake data"; see Bishop et al. [3] and Agresti and Yang [4]. Amid this controversy, several similar alternatives to the modified maximum likelihood estimator have been proposed. Hirji et al. [5] proposed the median unbiased estimator (MUE) of the odds ratio, obtained from the conditional noncentral hypergeometric distribution. However, the median unbiased estimator of the odds ratio still causes a problem when y1 = n1 and y2 = n2 or y1 = y2 = 0, in which case the MUE is undefined. Parzen et al. [6] proposed an estimator of the odds ratio based on the MUE, called the modified median unbiased estimator (MMUE), for which the estimated probability of success always lies in the interval (0, 1), even if there are 0 or n successes in a group. Consequently, the estimated odds ratio always lies between 0 and infinity. Additionally, this method performed well with respect to bias in small samples and is an alternative to adding "fake data."
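For concreteness, Eqs. (1) and (2) can be written as a few lines of Python; this is a direct transcription of the formulas above (here the 0.5 correction is applied to every cell, in the Gart and Zweifel spirit, so the estimate stays finite whenever a zero cell occurs).

```python
# Eq. (1): the usual MLE of the odds ratio, and Eq. (2): the modified
# MLE (MMLE) with the 0.5 correction, for counts y1, y2 out of n1, n2.

def odds_ratio_mle(y1, n1, y2, n2):
    # Undefined (division by zero) or degenerate whenever a cell is zero.
    return (y1 * (n2 - y2)) / (y2 * (n1 - y1))

def odds_ratio_mmle(y1, n1, y2, n2):
    # Adding 0.5 to every cell keeps the estimate strictly between
    # 0 and infinity even with zero cell counts.
    return ((y1 + 0.5) * (n2 - y2 + 0.5)) / ((y2 + 0.5) * (n1 - y1 + 0.5))

print(odds_ratio_mmle(0, 10, 2, 10))  # finite despite the zero cell
```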
In this paper, we focus on "rare events": occasionally observed zero or small counts of events of interest occurring within a given time period or a given sample, such as natural disasters or certain diseases. As mentioned above, rare events cause difficulty in the estimation of the odds ratio due to the occurrence of zero or small observed counts in the numerator, the denominator, or both, resulting in a large standard error and therefore a less precise confidence interval; only a rough estimate of the odds ratio is then obtained. Research on the association between categorical variables in contingency tables has long been conducted, using both classical and Bayesian approaches. Good [7] studied, at an early stage, the association factor in large contingency tables with small entries, assuming log-normal and Pearson type III distributions; the author also noted that these assumptions may be less accurate but are easy to handle. Fisher [8] estimated the odds ratio based on the hypergeometric distribution using the exact method in a 2 × 2 table. Thomas and Gart [9] constructed a table of 95% confidence limits for the difference and ratio of two proportions, including the odds ratio, and the one-tailed value of the Fisher-Irwin exact test in various types of 2 × 2 tables. Altham [10] studied association and exact values in a 2 × 2 contingency table based on cumulative posterior probabilities, which were not easy to extract. Nurminen and Mutanen [11] proposed a Bayesian approach for the estimation of the difference between two proportions, the risk ratio and the odds ratio, using independent beta priors, and provided integral expressions for the cumulative posterior distribution. They also applied the proposed method to real data on malignant lymphoma and colon cancer cases exposed to phenoxy acids and chlorophenols in agriculture. Nouri et al. [12] presented the estimation of the odds ratio in 2 × 2 × tables when exposure was misclassified. They compared the matrix and inverse matrix methods to the MLE method using a simulation study and found that the inverse matrix method, having a closed form, was more efficient than the matrix method.

As previously mentioned, the estimation of association measures in a two-way contingency table can be carried out using classical or Bayesian approaches. The exact distribution in the classical approach is, however, rather difficult to handle mathematically. In the Bayesian approach, where prior belief is incorporated into the derivation of the posterior density, the hyperparameters characterizing the prior density are often unknown to researchers and need to be assessed irrespective of the current data; however, controversy still exists. Alternatively, the estimation of hyperparameters can plausibly be carried out with the Empirical Bayes method, which, contrary to the fully Bayesian approach, uses the current data to estimate the unknown hyperparameters. Consequently, we focus on the use of the Empirical Bayes method to estimate the odds ratio in a two-way contingency table, focusing on small probabilities of success. Our proposed estimator tends to outperform the traditional estimators, MMLE and MMUE, without interfering with the original data.
The rest of this paper is organized as follows. In the next section, we discuss the median unbiased estimator. The third section describes odds ratio estimation using the EB method. The fourth section presents the simulation results, in which the efficiency of EB is compared with that of MMLE and MMUE. The fifth section presents the application of our method to real data. Our conclusions are drawn in the final section.

The Modified Median Unbiased Estimator of Odds Ratio

Parzen et al. [6] suggested the modified median unbiased estimator (MMUE) for two independent binomial distributions. Let Y_i ~ Bin(n_i, p_i) denote the random variable representing the number of successes in the ith group (i = 1, 2), and let y_i be the observed value of Y_i. The MMUE can be computed from the distribution of the sufficient statistics for binomial data. Two auxiliary estimators, pL_i and pU_i, are the values of p satisfying

P(Y_i ≥ y_i | p = pL_i) = 1/2 and P(Y_i ≤ y_i | p = pU_i) = 1/2,

where pL_i and pU_i are the smallest and largest such values, respectively. When 0 < y_i < n_i, the values of pL_i and pU_i can be obtained by using the relationship between the cumulative beta distribution and the cumulative binomial distribution function (Daly [13]; Johnson et al. [14]):

pL_i = F^(-1)(0.5 | y_i, n_i − y_i + 1) and pU_i = F^(-1)(0.5 | y_i + 1, n_i − y_i),

where F^(-1)(q | a, b) is the qth quantile of the beta distribution with parameters a and b. Now suppose y_i = 0; then P(Y_i ≥ 0 | p) = 1 for every p in [0, 1], so pL_i = 0 is taken as the smallest possible value and p_i = (pL_i + pU_i)/2 = pU_i/2. Similarly, when y_i = n_i, pU_i = 1 is taken as the largest possible value and p_i = (pL_i + 1)/2. In general, the MMUE of the success probability is p_i = (pL_i + pU_i)/2, and the MMUE of the odds ratio is defined as

OR^_MMUE = [p1/(1 − p1)] / [p2/(1 − p2)],

where p1 and p2 denote the success probability estimators in groups 1 and 2, respectively.

Proposed Estimation of Odds Ratio

In this section, we propose a new method for odds ratio estimation using the Empirical Bayes method in two independent binomial distributions. Let Y1 and Y2 be random variables distributed as binomial, with equal or unequal sample sizes and unknown probabilities, Y1 ~ Bin(n1, p1) and Y2 ~ Bin(n2, p2), with each p_i following a conjugate beta prior, p_i ~ Beta(α_i, β_i). Consequently, the marginal distribution of Y_i is the beta-binomial distribution (BBD). The hyperparameters in each group can then be estimated by maximum likelihood, with the likelihood function written from the beta-binomial marginal distribution. Applying the Newton-Raphson method to the resulting nonlinear equations, the (t + 1)th iterate of the maximum likelihood estimators of the hyperparameters (t = 1, 2, ...) is obtained by updating the current iterate with the gradient and Hessian of the log-likelihood, where the moment estimators of the hyperparameters of the beta-binomial distribution are used as initial values; see Minka [15].

The posterior distribution of p_i is then

p_i | y_i ~ Beta(α_i + y_i, β_i + n_i − y_i),

evaluated at the estimated hyperparameters, and with the resulting success probability estimators (e.g., the posterior means p_i = (α_i + y_i)/(α_i + β_i + n_i)) the EB estimator of the odds ratio is obtained as

OR^_EB = [p1/(1 − p1)] / [p2/(1 − p2)],

where p1 and p2 denote the success probability estimators in groups 1 and 2, respectively.
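A minimal numerical sketch of the two estimators described above. The MMUE part follows the beta-quantile construction directly; for the EB part, the hyperparameters are taken as given (hypothetical values here, standing in for the beta-binomial ML estimates), and the posterior mean is used as the success probability estimate.

```python
from scipy.stats import beta

# MMUE via the beta-binomial quantile relationship, with the edge cases
# y = 0 and y = n handled as described in the text.
def mmue_prob(y, n):
    p_lower = 0.0 if y == 0 else beta.ppf(0.5, y, n - y + 1)
    p_upper = 1.0 if y == n else beta.ppf(0.5, y + 1, n - y)
    return (p_lower + p_upper) / 2.0

def odds(p):
    return p / (1.0 - p)

def odds_ratio_mmue(y1, n1, y2, n2):
    return odds(mmue_prob(y1, n1)) / odds(mmue_prob(y2, n2))

def odds_ratio_eb(y1, n1, y2, n2, a1, b1, a2, b2):
    # Posterior means under Beta(a_i, b_i) priors whose parameters are
    # assumed to have been fitted by ML beforehand (hypothetical here).
    p1 = (a1 + y1) / (a1 + b1 + n1)
    p2 = (a2 + y2) / (a2 + b2 + n2)
    return odds(p1) / odds(p2)

print(odds_ratio_mmue(0, 10, 2, 10))   # well-defined despite y1 = 0
print(odds_ratio_eb(0, 10, 2, 10, a1=0.5, b1=9.5, a2=0.5, b2=9.5))
```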
Simulation Study for MMLE, MMUE, and EB Methods

Simulation studies were carried out using the R program (version 3.2.0) [16] to assess the efficiency of the EB method in comparison with the two existing methods. Binomial data were generated with equal and unequal sample sizes, (n1, n2) = (10, 10), (10, 30) and (10, 50), and probabilities of success in group 1 of p1 = 0.01, 0.03, 0.05, 0.1, and 0.15. For each setting, the estimates were compared using the Estimated Relative Error (ERE), where OR denotes the usual maximum likelihood estimator of the odds ratio and OR^_j denotes the estimate of the odds ratio using EB, MMLE, or MMUE (j = 1, 2, 3), respectively. The simulation results, with odds ratio estimates for sample sizes (n1, n2) = (10, 10), (10, 30) and (10, 50), are given in Tables 1-3, and the corresponding ERE values, used to assess performance, are given in Tables 4-6; Figure 1 compares one case graphically, and the other cases provide similar results. It was found that the odds ratio estimation using the EB method most often yields the smallest ERE, in 78.67% of cases, while the MMLE and MMUE methods yield the smallest ERE in only 6.67% and 14.66% of cases, respectively.

Illustrative Examples Using Real Data

Our first example is taken from the studies of Good [7] and Hardell [17], concerning subjects with malignant lymphoma, as shown in Table 7.

Conclusion

The EB estimator of the odds ratio is more efficient than the other two estimators, MMLE and MMUE. In addition, our proposed estimator is an alternative to the MMLE method for odds ratio estimation that does not disturb the original data.

Table 7. True odds ratios and their estimates using EB, MMLE, and MMUE, with corresponding percentages of ERE.
Java Performance Mysteries

While assessing software performance quality in the cloud, we noticed significant performance variation in several Java applications. At first glance, they looked like mysteries. To isolate the variation due to cloud, system, and software configurations, we designed a set of experiments and collected software performance data. We analyzed the data to identify the sources of Java performance variation. Our experience in measuring Java performance may help attendees in selecting the trade-offs in software configurations and load-testing tool configurations to obtain the software quality measurements they need. The contributions of this paper are (1) observing Java performance mysteries in the cloud, (2) identifying the sources of performance mysteries, and (3) obtaining optimal and reproducible performance data.

Introduction

Java is both a programming language and a platform. As a programming language, Java follows the object-oriented programming model and requires the Java Virtual Machine (Java interpreter) to run Java programs. As a platform, Java is an environment on which we can deploy Java applications. Java Enterprise Edition (Java EE) [1] is a platform used to run enterprise-class workloads and applications. This platform provides an API and runtime environment for developing and running multi-tiered, scalable, reliable enterprise applications. An enterprise application is a business application that is complex, scalable, distributed, and multi-tiered.

Java applications can be single-tiered or multi-tiered. Single-tier Java applications are self-contained applications that perform certain operations. SPECjvm2008 [3] is an example of a single-tier Java benchmark. The SPECjvm2008 benchmark comprises dozens of Java applications. Some of them exercise encryption software, such as the crypto.aes, crypto.rsa, and crypto.signverify workloads, while others are scientific mathematical algorithms, like FFT and sparse matrix computation, that work with both large and small data sets.

A multi-tier Java workload, or enterprise Java application, is a software application distributed over multiple machines. Such applications can be separated into multiple tiers: a client tier, a middle tier, and a data tier. For testing a Java EE workload, the client tier would be a load generator machine using load generator software like Faban [2]. This client tier consists of users that send requests to the application server, or middle tier. The middle tier contains the entire business logic for the enterprise application; business logic is the code that gives functionality to the application. The data tier is the data source for the application. It consists of databases or legacy data sources like mainframes.

Java Performance Monitoring

There are a number of factors that can affect the performance of a Java program. Among them, garbage collection (GC) and just-in-time (JIT) compilation stand out, as they are not present in native program performance monitoring.
Garbage collection is implemented by the Java Virtual Machine (JVM) for automatic memory management. The Java HotSpot VM [8] (originally implemented by Sun, acquired by Oracle) supports several different garbage collection algorithms designed to serve different pause time and throughput requirements. For multi-tier Java workloads running HotSpot in the cloud, we typically consider choosing one of three collectors: the throughput collector, the CMS collector, or the G1 collector; the best choice depends on workload characteristics.

The parallel old collector (enabled by -XX:+UseParallelOldGC) is designed to maximize the total throughput of an application, but some individual business transactions might be subjected to long pauses.

The CMS collector (enabled by -XX:+UseConcMarkSweepGC) can concurrently collect the objects in the old generation while application threads are running. The CMS algorithm is designed to eliminate long pauses. It has two quick stop-the-world (STW) pauses per old-generation collection. Compared with the throughput collector, CMS may increase CPU usage due to its background processing.

The G1 collector (enabled by -XX:+UseG1GC) is designed for processing large heaps with minimal pauses. G1 divides the entire heap into equally sized regions and 'smartly' collects the regions with the least live data first (garbage first) [9]. As with CMS, the collection of the old generation in G1 is also done concurrently.

We have different GC collector choices because there is no single right algorithm for every workload; different collectors optimize for different user scenarios. In general, the parallel old collector is optimized for throughput. The CMS collector is optimized for pause time. The G1 collector is optimized for reducing pause time on large heaps of more than 4 GB. Selecting the right GC algorithm can significantly help application performance. As a rule of thumb: if the GC overhead is greater than 10%, the wrong GC algorithm is probably being used.

The first important tuning for GC is the size of the application's heap. A smaller heap means more frequent GC and not enough time for performing business logic. A larger heap means less frequent GC, but the duration of each GC pause will increase, as the time spent in GC pauses is generally proportional to the size of the space being collected. Choosing the proper heap size also depends on the workload. Heap occupancy is one important measure for heap sizing; as a rule of thumb: size the heap so that it is 30% occupied after a full GC [10].

The just-in-time (JIT) compiler component in the JVM turns bytecode into much faster native code to accelerate Java performance. There are a couple of typical JIT tuning practices that can help improve application performance. The size of the code cache is the first basic thing to pay attention to. Generally, just make sure that the code cache is large enough; otherwise, the application might mysteriously slow down if it runs out of code cache space. Inlining is one of the important optimizations made by the JIT to eliminate the overhead of method calls; thus, small and frequently called methods might be good candidates for inlining.
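To make the tunings above concrete, the command lines below sketch how they are selected when launching the HotSpot JVM (a JDK 8-era illustration; the heap and code-cache sizes are example values, and MyApp is a placeholder main class, not one from this study):

```
java -XX:+UseParallelOldGC   -Xms4g -Xmx4g -verbose:gc -Xloggc:gc.log MyApp    # throughput collector + GC log
java -XX:+UseConcMarkSweepGC -Xms4g -Xmx4g MyApp                               # CMS, low-pause collector
java -XX:+UseG1GC -Xms8g -Xmx8g -XX:MaxGCPauseMillis=200 MyApp                 # G1 for large heaps
java -XX:ReservedCodeCacheSize=256m -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyApp   # code cache + inlining diagnostics
```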
To identify and resolve Java performance issues, the following methods might be considered: (1) monitor the logs, e.g., the GC or JIT log; (2) use diagnostic JVM parameters, which provide more useful information for debugging; and (3) use profiling tools, like VisualVM or JMC (shipped with Oracle JDK as a commercial feature).

Logs are usually the key to finding possible performance bottlenecks; there are some open-source tools to visualize these logs, e.g., gcviewer [5] is a good tool for GC log analytics, and jitwatch [6] provides visual insights into JIT compilation. Diagnostic JVM parameters [7] are another good way to explore performance-related issues, e.g., enable inlining diagnostics output by adding the -XX:+PrintInlining parameter and then check which methods failed to inline.

There are a lot of profiling tools for measuring application performance; technically speaking, profiling can be done by either sampling or runtime instrumentation. Instrumentation-based and sampling-based tools have different characteristics, with their own advantages and disadvantages. Profiling does not come for free; we should pay close attention to the overhead introduced by a profiling tool, otherwise the results it collects could be misleading for the final conclusion.

Java Performance Mysteries

In the datacenter of a cloud service provider, before upgrading the hardware systems to a newer version, it is crucial to evaluate the new hardware against the older one. For evaluation, we use multiple workloads/benchmarks to assess the performance benefit [4] it offers over the previous generation. In this paper we evaluate two platforms, called Platform A and Platform B, using an enterprise Java workload. We use a two-tier Java EE web application workload. The workload consists of a client tier which generates load using the Faban driver. Based on the injection rate, the Faban driver starts client users that send HTTP requests to the application server. The application server processes each request and sends a response back to the client with the appropriate web page. The web page is computed dynamically after processing the request from the client. When a user gets a response back from the application server, it immediately sends another request. The advantage of this is that we can saturate the application server with a very small number of clients to evaluate its maximum performance capability.

Impact of the GC-1 Garbage Collection Algorithm on Platforms A and B

Based on run IDs web101.5W vs. web101.6F and web101.5O vs. web101.6G, we can infer that a larger heap gives better throughput. A large heap results in less frequent GC: the JVM spends more time running the application and less time collecting garbage. For the GC-1 garbage collection algorithm, we can also say that Platform B produced 24% higher throughput than Platform A with the small heap size. This performance delta is reduced to 21% with the large heap.

Impact of the GC-2 Garbage Collection Algorithm on Platforms A and B

With runs web101.6B vs. web101.6I and web101.6C vs. web101.6L, we see a similar performance improvement with heap size. But if we look at the platform comparison, we see only a 5% performance improvement with Platform B over Platform A with the small heap size and a 4% improvement with the large heap. After close inspection, we realized that the performance impact with this GC algorithm is misleading because we are very close to saturating the network capacity. Hence, these performance numbers should not be used to compare the two platforms.
Impact of Different Garbage Collection Algorithms on Platform B

For Platform B with the small heap, the GC-2 algorithm provides only a 16% benefit over the GC-1 algorithm, and with the larger heap this delta decreases to 11%.

To characterize the behaviour of this web application workload, system-level performance data are insufficient. We need information about the JVM behaviour and the methods compiled on both platforms. We will have a better understanding of the workload performance after looking at the JIT code and seeing how many resources were spent in the hot methods and GC methods for both garbage collection algorithms on both platforms. For that, we need to profile the application and collect JVM data.

Java Profiling

Data collected for performance evaluation does not come for free; it always comes at a cost in performance. Java profiling tools usually run in the background, but every time a method is called or compiled, the information gets logged in the tool. This has additional overhead. You should be very cautious when using such tools for performance testing, as the overhead associated with them can be quite high. With one of the Java profiling tools, we saw 20% performance overhead: throughput dropped by 20% while the tool was collecting JVM information. Whenever such tools are used, they might alter JVM behaviour and interfere with the workload performance.

Hence, for performance testing, one should always be careful when looking at the summary statistics for the workload. We should always look into the detailed workload performance data; the overall summary data can be misleading. Figure 1 demonstrates the impact of one such profiling tool. We saw a 20% performance dip when the JVM profiling tool was running. The workload summary reported the average throughput over the run time; it did not reveal the drop in performance. Only after looking at the detailed data could we see the performance impact of the tool. With a low-overhead tool, we collected detailed data about the JIT code, and we observed something really weird: there was no change in performance when we moved to a large heap on the same platform with the same configuration.

When we looked into the Java JIT code, we realized that the compiled code differed between these two configurations. In one of the runs, the print methods got inlined, whereas in the other run we saw a lot of function calls to the print method. Just through method inlining, we can observe a significant performance difference. The JVM is designed to make runtime compilation and optimization decisions. Most of the time it will do the right thing, but occasionally we do see such anomalies, which may cause us to make wrong inferences.

With the previous number of users, we were close to network saturation. Hence, for the next set of experiments, we reduced the number of users. Table 2 summarizes the performance of both platforms with the lower number of users. The strange thing is that now, when you look at the performance delta between the two platforms, the picture that we saw in Table 1 completely reverses. With the GC-1 algorithm, we see Platform A being better than Platform B by 2% for the smaller heap, and Platform B being better than Platform A by 3% for the larger heap. On the other hand, the GC-2 algorithm shows a 15% performance benefit on Platform B with the small heap and a 20% benefit with the large heap. This configuration with fewer users tells a completely different story.
Summary and Discussion

We showed a couple of examples of Java performance mysteries and identified their sources. Understanding Java performance mysteries enables us to obtain optimal and reproducible performance data, which in turn helps us make the right decisions in the data center.

Figure 1. Performance impact of a Java profiling tool.
Table 1. Performance of both platforms at a higher number of users.
Table 2. Performance of both platforms at a lower number of users.
Electroretinographical Analysis of the Effect of BGP-15 in Eyedrops for Compensating Global Ischemia–Reperfusion in the Eyes of Sprague Dawley Rats

Retinal vascular diseases and consequential metabolic disturbances in the eye are major concerns for healthcare systems all around the world. BGP-15 [O-(3-piperidino-2-hydroxy-1-propyl) nicotinic amidoxime dihydrochloride], a small-molecule drug candidate, has previously been demonstrated by our workgroup to be retinoprotective in both the short and long term. Based on these results, the present study was performed to investigate the efficacy of BGP in an eyedrop formulation containing sulfobutylether-β-cyclodextrin (SBECD), which also acts as a solubility enhancer. Electroretinographical evaluations were carried out, and BGP was demonstrated to improve both scotopic and photopic retinal a- and b-waves, shorten their implicit times, and restore oscillatory potentials after ischemia-reperfusion. It was also observed to counteract retinal thinning after ischemia-reperfusion in the eyes of Sprague Dawley rats. This small-molecule drug candidate is able to compensate for experimental global eye ischemia-reperfusion injury elicited by ligation of blood vessels in rats. We successfully demonstrated that BGP is able to exert its protective effects on the retina even when administered in the form of eyedrops.

Introduction

Retinal vascular diseases and consequential metabolic disturbances in the eye are major concerns for healthcare systems all around the world. Ischemia-reperfusion injury is involved in several ocular conditions, including age-related macular degeneration, cataract, glaucoma, and advanced stages of diabetic retinopathy [1]. Retinal vascular occlusion, whether partial or complete, may cause ischemic injury in these diseases, and the subsequent reperfusion further damages the neuronal tissue in the eye, which contributes to impaired vision and visual loss [2]. Oxidative stress arising from ischemia-reperfusion trauma further leads to inflammation, retinal edema, and ultimately necroptosis [3,4]. Retinal vascular abnormalities induce neurodegeneration marked by activation of microglia, increased expression of glial fibrillary acidic protein (GFAP), apoptosis and necrosis of retinal cells, and a substantial decrease in certain layers of the retina [5]. Abnormalities in the retinal tissue lead to retinal dysfunction, which can often be detected prior to additional clinical signs of retinopathies [5,6]. Before the well-established clinical funduscopic analyses are carried out to screen for microangiopathy-related signs, such as hemorrhages, hard and soft exudates, or cotton wool spots in diabetic retinopathy, for example [7], other less visible changes have already developed, like thickness reduction [8] and retinal cell apoptosis [9], which result in electroretinographical changes such as smaller a- and b-wave amplitudes, longer implicit times, or weaker/missing oscillatory potentials [10]; the same is true of ischemia-reperfusion injury [11,12]. Early detection and management of vascular retinopathy are important for preventing progression and preserving vision, as well as for starting a proper treatment regime before tissue damage becomes severe enough to significantly impair visual function [13]. Retinal ischemia-reperfusion (I/R) injury is one of the leading causes of visual impairment [14], and effective therapies to prevent or reverse retinal I/R injury have yet to be developed [15].
BGP-15 [O-(3-piperidino-2-hydroxy-1-propyl) nicotinic amidoxime dihydrochloride], a small-molecule drug candidate, was initially developed for its chemoprotective effects against the myelo-, nephro-, and neurotoxic effects of cytostatic drugs. Furthermore, BGP-15 (BGP) has been demonstrated to improve type 2 diabetes mellitus-related insulin resistance and entered phase II clinical trials almost a decade ago: it was proven to be safe in short-term toxicological studies in humans [16]. Our workgroup also carried out studies with the agent, in which it was found to be retinoprotective in Goto-Kakizaki rats, a type 2 diabetes mellitus model animal [17]. We also demonstrated long-term efficacy and toxicological evaluation in the Zucker Diabetic Rat, an obese type 2 diabetes model [18]. In these experiments, BGP was able to exert functional preservation, i.e., restoration of decreased electroretinographical a- and b-wave amplitudes [16,17]. Based on these results, and bearing drug development aspects in mind, the present study was conducted to evaluate any potential retinoprotective effects of BGP in an eyedrop formulation administered to a rat model of retinal ischemia-reperfusion injury.

Animals

Male adult Sprague Dawley (SD) rats (8 weeks of age, 250-270 g, n = 20; Janvier Labs, Le Genest-Saint-Isle, France) were housed under standard conditions (22-24 °C) in the animal house of the Department of Pharmacology and Pharmacotherapy, University of Debrecen, Hungary. The rats were kept under a 12:12 h light-dark cycle and had ad libitum access to tap water and standard rat chow. Before the beginning of the experiment, the rats had a 2-week adaptation/acclimatization period.

The animals received humane care, and all experimental procedures were carried out in accordance with the 'Principles of Laboratory Animal Care' of the EU Directive 2010/63/EU. All experimental protocols were approved by the local Ethics Committee of the University of Debrecen (8/2020/DEMÁB).

Experimental Protocol

After the 2-week acclimatization period, the rats were randomly assigned into two groups: vehicle-treated and BGP-treated (n = 10 in each).

At the start of the experiment, ocular ischemia was triggered and maintained for 60 min. After the ischemic period, reperfusion lasted for 1 week, during which the animals were treated with eyedrops. We administered eyedrops with or without BGP twice a day for a week. The eyedrops were dripped into the eye through a pipette tip using a manually adjustable pipette.

In accordance with the 3R rule of animal ethics, to minimize the number of animals needed for the experiment, we inflicted ischemia-reperfusion injury in the left eye of the animals while using the unligated right eyes as a healthy control group. This was also beneficial from the point of view of normalizing any incidental systemic effect of BGP between the treated and untreated groups.

After the 1-week reperfusion and treatment period, animals underwent anesthesia and electroretinographical evaluation took place. The animals were then sacrificed, and eye samples were extracted for histological analysis (Figure 1).
Formulation of Eyedrops

Eyedrops were formulated by our pharmacist colleagues at the Department of Pharmaceutical Technology, University of Debrecen, Hungary. The recipe of the formulation was as follows: 1000 mg BGP (Sigma-Aldrich-Merck KGaA, Darmstadt, Germany) and 1000 mg sulfobutylether-β-cyclodextrin (SBECD) (Cyclolab Ltd., Budapest, Hungary) were dissolved in 8 mL sterile distilled water (Molar Chemicals Ltd., Halásztelek, Hungary), and the pH was set between 7.1 and 7.4 with 1 M NaOH solution (Sigma-Aldrich-Merck KGaA, Darmstadt, Germany). Then, 300 mg hydroxyethylcellulose (Molar Chemicals Ltd., Halásztelek, Hungary) was added to the solution, and it was topped up to 10 mL with sterile distilled water. All the ingredients of the formulation have been approved and authorized for pharmaceutical manufacturing of medicines by the European Medicines Agency (EMA) and are included in the Pharmacopoeia Europaea, inclusive of SBECD [19,20]. The solution was heated to 50 °C and filtered through a membrane filter with 0.2 µm pores. The concentration of BGP in the above-described eyedrop formulation was 100 mg/mL.

Ocular Ischemia-Reperfusion

The rats were anesthetized with ketamine/xylazine (100/10 mg/kg) (Calypsol, Gedeon Richter Plc., Budapest, Hungary; CP-Xylazin, Produlab Pharma BV, Raamsdonksveer, The Netherlands), then an oxybuprocaine-containing topical ocular anesthetic was administered to the eye (Humacain 4 mg/mL eyedrops, Teva Ltd., Debrecen, Hungary). Thereafter, experimental ischemia was induced by ligating the left eyes of the SD rats, using the previously reported methodology [18]. Concisely, the left eye of each rat was slightly protruded with bent forceps, then a surgical suture composed of polyester fiber (Mersilene, 2 mm, Ethicon Inc., Cincinnati, OH, USA) was inserted behind the eyeball. The suture was then slip-knotted around the blood vessels supplying the eye, the optic nerve, and the retrobulbar connective tissue. The ligature restricted the blood supply to the retina, which induced ischemia in the left eye. Ischemia was maintained for 60 min and confirmed macroscopically by fundoscopic examination with an ophthalmoscope (Heine mini 2000, HEINE Optotechnik GmbH & Co. KG, Gilching, Germany) and by ocular echography (see below). Then, the occluder was released to allow blood flow via the retinal arteries.

Ocular Echography

Ischemia and reperfusion were also confirmed by ultrasound imaging (Vevo 3000, Fujifilm Visualsonics Inc., Toronto, ON, Canada; MX550D transducer at 32 MHz), as detailed before [18]. Briefly, rats were under anesthesia due to the ischemic ligation protocol mentioned above and were laid on a temperature-controlled pad in a prone position. Using a contact gel (Aquasonic 100, Parkerlab Inc., Fairfield, NJ, USA), standard color Doppler was recorded in longitudinal view (9 mm depth, 55°, 0.27 mm gate size). Blood flow of the short ciliary artery ceased upon ligation and was restored after reperfusion, as analyzed by the software of the ultrasound system (VevoLab ver. 5.1, Fujifilm Visualsonics Inc., Toronto, ON, Canada).
Electroretinography

A Ganzfeld-type flash electroretinography (ERG) visual monitoring system was used for stimulus generation and data acquisition (Hand-held Multi-species ElectroRetinoGraph (HMsERG), OcuScience, Henderson, NV, USA). ERG measurements were performed according to a previously described method [18]. The vehicle-treated (n = 10) and BGP-treated (n = 10) groups were anesthetized with a mixture of ketamine-xylazine (100/10 mg/kg). After deep anesthesia was reached, mydriasis was induced by topical application of cyclopentolate (Humapent, Teva Ltd., Debrecen, Hungary), and the animals were adapted to the dark for 20 min. Further experimental procedures were performed under dim red light. The animals were positioned on a heated (37 °C) pad (ATC 2000, WPI, Sarasota, FL, USA) in a prone position, and electrodes were placed as follows: a gold-coated corneal contact lens electrode (ERG-jet Contact Lens Electrode, Fabrinal SA, Switzerland) was placed on each eye, while reference and ground stainless steel needle electrodes were inserted subcutaneously above the jaw and at the tail base, respectively. Conductive gel was applied to the cornea to ensure sufficient electrical contact and to maintain hydration during the entire procedure (Vidisic, Bausch & Lomb, Berlin, Germany). Before measurement, the electroretinography equipment was covered with a Faraday cage. ERGs were recorded from both eyes simultaneously after the animals were placed in the Ganzfeld bowl. The bandpass filter width was 1 to 300 Hz for single-flash recordings, which were obtained under both dark-adapted (scotopic) and light-adapted (photopic) conditions. Single white-flash stimulus intensity ranged from −2.5 to 1 log cd·s/m². Light adaptation was performed with a background illumination of 30 cd·s/m² for 10 min before the photopic responses were recorded. For each flash intensity, 10 responses were averaged, with an interstimulus interval varying between 2 and 20 s depending on the flash intensity, according to the pre-set protocols of the ERG system. Dark-adapted oscillatory potential (OP) measurements were derived from ERG waveforms recorded at 3000 mcd·s·m⁻² flash stimuli by post-acquisition filtering with a bandpass of 100 to 300 Hz; the interval between stimulus flashes was 10 s. To measure OP amplitudes, the highest positive peak and lowest negative trough were measured from a baseline set at 0 µV, and the absolute values of the two numbers were added together. The implicit time was the time required to reach the highest positive peak after the flash stimulus. For each eye, four individual OPs were averaged. Electroretinograms were analyzed with the software supplied by the manufacturer of the ERG system (ERGView 4.380, OcuScience, Henderson, NV, USA).
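As an illustration of the post-acquisition OP analysis just described, a minimal sketch could look like the following. This is not the vendor ERGView software; the second-order Butterworth filter and the sampling layout are assumptions, and the sampling rate must exceed 600 Hz for the 300 Hz band edge to be valid.

```python
# Sketch of the OP analysis described above: band-pass 100-300 Hz, amplitude =
# highest positive peak + |lowest trough| from a 0 uV baseline, implicit time =
# time from the flash to the highest positive peak.
import numpy as np
from scipy.signal import butter, filtfilt

def op_metrics(erg_uv, fs_hz, flash_idx):
    b, a = butter(2, [100.0, 300.0], btype="bandpass", fs=fs_hz)
    ops = filtfilt(b, a, erg_uv)        # zero-phase band-pass filtering
    post = ops[flash_idx:]              # samples after the flash stimulus
    amplitude_uv = post.max() + abs(post.min())
    implicit_time_ms = 1000.0 * np.argmax(post) / fs_hz
    return amplitude_uv, implicit_time_ms
```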
Histology

After the animals were sacrificed, their eye bulbs were immediately enucleated, and the upper part of each eyeball was marked for later positioning. Then, paraformaldehyde solution (PFA, pH 7.4, 4% in phosphate buffer: 10 g paraformaldehyde, 50 µL 10 N NaOH, 25 mL 10× PBS, 200 mL ddH2O; Sigma-Aldrich-Merck KGaA, Darmstadt, Germany) was injected into the bulbs, followed by 24 h of immersion to provide appropriate fixation of the retina. On the following day, corneas and eye lenses were removed for complete removal of the PFA, and the tissue samples were washed in water for 60 min. Then, the samples were stored in 70% alcohol until further processing (Sigma-Aldrich-Merck KGaA, Darmstadt, Germany). Dehydration (ascending concentrations of ethanol: 70%, 90%, 100%) was the next step, followed by clearing with xylene and embedding into wax (Histowax, Histolab Products AB, Gothenburg, Sweden). Ultimately, sections of 4 µm thickness were cut from the paraffinized eye tissue blocks with a microtome. Sections localized near the optic disk were further processed. After deparaffinization and rehydration, the sections were stained with hematoxylin-eosin (H&E) as follows: first, they underwent 10 min incubation in hematoxylin (Gill-type, GHS2128, Sigma-Aldrich-Merck KGaA, Darmstadt, Germany). Then, they were rinsed in running tap water for 10 min until the sections turned blue, followed by staining with eosin for 5 min. Images were taken near the optic disk, from the inferior part of the retina, with a Nikon Eclipse 80i microscope (Nikon Instruments Inc., Melville, NY, USA) through a 40× objective (Nikon Plan Fluor 40×/0.75 DIC M/N2 ∞/0.17 WD 0.66) with a DS-Fi3 microscope camera attached. Measurements were taken with the software of the microscope, Nikon NIS-Elements BR (ver. 5.41.00).

Statistical Analysis

GraphPad Prism software (version 8.0, GraphPad Software Inc., La Jolla, CA, USA) was used for statistical analyses. After assessing the Gaussian distribution of the data points using the Shapiro-Wilk normality test, multiple comparison tests with post-tests were applied: either one-way analysis of variance (ANOVA), if the data passed the normality test, or the nonparametric Kruskal-Wallis test. Comparisons were regarded as significant if the probability value was lower than 0.05. Asterisks are used to indicate significance levels, with 1 to 4 stars (* to ****) in cases of p < 0.05, p < 0.01, p < 0.001, and p < 0.0001, respectively. The data in column graphs are shown as the mean ± standard error of the mean (SEM); a simplified sketch of this test-selection rule is given below.

Results

The results of the scotopic electroretinography are shown in Figure 2 as recorded waveforms, while oscillatory potentials (OPs) are shown in Figure 3. The waves in the BGP-treated IR eye group are higher compared to control IR (Figure 2D vs. Figure 2C) and more closely resemble the physiological form (Figure 2A). Furthermore, OPs are more pronounced and orderly in the BGP-treated groups compared to the control groups (Figure 3B,D vs. Figure 3A,C, respectively).
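Returning briefly to the Statistical Analysis subsection above, the normality-gated test choice described there can be sketched as follows; this is a simplified illustration in SciPy rather than GraphPad Prism, and the post-hoc multiple-comparison step is omitted.

```python
# Simplified sketch of the test-selection rule: Shapiro-Wilk normality check,
# then one-way ANOVA if all groups pass, otherwise Kruskal-Wallis.
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    all_normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if all_normal:
        return "one-way ANOVA", stats.f_oneway(*groups).pvalue
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue
```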
Measurement results related to the scotopic ERG are shown in Figure 4 (dots represent the BGP-treated group NO-IR eyes, squares represent BGP-treated IR eyes, upward-pointing triangles represent the control group NO-IR eyes, and downward-pointing triangles represent the control group IR eyes), while the most statistically important comparisons are highlighted in Figure 5 in the following group order: control NO-IR, BGP NO-IR, control IR, and BGP IR. Measurements related to the oscillatory potentials are shown in Figure 6.

It can be observed that for both scotopic a- and b-waves, BGP treatment was able to elicit higher mean amplitudes compared to control group values (Figure 4A,C). For the a-wave implicit time (Figure 4B), the control IR values turned out to be the highest; the IR values of the BGP-treated groups were observed to be near the NO-IR control values, although at some intensities they were even shorter, and at most intensities the shortest a-wave implicit times were provided by the BGP-treated NO-IR group. Similar trends were observed in the case of b-wave implicit times, but here the values of the IR and NO-IR groups were more separated: implicit times of b-waves seemed to be much more sensitive to ischemia-reperfusion injury, which was alleviated by BGP treatment at some intensities.

According to Figure 5, at 3000 mcd·s·m⁻² light intensity, the BGP-treated groups had significantly higher a- and b-wave mean amplitudes (92.09 ± 3.470 and 46.69 ± 2.269 µV for BGP NO-IR and IR group mean a-waves ± SEM, respectively, and 356.3 ± 10.00 and 165.3 ± 11.47 µV for BGP NO-IR and IR group mean b-waves ± SEM, respectively) compared to the corresponding values of the control groups (79.27 ± 3.885 and 39.03 ± 2.884 µV for control NO-IR and IR group mean a-waves ± SEM, respectively, and 339.2 ± 10.40 and 100.3 ± 8.976 µV for control NO-IR and IR group mean b-waves ± SEM, respectively; for a-waves, p < 0.01 for BGP NO-IR vs. control NO-IR and p < 0.05 for BGP IR vs. control IR comparisons; for b-waves, p < 0.05 for BGP NO-IR vs. control NO-IR and p < 0.0001 for BGP IR vs. control IR comparisons).

There were significant differences in a-wave implicit time values as well: BGP values (17.98 ± 0.3497 and 19.96 ± 0.5929 ms for the NO-IR and IR groups, respectively) were significantly shorter than control values (21.06 ± 0.3748 and 31.53 ± 1.931 ms for the NO-IR and IR control groups, respectively; p < 0.0001 for both the BGP NO-IR vs. control NO-IR and BGP IR vs. control IR comparisons). Similarly, in the case of b-waves, a significant difference was measured between BGP IR and control IR values (83.38 ± 2.518 vs. 94.72 ± 3.504 ms for BGP IR vs. control IR, respectively, p < 0.01). There was no significant difference between BGP NO-IR and control NO-IR b-wave implicit times (60.11 ± 0.4902 vs. 61.63 ± 0.5596 ms for BGP NO-IR and control NO-IR, respectively).

According to the measurements related to oscillatory potentials (Figure 6), the control NO-IR and BGP NO-IR groups had the highest OP amplitudes, while in the BGP-treated IR group, the treatment elicited higher values compared to the control IR group. There were no significant differences in OP amplitudes between the control NO-IR and BGP NO-IR groups (68.69 ± 3.917 vs. 57.49 ± 2.064 for control NO-IR vs. BGP NO-IR groups), whereas there were statistically significant differences between the control IR and BGP IR groups (15.64 ± 2.064 vs. 26.06 ± 2.977 for control IR vs. BGP IR groups).
Statistical analysis of OP implicit times revealed no difference between the control IR and BGP IR groups (54.14 ± 3.236 vs. 51.42 ± 4.456 for the control IR vs. BGP IR groups), while between the control NO-IR and BGP NO-IR groups there was a significant difference (38.29 ± 0.7049 vs. 35.93 ± 0.6051 for the control NO-IR vs. BGP NO-IR groups).

Figure 7 shows waveforms and OPs for light-adapted, photopic electroretinography, whilst Figure 8 illustrates the most important statistical comparisons and measurement results related to the photopic ERG. Photopic ERG measurement was carried out after 10 min of background light adaptation, representative waveforms of which are seen in Figure 7: as demonstrated, BGP treatment was able to restore the physiological course of the curve (compare Figure 7D with Figure 7A), while the untreated IR group showed a more severe deterioration of the waveform (Figure 7C). Similarly, the oscillatory potentials of the BGP-treated IR group (Figure 7H) are more pronounced and orderly compared to control IR (Figure 7G, compared with control NO-IR, Figure 7E).

According to the ERG measurements, photopic b-wave mean amplitudes of the BGP-treated groups turned out to be higher in almost every comparison (Figure 8A), while implicit times were shorter compared to the untreated control groups (Figure 8B). Results of stimulation with a photopic light intensity of 3000 mcd·s·m⁻² are highlighted in Figure 8C,D.
The results of the histological analysis are shown in Figures 9 and 10.According to our measurements, the BGP IR group had significantly thicker retina than the control IR group (182.4 ± 3.760 vs. 101.2± 2.640 µm, for BGP IR vs. control IR, respectively).The retinal thickness was significantly smaller in the control IR group compared to any other group, while it was the largest in the BGP IR group.There were no significant differences between the BGP NO-IR and control NO-IR groups (150.1 ± 2.553 vs. 130.9 ± 5.118 µm for the BGP NO-IR and control NO-IR groups, respectively).As shown in Figure 10, all retinal layers except the outer plexiform layer were significantly smaller in the control IR group compared to any other group (18.78 ± 0.5175, 27.93 ± 0.3172, 14.15 ± 0.2031, 18.39 ± 0.2462 and 11.63 ± 0.1921, for photoreceptor layer (PL), outer nuclear layer (ONL), inner nuclear layer (INL), inner plexiform layer (IPL) and ganglion cell layer (GCL), respectively for the control IR group).The BGP-treated IR group, on the other hand, exhibited significantly greater thickness values in every comparison (36.67 ± 0.4810, 39.92 ± 0.3564, 13.30 ± 0.2224, 21.90 ± 0.2496, 47.05 ± 0.5477 and 25.85 ± 0.4335, for PL, ONL, OPL, INL, IPL and GCL, respectively, for the BGP IR group). Discussion Several ophthalmological diseases are related to ischemia-reperfusion injury of the eye, even those that are consequences of metabolic disturbances, e.g., diabetic retinopathy [21].It is important to manage vascular retinopathies to prevent their progression and preserve vision, and thus new, effective and evidence-based therapies must be developed.Formerly, our workgroup carried out experiments with a small-molecule drug candidate, BGP, O-(3-piperidino-2-hydroxy-1-propyl) nicotinic amidoxime dihydrochloride.We found BGP to be protective in diabetic retinopathy in both the short and long term [16,17].Based on these results, the present study was performed to investigate the efficacy of BGP in an eyedrop formulation, since topical administration is a frequently used method when treating ophthalmic diseases.Physiologically, the retinal endothelial cells and retinal pigmented epithelium provide the inner and outer blood-retinal barrier, which prevents paracellular movement of hydrophilic compounds.For the drug to reach the posterior segment, it must first be absorbed through the corneal and conjunctival pathways to achieve therapeutical concentrations in the retina [22].Less than 3% of the drug passes through the cornea [23]; however, certain tissues can bind drugs, increasing the concentration of the drug in the eye: e.g., a considerable binding of topically applied beta-blockers to the retinal tissues has been described previously [24].In co-operation with the Discussion Several ophthalmological diseases are related to ischemia-reperfusion injury of the eye, even those that are consequences of metabolic disturbances, e.g., diabetic retinopathy [21].It is important to manage vascular retinopathies to prevent their progression and preserve vision, and thus new, effective and evidence-based therapies must be developed.Formerly, our workgroup carried out experiments with a small-molecule drug candidate, BGP, O-(3-piperidino-2-hydroxy-1-propyl) nicotinic amidoxime dihydrochloride.We found BGP to be protective in diabetic retinopathy in both the short and long term [16,17].Based on these results, the present study was performed to investigate the efficacy of BGP in an eyedrop formulation, since topical administration is a 
Physiologically, the retinal endothelial cells and the retinal pigmented epithelium provide the inner and outer blood-retinal barriers, which prevent paracellular movement of hydrophilic compounds. For the drug to reach the posterior segment, it must first be absorbed through the corneal and conjunctival pathways to achieve therapeutic concentrations in the retina [22]. Less than 3% of a topically applied drug passes through the cornea [23]; however, certain tissues can bind drugs, increasing the concentration of the drug in the eye: e.g., considerable binding of topically applied beta-blockers to the retinal tissues has been described previously [24]. In co-operation with the pharmaceutical technology department of our university, in the present study we assessed the effect of an eyedrop containing BGP and sulfobutylether-β-cyclodextrin (SBECD), a complex-forming, solubility-enhancing cyclic oligosaccharide with a donut-shaped ring structure [25]. Given that BGP shares a structural resemblance with propranolol [26], we proposed that, by enhancing the bioavailability of BGP, this formulation may be able to deliver the retinoprotective effects of the agent to the site of action even in the form of an eyedrop.

To distinguish the impact of BGP from that of the cyclodextrin, animals in the control group were treated with vehicle, meaning they were given eyedrops without BGP. Therefore, any observed differences between the control and treated groups can be attributed to BGP; this is an indirect argument. A limitation of the study, however, is that the concentration of BGP was measured neither in the blood nor in the aqueous humor, and thus we cannot directly prove whether BGP successfully entered the aqueous humor, or whether the measured effects resulted from systemic BGP ingested after it drained through Schlemm's canal. Future experiments are planned by our workgroup to determine the answer to this. Nevertheless, regardless of the route of entry, the present study registered significant retinoprotective effects of BGP after administration in an eyedrop formulation.

Ischemia-reperfusion injury is known from the scientific literature to distort ERG waveforms and deteriorate a- and b-waves [27,28]. Similarly, in our experiment, these waves were decreased both in scotopic (Figure 2) and in photopic electroretinograms (Figure 7A-D), but treatment with BGP attenuated this change (Figures 4, 5 and 8), which is a novel result. This might be expected, as BGP was already known to be protective against ischemia-reperfusion injury in the heart [29,30] and was proven to be efficient against diabetic retinopathy [16,17]. However, this is the first demonstration that BGP is able to improve a- and b-waves that have deteriorated due to ischemia-reperfusion injury. It is not uncommon to see retinoprotective effects from an agent that is able to induce heat shock proteins (HSPs), as in the case of the heme-oxygenase 1 inducer sour cherry seed extract [31], alpha-melanocyte-stimulating hormone [32], or the HSP co-inducer bimoclomol [33]. BGP protects against heat, metabolic, and oxidative stress through different pathways, including HSP induction [34], lipid-raft modification [35], and changes in protein expression [16,17]. BGP, accumulated in mitochondria, was found to be protective against apoptosis and necrosis in hydrogen peroxide-induced cell death [36] and was able to prevent neuronal death in a mouse model of impaired mitochondrial function [37].
Electroretinographical oscillatory potentials are known to be attenuated in diabetes mellitus (even before microvascular complications become visible during funduscopy [38,39]) and, similarly, in ischemia-reperfusion injury [40,41], as in our study as well (Figures 3 and 7E-H). According to the scientific literature, anti-ischemic treatments may restore OPs that have deteriorated due to ischemia-reperfusion damage [42,43], but the present article is the first to describe the OP-restoring effect of BGP (Figure 6).

Ischemia-reperfusion injury and, similarly, oxidative stress (e.g., by H2O2) are known to increase the latency times (i.e., implicit times) of a- and b-waves in the electroretinogram [44-46]. This was evident in our electroretinographical measurements as well. Nevertheless, the present study is the first to demonstrate the implicit time-decreasing effect of BGP (Figures 4B,D and 8B). Similar anti-ischemic retinal protection by other agents has been shown by various authors [43,47], although some even reported unchanged implicit times despite other evident effects on the electroretinogram [48].

According to our histology results, BGP was able to preserve retinal thickness, counterbalancing the retina-thinning effect of ischemia-reperfusion, which is a novel result (Figures 9 and 10). A similar protective effect is a hallmark of other anti-ischemic agents as well [31,32,49]. It is well documented that a common consequence of ischemia-reperfusion injury is edema [46,50], followed by necro-apopto-autophagy, i.e., the initiation of different cell-death mechanisms, resulting in thinning of the retinal layers [47,51,52]. Edema, however, may be transient, which implies a reversible injury as well [46,53]. This was the case in our study, since the electroretinograms of BGP-treated animals showed significantly better retinal function than those of untreated animals, and thus the retinal cells remained viable. Similarly, in clinical scenarios, ERG recordings, which correlate with retinal function even when edema is present, can be utilized to assess both the functionality of the retina and its response to therapy [54]. Thus, the retinal layer thickness differences observed between the treated and untreated NO-IR groups can be considered negligible. It is plausible that in the treated IR group, BGP would have exerted its retinal thickness-preserving effect over a longer period of time than our experiment lasted. Although we did not continue our study to measure retinal thickness and ERG at later time points (a limitation of the study), based on a former study conducted by our workgroup [17], we already know that BGP is functionally effective even when administered for a long time. This further corroborates that the increase in retinal thickness seen in the histology sections in the present study might be a late and most probably counteracted edema.
Nevertheless, we have to take into account that the differences seen between the treated and untreated NO-IR groups in any measurement throughout the study may be altered by the ligation the animal suffered on its other eye, as compared to a healthy animal model. According to the fellow eye phenomenon discussed elsewhere, changes may develop in the unaffected eye following ischemia of the other eye [55]. Nonetheless, by involving new, healthy animal individuals, we would have introduced an unavoidable error factor, namely the individuality (any individual differences) between animals. Furthermore, based on the translational aspects of our animal study, it is advisable to use an internal control, because in most clinical cases there is a fellow eye next to the affected eye, and thus we can see the effect of the treatment on the fellow eye. The relevance of a comparison to a healthy eye would be to see the effect of the treatment as a prevention, but this was not among the purposes of the current study. Furthermore, according to the rules of 3R for laboratory animal studies (replacement, reduction, refinement), involving the minimal necessary number of animals in a study must be a priority for any researchers carrying out experiments on laboratory animals.

The directions of future research involve investigating further molecular morphological and biological targets in the action mechanism of BGP. Our workgroup has already identified several effector molecules, the levels of which are changed in the eyes of ischemia-reperfusion-related diabetic animal models in response to BGP treatment, including sirtuin 1 (SIRT1), matrix metalloproteinase 9 (MMP9), heat shock protein 70 (HSP70), and nuclear factor kappa B (NFkB) [16,17]. Furthermore, future perspectives include research aimed toward understanding the absorption of BGP from the eyedrop formulation and the mechanism by which it reaches the retina; however, for this purpose, we first have to develop new techniques for measuring BGP concentration in tissue samples.

In summary, we successfully demonstrated that BGP is able to exert its protective effects on the retina even when administered in the form of eyedrops, and validating this was our primary goal. In this study, BGP was shown to improve retinal a- and b-waves, shorten their implicit times, and restore oscillatory potentials after ischemia-reperfusion. It was also observed to counteract retinal thinning in IR eyes of Sprague Dawley rats. This small-molecule drug candidate is able to compensate for experimental global eye ischemia-reperfusion injury in rats elicited by ligation of blood vessels.

Institutional Review Board Statement: The animal study protocol was approved by the local Ethics Committee of the University of Debrecen (8/2020/DEMÁB, date of approval: 19 October 2020).

Informed Consent Statement: Not applicable.

Figure 3. Representative scotopic oscillatory potentials elicited by a flashlight intensity series (from the bottom (black line) to the top (blue line): 10, 100, 300, 1000, 3000, 10,000, 25,000 mcd·s·m⁻²). (A): control NO-IR; (B): BGP NO-IR; (C): control IR; (D): BGP IR.
Figure 6. Results and comparisons of dark-adapted oscillatory potential amplitudes and implicit times in the different groups; flashlight intensity: 3000 mcd·s·m⁻². (A): Mean oscillatory potential amplitudes for the different groups (µV); (B): mean oscillatory potential implicit times (ms). All results are plotted as group mean ± SEM. ns = no significant difference. Statistically significant comparisons are marked with * p < 0.05; *** p < 0.001; **** p < 0.0001.
Figure 8. (A,B): Results of photopic ERG measurements plotted against light-adapted flashlight intensities. (A): Photopic b-wave mean amplitudes (µV); (B): photopic b-wave mean implicit times (ms). Dots represent BGP-treated group NO-IR eyes, squares represent BGP-treated IR eyes, upward-pointing triangles represent control group NO-IR eyes, and downward-pointing triangles represent control group IR eyes. All values are presented as group means. Statistically significant comparisons are marked with * in the case of BGP NO-IR vs. control NO-IR comparisons, and # in the case of BGP IR vs. control IR comparisons. The number of markers represents the statistical significance of the comparison: * or # p < 0.05; ## p < 0.01. (C,D): Statistically most important comparisons in photopic ERG measurements; flashlight intensity: 3000 mcd·s·m⁻². (C): mean b-wave amplitudes for the different groups (µV); (D): mean b-wave implicit times (ms). All results are plotted as group mean ± SEM. ns = no significant difference. Statistically significant comparisons are marked with * p < 0.05; *** p < 0.001; **** p < 0.0001.
Splitting capacity of Eucalyptus globulus beams loaded perpendicular to the grain by connections In timber structures, knowledge of the splitting capacity of beams loaded perpendicular to the grain by dowel-type connections is of primordial importance since brittle failure can occur. In the present work, single- and double-dowel-type connections following different loaded edge distance arrangements are experimentally investigated to derive the splitting behaviour of Eucalyptus globulus L., which is a hardwood species of increasing interest for structural use due to its high mechanical performance, fast growth, and good natural durability. The correlation of experimental failure loads with those theoretically predicted by the expression included in Eurocode 5 and by eight analytical models based on an energetic approach is discussed. Most of the analytical models studied overpredict the splitting capacity. However, the code splitting expression, derived from softwoods, proves to be very conservative in predicting the eucalyptus splitting failure load. Introduction Nowadays, there is growing environmental awareness and increasing demands for sustainability, in response to which wood is established as one of the most suitable materials for building as a natural and renewable resource. Although the vast majority of products used in timber engineering are made of softwoods, hardwood species are gaining increasing attention for structural applications in the European market. This is mainly due to the large stocks of structurally unused hardwood resources in Central and Southern Europe, to the shortage of softwoods and consequent higher costs, and to continuous changes in reforestation policies toward hardwoods due to the greater suitability of several broadleaf species for soil and climate conditions [1]. This interest is demonstrated by emerging hardwood products, mainly glued laminated products (e.g., [2][3][4]). In this regard, Eucalyptus globulus Labill. (also known as southern blue gum) stands out as a high-performance hardwood of significant interest, due to its fast growth, high mechanical properties, and good natural durability. Recently, there has been increasing industrial and institutional interest driving scientific research on E. globulus in different fields, such as silviculture [5][6][7][8], mechanical characterisation of the material with small clear specimens [9,10] and at structural size [11,12], performance of bonded joints for laminated products [13,14], as well as research related to the development of higher added value building and engineering wood products using this species, such as finger-jointed solid timber [11], glued laminated timber [15], cross-laminated timber (CLT) [16,17], laminated veneer lumber (LVL) and plywood [18][19][20] or nail-laminated timber (NLT) and NLT-concrete composite floor panels [21]. In the last century, eucalyptus has become one of the most widely cultivated fast-growing species worldwide in forestry exploitations for production purposes, mainly focused on the pulp and paper industries. The Eucalyptus globulus species is the most dominant hardwood plantation in Australia together with Eucalyptus nitens [22]. It is also one of the main hardwood species in South America. The Iberian Peninsula (Europe) hosts the most extensive E. globulus plantations, distributed primarily along the western and northern coasts, comprising Portugal and northern Spain [23].
In Europe, Eucalyptus globulus is assigned to the D40 strength class for structural use according to the European standard EN 1912:2012 [24]. The Spanish visual grading standard UNE 56546:2013 [25] only applies to E. globulus solid wood with maximum cross-sections of 60 × 200 mm² because, in practice, larger cross-sections are difficult to obtain due to drying problems. For such larger cross-sections, glued laminated products would be a choice. Thanks to the high performance of this species, eucalyptus solid timber or finger-jointed solid timber with small cross-sections could be used in efficient structures, such as trusses, lattice structures, or gridshells [11,26]. The timber elements that form these structures are often joined by dowel-type connections, which, if loaded perpendicular to the grain, may lead to brittle splitting failure of the timber member at load levels below the bearing capacity required for desirable ductile behaviour [27] (as is well known, the strength and stiffness perpendicular to the grain are particularly low in wood). Therefore, this brittle failure is one of the most critical in timber structures and deserves special attention in design to achieve adequate reliability. Most design codes for timber structures currently include explicit expressions to quantify the splitting capacity of connections. These approaches are mainly based on a strength criterion (the former German DIN 1052:2008 [28], based on Ehlbeck et al. [29]) or on an energetic approach within the framework of fracture mechanics (Eurocode 5 [30], Canadian O86:19 [31]). In particular, the expression adopted in Eurocode 5 describing the splitting capacity of a connection loaded perpendicular to grain by means of fasteners other than punched metal plates is a quite simple formula applicable only to softwoods, which considers only geometrical parameters (the dimensions of the beam cross-section and the distance from the fastener to the loaded edge of the beam). The expression is based on the analytical model originally formulated by Van der Put [32] in the framework of linear elastic fracture mechanics (with a further publication by Van der Put and Leijten [33] in an effort to make the theory more transparent). As a drawback of its simplicity, the Eurocode 5 splitting capacity expression does not consider the influence of important parameters such as connection layout, type and number of fasteners, different loading cases, etc. Ongoing research has been addressed to propose alternative analytical expressions in order to extend or adapt the original expression, considering to a greater or lesser extent the effect of different geometrical and material parameters based on experimental tests or numerical analysis [34][35][36][37][38][39][40]. However, there is no general agreement between the results, and the expression derived by Van der Put and Leijten still remains in the code. A comprehensive review of existing approaches can be found in Schoenmakers [41] and Jockwer and Dietsch [42]. Both the original Van der Put and Leijten equation and the analytical variants based on fracture mechanics proposed in the literature consider the material fracture energy in their formulations, whereas the strength approaches are based on tensile strength perpendicular to the grain as a material property. There is no compilation of fracture energies for the different species in the standards, so this property must be obtained experimentally. The fracture energies in Mode I and Mode II loading of E.
globulus have been achieved in previous work by the authors [43][44][45]. However, there are no studies on the splitting capacity of eucalyptus. Furthermore, as mentioned above, the Eurocode 5 expression is only applicable to softwoods. Most subsequent research has also focused mainly on solid wood or timber products made of softwoods. Therefore, the suitability of this or other analytical proposals in the literature to hardwood species requires particular research, considering that hardwoods show greater mechanical properties than softwoods, including higher fracture energy and higher tensile strength perpendicular to the grain. Hence, the use of expressions calibrated for softwoods might be too conservative for hardwoods. The aim of the present work was to study, for the first time to the best of the authors' knowledge, the splitting capacity of Eucalyptus globulus L. solid wood loaded perpendicular to the grain by steel dowel-type connections. Experimental tests were carried out on beams with single- and double-dowel connections placed at different loaded edge distances to derive the splitting capacity of this species. The adequacy for eucalyptus hardwood of the expression included in Eurocode 5 for softwoods was discussed. The correlation between experimental splitting failure loads and those predicted theoretically by different analytical models from the literature, based on the experimentally determined fracture energy of the material, was also addressed. It should be noted that the boards used in this work were approximately free of knots. This is usually the case in E. globulus because it develops natural pruning, so from the starting point the knots are few and very small. The boards were conditioned at 20 °C and 65% relative humidity prior to specimen preparation, reaching an equilibrium moisture content of 12.8%. The 7 boards were subjected to edgewise bending tests under four-point loading according to EN 408:2011 [46] to obtain their static longitudinal modulus of elasticity (E_L). The boards were planed before testing to a final dimension of 29 × 116 × 3042 mm³. The test span was set at 18 times the depth (width of cross-section). Table 1 shows the result obtained for each board (identified with a reference number that will also identify the splitting specimens to be extracted from each of them). Table 1 also includes the densities (ρ) determined from their dimensions and total weight for a reference moisture content of 12%. The mechanical properties of the material required in the different analytical models for the splitting analysis carried out in the present work (see Sect. 3) are the longitudinal modulus of elasticity (E_L), the shear modulus of elasticity in the LR plane (G_LR), the tensile strength perpendicular to the grain (f_t,90), and the fracture energy in Mode I loading in most of the models (G_Ic) and in Mode II (G_IIc). The E_L value of each splitting specimen was taken from the board from which it was extracted. The rest of the material properties (G_LR, f_t,90, G_Ic and G_IIc) were taken from previous experimental work by the authors. Specifically, for the determination of G_LR and f_t,90, eucalyptus boards with similar E_L and ρ to those referred to above were used. A mean value of G_LR = 1926 MPa was obtained from compression tests on ten small clear specimens [9] and a mean value of f_t,90 = 7.5 MPa from perpendicular-to-grain tensile tests on thirty-six specimens [10]. Regarding the fracture properties of E.
globulus, the evaluation of Mode I fracture energy is detailed in [43] and [44] using Double Cantilever Beam (DCB) specimens (Fig. 1a). A mean value of the critical strain energy release rate G_Ic = 0.77 N/mm was derived from the resistance curves (R-curves) following the compliance-based beam method (CBBM) as a data reduction scheme. This method was also applied to determine the critical strain energy release rate in Mode II loading from end-notched flexure (ENF) tests (Fig. 1b), resulting in a mean value of G_IIc = 1.54 N/mm [45]. Table 2 summarises the mean values of these properties that will be considered in the different analytical models to study the splitting behaviour of E. globulus. Splitting tests 32 splitting tests were conducted on planed E. globulus specimens of 29 × 116 mm² cross-section, 580 mm length and 500 mm span. This and similar cross-sections are readily available in this species. Two series of three-point bending tests were carried out on single- and double-dowel connections with different arrangements of loaded edge distances (h_e): (a) A single steel dowel of d = 16 mm in diameter placed at h_e = 2d, 3d and 4d, i.e., relative connection depths α = h_e/h of approximately 0.28, 0.41 and 0.55, within the range of practical interest for splitting [37] (the splitting check was restricted to α ≤ 0.7 in the former German standard [28]). It should be noted that h_e = 4d corresponds to the minimum loaded edge distance set out in Eurocode 5 [30], but there is always the possibility of execution errors when drilling holes on site, which are particularly relevant in structures with small-depth elements, such as the ones studied here. Between seven and nine beams were tested for each configuration. (b) Two steel dowels in a row of d = 16 mm in diameter and quality S355, spaced 3d (48 mm) apart between their centres and placed at h_e = 4d (64 mm) as shown schematically in Fig. 2b. Eight beams were tested with this layout. The diameter of the steel dowels was chosen to be sufficiently thick to prevent yielding. These were loaded by two outer plates made of Eucalyptus globulus with characteristics similar to those of the beams (note that no embedment damage occurred in the plates). A load cell of 50 kN maximum capacity was used. The tests were carried out at a constant cross-head displacement rate of the test device, adjusted to reach failure in approximately 5 min (2 mm/min, 1 mm/min and 0.5 mm/min for the h_e = 4d, h_e = 3d and h_e = 2d layouts, respectively, in the single-dowel tests; 0.7 mm/min in the double-dowel tests). During the loading process, the applied load (P) and the displacement (δ) were recorded. For the latter, two Linear Variable Differential Transformer (LVDT) displacement sensors, Solartron AX/10/S, with ±10 mm measurement range and 20 mV/V/mm sensitivity, were used: one was located on the middle bottom side of the specimen and the other at the top of the steel dowel. Analytical models studied The considered analytical fracture mechanics models for the analysis of the splitting capacity of beams loaded perpendicular to grain by connections are shown in Table 3. A comprehensive review of most of these approaches can be found in [35,41]. Therefore, in this work, only a summary of these models will be presented. Most of the models are related and appear as special cases of a general one, where the failure load is determined for a linear elastic body loaded with a single force, based on the energy balance approach [47] and the fracture mechanics compliance method according to Eq. (1): P_u = [2·G_f/(dC/dA)]^(1/2), A and C being the crack area and the model compliance, respectively. The different analytical models in Table 3 are derived using Eq. (1), making certain assumptions on how to calculate the compliance C(A), which leads to a good agreement with the experimental data.
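To make Eq. (1) concrete, the following minimal sketch evaluates the failure load by numerically differentiating an assumed compliance function. The specific C(A) used here is a hypothetical placeholder; each model in Table 3 corresponds to its own closed-form assumption for the compliance:

```python
# Minimal numerical sketch of the compliance method in Eq. (1):
# P_u = sqrt(2 * G_f / (dC/dA)). The compliance function C(A) below is
# a hypothetical placeholder, not one of the Table 3 models.
import numpy as np

def failure_load(G_f, compliance, A, dA=1e-3):
    """Failure load from the energy balance, Eq. (1).

    G_f        : fracture energy (N/mm)
    compliance : function C(A), compliance vs. crack area (mm/N)
    A          : crack area at which the derivative is evaluated (mm^2)
    """
    dC_dA = (compliance(A + dA) - compliance(A - dA)) / (2.0 * dA)  # central difference
    return np.sqrt(2.0 * G_f / dC_dA)

# Hypothetical compliance growing linearly with crack area (illustration only)
C = lambda A: 1.0e-4 * (1.0 + 5.0e-4 * A)  # mm/N

P_u = failure_load(G_f=0.77, compliance=C, A=1000.0)  # G_Ic of eucalyptus from Table 2
print(f"P_u = {P_u:.0f} N")
```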
The external load is assumed to act at a single point in the middle of the beam. The parameters considered by the models are both material and geometrical. The material properties are the fracture energy, G_f (G_Ic for Mode I loading in most models); the shear modulus of elasticity, G; the longitudinal modulus of elasticity, E; and the tensile strength perpendicular to the grain, f_t. The main geometric parameters are the width of the beam, b; the height of the beam, h; the distance from the connector to the loaded edge of the beam, h_e; the relative height of the connection, α (= h_e/h); the shear correction factor, β_s, which takes the value of 6/5 for a rectangular cross-section according to ordinary beam theory; the number of rows of dowels parallel to grain, n; and the connection width, a_r. Jensen [34] formulated one of the most general models as an extended version of the original Van der Put and Leijten model [33], which will be discussed below, but without any simplifying assumptions, such as neglecting the normal forces in the cracked parts of the beam. The model was derived considering a cracked beam structure modelled by beam elements, all rigidly connected. The failure load is given by Eq. (2). Another model which appears as a special case of the model expressed by Eq. (2) was later proposed by Jensen et al. [35], derived this time from considering that the part of the beam below the crack behaves like a beam with fixed ends, of length the crack and depth h_e. The expression is shown in Eq. (3), which would also be arrived at by assuming h → ∞ for a finite value of h_e (i.e., α → 0) in Eq. (2), that is, all beams except the beam with depth h_e are assumed to be infinitely stiff. When only shear deformations are considered and thus bending deformations are neglected (i.e., finite G with E → ∞), the failure load is given by Eq. (4). A renowned simple analytical model that forms the basis for the design in Eurocode 5 [30] is the one proposed by Van der Put and Leijten [33], expressed in Eq. (5). Due to the similarity in behaviour, the same principle was followed as in the mechanical fracture model for the splitting of beams with notches previously derived by Van der Put [32]. The expression is obtained by analysing the cracked state of a beam under an energy balance approach when the joint load is perpendicular to the grain near the loaded edge, using experimental results from the literature as calibration. Although today it is still the model basis of the normative expression, it has been subject to subsequent alterations and adjustments by different researchers. In this sense, the resulting expression could again be seen as a special case of the general model presented in Eq. (2), neglecting bending deformations and taking into account only shear deformations (that is, G/E → 0) and β_s = 6/5 for a rectangular cross-section. In the cases where h_e/h → 0, the Van der Put model would lead back to the solution presented in Eq. (4). Ballerini [37] proposed a semiempirical model given by Eq. (6), in an effort to better fit the experimental data than using the Van der Put and Leijten formula for single-dowel connections. The work also provides an approach for multiple-dowel connections and is further elaborated with parametric numerical analysis by Ballerini and Rizzi [48]. It considers the influence of the connection width and depth using correction functions applied to the single-dowel formula.
In particular, the correction factor to account for the influence of the width of the connection is f_w = 1 + 0.75·(l_r + l_1)/h ≤ 2.2, where l_r is the spacing between the dowels and l_1 is the distance between the dowel clusters (this correction function will be applied in the case of the double-dowel connections studied in the present work). Table 3. Analytical models for the analysis of the splitting capacity of beams loaded perpendicular to grain by connections (for each approach, the reference — e.g., Larsen and Gustafsson (2001), Van der Put and Leijten (2000) [33] — the analytical model and the corresponding equation number). Jensen postulated other analytical models based on the beam-on-elastic-foundation (BEF) concept [38]. In this case, the crack plane is modelled by springs to which the fracture properties are assigned. After cracking, the beam below the fictitious fracture layer is considered as a beam resting on elastic Winkler springs connected to the upper part, which is assumed to be infinitely rigid (foundation). For a single load acting far from the end of the beam and small crack lengths, the failure load is given by Eq. (7), where c is the effectiveness factor. Unlike the models mentioned above, this expression includes the tensile strength f_t in the parameter f, and the splitting failure load is not proportional to the square root of the fracture energy. Therefore, solutions cannot be encompassed within either linear elastic fracture mechanics (LEFM) or nonlinear fracture mechanics (NLEFM). They belong to quasi-nonlinear fracture mechanics, and LEFM solutions are considered special cases. Assuming E → ∞ or f_t → ∞ in the analytical approach of Eq. (7), the solution P_u = P_u,LEFM would be obtained. The same solution could also be derived from Eq. (4) for β_s = 6/5, and from Eq. (5) when h_e/h → 0. Therefore, it seems feasible to use, as P_u,LEFM in Eq. (7), the linear elastic fracture mechanics solution given by Eq. (5). In this way, the analytical model expressed by Eq. (8) is obtained. When h_e/h → 0, Eq. (8) leads to Eq. (7), and when E → ∞ or f_t → ∞ it becomes Eq. (5). Equation (8) is therefore a semiempirical generalization of Eq. (7) that considers the effect of the total beam height. Franke and Quenneville [39] presented a complete approach based on a quadratic failure criterion in which fracture Modes I and II for tension and shear are considered using the corresponding fracture energies G_I and G_II. It should be noted that virtually all fractures are, in fact, mixed-mode fractures, but the mixed-mode ratio is not considered by the aforementioned compliance method. As the G_I of wood is usually much lower than the G_II, the G_I is usually considered in most splitting models as a conservative assumption and reasonably accurate approximation, but the mixed-mode fracture energy would represent the most realistic situation. The Franke and Quenneville formula also takes into account the width of the connection and the number of rows of dowels, and they found that the geometry of the connection influenced the ratio between fracture Modes I and II. The design proposal is given by Eq. (9) as a result of an experimental and numerical investigation by finite element analysis of more than 100 different connection arrangements. All of the above models are similarly acceptable from a modelling perspective. From a practical design point of view, simple and robust models seem to be more attractive.
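As a quick consistency check of the width-correction factor reconstructed above, the sketch below evaluates f_w = 1 + 0.75·(l_r + l_1)/h ≤ 2.2 for the double-dowel geometry tested in this work (l_r = 3d = 48 mm, a single cluster so l_1 = 0, h = 116 mm); it reproduces the value f_w = 1.31 quoted later in the comparison of results:

```python
def ballerini_fw(l_r, l_1, h):
    """Width-correction factor for multiple-dowel connections (Eq. (6)),
    capped at 2.2 as in the original proposal."""
    return min(1.0 + 0.75 * (l_r + l_1) / h, 2.2)

# Double-dowel series tested here: l_r = 48 mm, l_1 = 0, h = 116 mm
print(f"f_w = {ballerini_fw(l_r=48.0, l_1=0.0, h=116.0):.2f}")  # -> 1.31
```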
Experimental failure loads The failure behaviour of the series of specimens with one and two dowels in a row and different distances from the loaded edges subjected to splitting tests is herein presented. In the single-connection tests, a main crack could always be observed growing from both sides of the dowel (Fig. 3a). This crack developed at slightly different positions, starting at the mid-down part of the dowel contact. The smaller the loaded edge distance, the faster the crack developed with respect to the maximum load capacity. In these test groups, the crack never reached the ends of the beam. Some of the specimens with the largest loaded edge distance (4d) showed embedding deformations under the dowels, but no bending of the dowels, as well as some short cracks beside the dowel (Fig. 3b). The beams with 2d and 3d loaded edge distances did not exhibit any significant embedment under the dowels. In any case, the brittle failure was always characterised by one main crack. Regarding the double-dowel tests, a similar crack growth process was observed as for the single-dowel batches in the sense that just one main crack developed on both sides of the beams. However, in all tests, the crack reached the beam ends, producing a complete separation of the specimen into two parts with a very brittle and sudden failure (Fig. 4a). No embedment deformations were observed under the two dowels (Fig. 4b), unlike in the case of a single dowel at the same loaded edge distance (4d), as the bearing area increases with the two closely spaced dowels. These failure modes are well represented by the corresponding load versus displacement curves measured at the top of the dowels. The results of the single-dowel splitting tests are shown in Fig. 5a, which reveals the different failure behaviour of the investigated connection arrangements. As can be seen, the load-displacement curves obtained from the beams with the greatest edge distance (4d) show a ductile behaviour characterised by embedment stresses and yielding followed by hardening. The connection is still able to force a splitting failure after considerable slip, although splitting is not the primary failure mode. On the contrary, the beams with the smaller loaded edge distances (2d and 3d) show a markedly brittle response. The load-displacement curves of the double-dowel splitting tests with a 4d loaded edge distance are shown in Fig. 5b. In this case, a clear brittle behaviour is observed in all specimens for the α-value studied. Failure loads (P_u) achieved in all splitting tests for the beams with single- and double-dowel arrangements as well as the mean value, the standard deviation (SD) and the coefficient of variation (CoV) are compiled in Table 4. As can be seen from the results of the single-dowel beams, the load-carrying capacity increases with increasing loaded edge distance. The failure loads reached by the beams with two dowels are not necessarily twice as high as those achieved by the beams with a single-dowel arrangement at the same loaded edge distance 4d (α = 0.55); in fact, the mean value is only slightly higher (by 21%). These results are in agreement with those obtained by Quenneville and Mohammad [49], who tested different connection arrangements on spruce-pine glulam beams, including series of one and two fasteners spaced horizontally 5d apart, with α = 0.6. The mean maximum loads in the case of double fasteners were found to be 23% higher compared to the single connection.
Previous experimental investigations by Reshke [50], for the same material and joint geometry as specified in [49], also resulted in a small difference of approximately 14% higher failure load for specimens with two dowels compared to those with one dowel. The fact that two closely spaced dowels give basically the same splitting failure load as a single dowel was also stated by Kasim and Quenneville [51] using the concept of a cluster (group) of fasteners. In their research, the capacity of two rows of bolts separated 4d in the direction parallel to the grain turned out to be lower than or statistically not different from that of one row of bolts on spruce glulam beams with α = 0.44 and α = 0.70. As the spacing of the rows increased, so did the splitting capacity of the connection. The two-row joint behaved almost as two separate single rows if the spacing between the rows was ≥ 2h_e (75% of twice the capacity of the one-row connection was obtained). The angle of load distribution from a bolt towards the loaded edge was estimated to be 45°. In this regard, the former German code [28] limited to less than 0.5h the distance between groups of fasteners in the direction parallel to the grain for them to be considered as one group. For distances ≥ 2h between them, they are treated as separate groups. For distances between 0.5h and 2h, the groups are considered as one group, but a reduction factor is applied. Quenneville and Mohammad [49] stated that a connection can be assumed to be one cluster if the distance parallel to the grain between the rows of bolts does not exceed h_e. The assumed angle of load distribution was 63° in this case. Comparison of theoretical and experimental failure loads The adequacy of the different analytical models compiled in Sect. 3 based on fracture mechanics for the prediction of the splitting capacity in timber connections loaded perpendicular to the grain, in relation to the experimental data obtained for eucalyptus, is discussed here. The material properties E_L, G_LR, G_Ic, G_IIc and f_t,90 experimentally determined for this species (Sect. 2.1) were used as input parameters in the corresponding models. The ratios of the theoretically predicted failure loads to the experimental values for the single-connection arrangement are presented in Table 5. These ratios normalise the strength estimates, allowing for easier comparison. A ratio < 1 represents a conservatively predicted connection strength; a ratio ≈ 1 means an accurately predicted connection strength; and a ratio > 1 depicts an overpredicted connection strength. Figure 6 shows graphically these failure load ratios for the single-dowel connection. The results of the h_e = 2d specimens are represented in red, h_e = 3d in blue, and h_e = 4d in green. The results of each analytical model compiled in Table 3 are represented by a different symbol. As can be observed from Table 5 and Fig. 6, most analytical models overpredict the splitting capacity for single-dowel beams of eucalyptus, which could lead to a dangerous design situation. The models given by Eqs. (2) and (5) produce the worst predictions in single-dowel specimens (it should be noted that Eq. (5) is the basis for the Eurocode 5 splitting capacity formula). This performance is in line with that obtained by Jensen et al. [35] in research using Radiata pine LVL beams. In such research, Eqs. (2) and (5) did not lead to good agreement with the experimental data if the fracture energy obtained by the single-edge-notched beam (SENB) fracture tests was used directly.
However, the agreement was fair if the fracture energy estimated from plate specimen tests was used instead, as its formulation is closely related to the models given by these two equations. This fact seems to suggest that the linear fracture mechanics model, on which Eqs. (2) and (5) are based, may have some shortcomings. The models given by Eqs. (3) and (4) provide better predictions than Eqs. (2) and (5) but still overestimate the splitting capacity. It is worth noting that the former are just special cases of the latter. A similar overestimation is also found with the semiempirical Ballerini model expressed by Eq. (6). Equation (7) stands out with the best predictions of the experimental results in eucalyptus among all models related to the original Van der Put and Leijten equation, where only Mode I is taken into account in terms of fracture energy (Eqs. (2)-(8)). These findings are in line with those of Hindman et al. [52] and Patel and Hindman [53], who concluded that Eq. (7) performed better than Eq. (5) for Southern pine machined stress rated (MSR) lumber, laminated veneer lumber (LVL) composed mostly of southern pine with some eucalyptus, and also for yellow poplar parallel strand lumber (PSL). This can be justified by the fact that in Eq. (7), the tensile strength perpendicular to the grain of the member is considered in addition to the fracture energy. It is worth remembering that Eq. (7) is enclosed within quasi-nonlinear fracture mechanics and the LEFM model described by Eq. (5) could be considered a special case of it. Equation (8) also includes the tensile strength perpendicular to the grain, and it should be recalled that it was postulated as a semi-empirical modification of Eq. (7) to take into account the total height of the beam. Thus, in the work of the authors who formulated this model [35], Eqs. (7) and (8) gave similar predictions on Radiata pine LVL beams with two dowels aligned along the grain, when the distance to the loaded edge was 4d (α = 0.21). However, for larger loaded edge distances of 8d (α = 0.43), predictions from Eq. (8) were clearly better. Nevertheless, this improvement in results using Eq. (8) instead of Eq. (7) is not observed in the eucalyptus beams of the present study for any configuration of loaded edge distances in single-dowel joints (nor in beams with two dowels, as will be seen below). The only model that provides a conservative prediction of splitting failure for most of the eucalyptus specimens tested with a single-dowel connection is the one presented by Franke and Quenneville (Eq. (9)) [39], which, unlike the previous ones, also considers the fracture energy in Mode II as a material parameter. In this respect, the authors stated that the fracture values achieved showed that the geometry of the connection, the distance from the loaded edge, and the depth of the beam influence the relationship between fracture Mode I and Mode II. Regarding the influence of loaded edge distance on the theoretical/experimental ratios, there seems to be a tendency for the mean ratio to increase as the distance increases from 2d to 3d for all models. However, in the case of the 4d distance, the ratio decreases to a greater or lesser extent for most models. Even so, the magnitudes of the differences are not uniform. Uniformity of the trends of the theoretical/experimental ratio in relation to the loaded edge distance was also not found by Hindman et al. [52] using the models described in Eqs. (5) and (7) for beams with the same span/depth ratio as those studied in the present work.
Similarly, the ratios of the theoretically predicted failure loads to the experimental values of the specimens with double-dowel connections are presented in Table 6, together with the mean values of each set of results. A visual comparison of these failure load ratios for the double-dowel connections is shown in Fig. 7, where each symbol represents the ratio obtained using a different analytical model. For this arrangement, Eq. (7), based on quasi-nonlinear fracture mechanics and including the tensile strength perpendicular to the grain, again stands out, being in this case the one that provides the most conservative predictions. In any case, it is worth noting that for the two-dowel configuration, there is a higher number of specimens giving ratios below 1 compared to the single-dowel arrangements. Although all the models studied give load capacity predictions closer to the experimental values for beams with two dowels than for single dowels, Eqs. (2) and (5), the latter being the basis of the Eurocode 5 formula, are again the least appropriate for eucalyptus. The models described by Eqs. (6) and (9) are the only ones among those studied that include some parameter related to the multiple-dowel connection. In particular, Eq. (6) includes a correction factor for the splitting capacity to account for the influence of the connection width (f_w = 1.31 for the double connection studied), but also gives a poor prediction of the experimental values (ratio = 1.34, see Table 6). However, if this correction factor had not been considered, the average theoretical versus experimental failure load ratio would result in 1.02, similar to the ratios obtained using Eq. (4), and therefore significantly better predictions than considering the correction factor. In turn, Eq. (9) formulated by Franke and Quenneville, considering the mixed mode of fracture, again gives good estimates of the experimental values, as in the case of single-bolt connections, although this formula was calibrated using spruce laminated beams [39] and extended to Radiata pine LVL [54]. The results suggest that more comprehensive analytical models, where more material parameters are considered, such as Mode I and II fracture energies and tensile strength perpendicular to the grain, lead to more reliable predictions. In any case, further studies with different connection geometries, species and products should be performed to come up with an expression that can be optimally applicable to hardwoods. Applicability of the Eurocode 5 expression As mentioned above, a manifestation of Eq. (5) proposed by Van der Put and Leijten [33] appears in Eurocode 5 [30] as a specific splitting capacity check for connections in softwoods loaded perpendicular to the grain (however, application to hardwoods and other wood-based products is not specified). It considers the verification of the shear force acting on the beam by the following expression: F_v,Ed = max(F_v,Ed,1; F_v,Ed,2) ≤ F_90,Rd (10), where F_v,Ed,1 and F_v,Ed,2 are the shear forces on either side of the connection and F_90,Rd is the design splitting capacity. For softwoods connected by fasteners other than the punched metal plate type, the characteristic value of the splitting load is described in the code as F_90,Rk = C_1·b·√(h_e/(1 − h_e/h)) (11), where b and h are the width and depth of the beam, respectively, h_e is the distance from the loaded edge to the dowel location, and C_1 = 14 N/mm^1.5. From its origins, this value of 14 derives from what is known as the apparent fracture parameter, (G·G_c)^0.5, which represents the square root of the shear modulus, G, times the critical energy release rate, G_c, in the form C_1 = (G·G_c/0.6)^0.5.
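The following sketch evaluates the Eurocode 5 splitting check as reconstructed above (Eqs. (10) and (11)) for the geometry of the beams tested here (b = 29 mm, h = 116 mm, h_e = 4d = 64 mm), including the design adjustment with k_mod = 0.9 and γ_M = 1.3 used for Table 8; the printed numbers are illustrative, not values taken from the paper's tables:

```python
import math

def f90_rk(b, h, h_e, C1=14.0):
    """Characteristic splitting capacity F_90,Rk in N (Eq. (11))."""
    return C1 * b * math.sqrt(h_e / (1.0 - h_e / h))

def f90_rd(b, h, h_e, k_mod=0.9, gamma_M=1.3, C1=14.0):
    """Design splitting capacity F_90,Rd in N."""
    return k_mod * f90_rk(b, h, h_e, C1) / gamma_M

Rk = f90_rk(b=29.0, h=116.0, h_e=64.0)  # softwood factor C1 = 14 N/mm^1.5
Rd = f90_rd(b=29.0, h=116.0, h_e=64.0)
print(f"F_90,Rk = {Rk / 1000:.2f} kN, F_90,Rd = {Rd / 1000:.2f} kN")
# The check of Eq. (10) then compares F_90,Rd with the larger of the
# shear forces acting on either side of the connection.
```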
The apparent fracture parameter was used by Van der Put and Leijten as a fitting parameter by taking test data from a limited number of sources with different connection types. The mean lower bound of the apparent fracture parameter of 12 N/mm^1.5 was selected, leading to a factor C_1 = 15.5 N/mm^1.5. To obtain a characteristic value, the factor C_1 was further reduced to 15.5·2/3 ≈ 10 N/mm^1.5, and this was the value suggested for use in the code design criterion. Even so, the value finally adopted by Eurocode 5 was C_1 = 14 N/mm^1.5, which corresponds to an apparent fracture parameter of (G·G_c)^0.5 = 10.84 N/mm^1.5 (= 14·0.6^0.5). This apparent fracture energy parameter assumed in the expression of the European code has been discussed since its origins. Some studies suggest obtaining individual C_1 factors for each species [54], as G_I fracture energy values are not specified in Eurocode 5 or related product standards. Therefore, a comprehensive experimental determination of this parameter would be desirable for each species or wood product. Proceeding in a similar way to that mentioned above, the values of the factor C_1 of the Eurocode 5 formula that would correctly predict the experimental failure loads obtained for eucalyptus are shown in Table 7. As can be seen, the lowest characteristic value of the factor C_1 (calculated as 2/3·C_1) results in 19.45 N/mm^1.5, which is 1.39 times the value established by Eurocode 5 for softwoods (C_1 = 14 N/mm^1.5). However, when considering the mean values of G and G_c obtained from the experimental tests on Eucalyptus globulus (G_LR = 1926 MPa and G_Ic = 0.77 N/mm, respectively), the apparent fracture parameter (G·G_c)^0.5 results in 38.51 N/mm^1.5. It leads to C_1 = 49.71 N/mm^1.5, which can be reduced by 2/3 and gives the characteristic value of 33.14 N/mm^1.5, 2.37 times the value established by Eurocode 5 and higher than the value obtained from the process shown in Table 7. This high C_1 value of eucalyptus responds to the high performance of hardwoods compared to softwoods due to their anatomical differences. The splitting capacity of the eucalyptus specimens was also predicted by directly applying the Eurocode 5 formula developed for softwoods (Eq. (11)), where C_1 = 14 N/mm^1.5. The results are presented in Table 8. In this case, the design capacity does not explicitly depend on any material parameter. The only input parameters are the beam depth and width and the loaded edge distance from the dowel. The results were adjusted to the design values to account for the type of material, the duration of loading, and the effects of the moisture content (k_mod = 0.9 and γ_M = 1.3). Table 8 also includes the averages and CoV values of the design factor of safety (DFS) for each configuration, defined as the ratio of the test capacity strength to the Eurocode 5 design splitting capacity. The DFS values generally ranged between 2.5 and 4.6, with no clear trend with respect to the loaded edge distance. Therefore, the prediction formula included in Eurocode 5 for softwoods is prone to underestimate the splitting capacity of the eucalyptus specimens, leading to very conservative predictions.
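The back-calculation of a species-specific C_1 described in this section reduces to simple arithmetic on the measured G_LR and G_Ic; the short sketch below reproduces the values quoted in the text (38.51, 49.71 and 33.14 N/mm^1.5):

```python
import math

G = 1926.0   # mean shear modulus G_LR (MPa = N/mm^2), from Sect. 2.1
G_c = 0.77   # mean Mode I critical energy release rate G_Ic (N/mm)

apparent = math.sqrt(G * G_c)      # apparent fracture parameter (N/mm^1.5)
C1 = math.sqrt(G * G_c / 0.6)      # C_1 = (G*G_c/0.6)^0.5
C1_char = 2.0 / 3.0 * C1           # characteristic value, as in the code derivation

print(f"(G*G_c)^0.5 = {apparent:.2f} N/mm^1.5")  # -> 38.51
print(f"C_1         = {C1:.2f} N/mm^1.5")        # -> 49.71
print(f"2/3*C_1     = {C1_char:.2f} N/mm^1.5, i.e. {C1_char / 14.0:.2f}x the EC5 value")
```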
Conclusions The results of the splitting capacity of Eucalyptus globulus L. beams loaded perpendicular to the grain by single and double steel dowel connections with different loaded edge distances are provided. These are essential preliminary data to gain knowledge on the splitting behaviour of hardwoods, as there is no specific equation for its prediction included in Eurocode 5, which covers only softwoods. From the experimental study, for the geometry of the tested beams, the load-carrying capacity of a single-dowel connection increases with increasing distance to the loaded edge. Distances of 4d result in ductile failures and smaller distances in brittle failures. However, in the case of two dowels at 4d, the load-carrying capacity is similar to that of one dowel and the failure is clearly brittle. This is of particular relevance, as the Eurocode 5 formula does not take into account the number of dowels. Of the eight analytical models based on fracture mechanics studied (including the model underlying the Eurocode 5 formula) and considering the actual eucalyptus properties required as input to these models, in general, all of them overestimate the splitting failure load of single-dowel eucalyptus beams, except the only model that considers Mode II fracture energy in addition to Mode I fracture energy in its expression. For the double-dowel layout, an analytical model based on quasi-nonlinear fracture mechanics, which also includes the tensile strength perpendicular to the grain of the member, provides the most conservative predictions and also good agreement for the single-dowel beams. In all arrangements, the analytical model underlying the Eurocode 5 expression gives the worst agreement with the experimental eucalyptus results. The splitting capacity formula adopted by Eurocode 5 for softwoods, which does not take into account any material parameter, only the depth and width of the beam and the distance from the dowel to the loaded edge, proves to be very conservative in predicting the ultimate failure load of Eucalyptus globulus with the connection geometries analysed in this study. Further experimental research on the splitting behaviour of this and other hardwood species would be desirable in order to derive an optimal general formula valid for all of them, taking into account the different material properties, as well as the geometry and type of connection.
Epilepsy Research in Mali: A Pilot Pharmacokinetics Study on First-Line Antiepileptic Drug Treatment Background and Purpose The indication and benefit of plasma levels of antiepileptic drugs (AEDs) have been debated in the monitoring of people living with epilepsy, and the epilepsy treatment gap has largely been documented in developed countries. This study aimed to highlight the epilepsy treatment gap between rural and urban Mali. Methods We conducted a pilot study on AED treatment from September 2016 to May 2019. For 6 months, 120 children and young adults living with epilepsy (rural site, 90; urban site, 30) received phenobarbital, valproic acid and/or carbamazepine. At our rural study site, we determined the AED plasma levels, monitored the frequency, severity and duration of seizures, and administered monthly the McGill quality of life questionnaire. At our urban study site, each patient underwent an electroencephalogram and brain computed tomography scan without close monitoring. Results At the rural study site, patients were mostly on monotherapy; AED levels at 1 month (M1) (n=90) and at 3 months (M3) (n=27) after inclusion were normal in 50% at M1 versus 55.6% at M3, low in 42.2% at M1 versus 33.3% at M3 and high in 7.8% at M1 versus 11.1% at M3. AED levels at M1 and at M3 were significantly different, p<0.0001. By M3, seizures (n=90) were <1/month in 26.7%, and lasted less than 1 minute in 16.7%. After a yearlong follow-up, all 90 patients reported a good or excellent quality of life. At our urban study site, patients (n=30) were on carbamazepine and valproic acid in 66.67% and monotherapy (carbamazepine) in 33.33%. By November 2018, only six out of 30 patients (on bi-therapy) were still taking their medications. Conclusions Epilepsy diagnosis and treatment are a real concern in Mali. Our data showed that appropriate AED treatment with close follow-up resulted in a better quality of life of patients in rural Mali. We will promote the approach of personalized medicine in AED treatment in Mali. Introduction In this pilot study, we addressed a relevant research question on the epilepsy treatment gap in a developing country, Mali. Our findings highlighted the importance of the disparity between urban and rural Mali in the diagnosis and drug treatment of epilepsy. We proposed a sequential evaluation of the frequency and duration of seizures and the quality of life of patients as a follow-up tool for people living with epilepsy and antiepileptic drug (AED) treatment in rural Mali. Our data have been discussed and put into the context of existing knowledge in the literature and a review of previous work on epilepsy in Mali. This study will significantly contribute to the ongoing debate on how and when to use plasma dosage of AEDs in clinical care. This pilot pharmacokinetic study was our first step towards future pharmacogenomic studies for personalized AED treatment in Mali, West Africa. Epilepsy is a chronic disease with a severity ranging from a benign idiopathic condition to extremely severe forms due to encephalopathies. 1,2 People living with epilepsy may experience spontaneous resolution of their condition in 70% of cases, but 20-30% will have resistant epilepsy over time. 3 Epilepsy affects 50 million people worldwide and 5-10% of the total population in the United States of America (USA) and Europe. 4 Sub-Saharan Africa hosts 80% of the world epilepsy population with a prevalence of 15.4% and a frequency of 50-105 cases each year.
5 However, the global burden of epilepsy has been difficult to estimate in Africa due in part to the heterogeneity of the reports. 6 Uncontrolled seizures can seriously impact the socio-professional and family life of patients. 3 Care in epilepsy usually requires AEDs. The efficiency of AED treatment depends on the etiology and clinical forms of epilepsy as well as the adherence of patients to the treatment. 7,8 The overall goal of AED treatment is to prevent further seizures, avoid adverse effects, and enable patients to be active with an improved quality of life. 8,9 This requires an appropriate choice and posology (efficient mg/kg/day) of AEDs coupled with a good treatment monitoring plan. 10 Monitoring of AED treatment follows different standards worldwide. Best medical practice requires that an ambulatory patient start with monotherapy (efficient mg/kg/day). Bi-therapy or add-on therapy should be rationally chosen. In the USA and Europe, sequential monitoring of AED plasma levels is recommended, especially for clinical toxicity, clinical response, and switching or adding another AED to the treatment. 11 In Sub-Saharan Africa, best clinical practices in epilepsy vary widely across countries. As a trend, in low-income countries, about 60% of people living with epilepsy receive no antiepileptic treatment due largely to economic and social reasons, 12 and a much higher proportion have no access to electroencephalogram (EEG) and AED plasma dosage tests. In middle-income countries (Tunisia and Kenya), continuous EEG monitoring and sequential AED plasma levels are only for pediatric or non-traumatic comatose patients. 13,14 In high-income African countries (South Africa and Egypt), clinical application of epilepsy genetics in pediatric or refractory patients has been considered. 15,16 In Mali, one of the poorest countries in the world, the prevalence of epilepsy has been estimated at 14.6-15.6%. 17 Generalized seizures are well-known and recognized by many health professionals across the country. 17 However, the socio-cultural context deeply affects the beliefs and perceptions of the disease, and affected women of childbearing age are the most stigmatized. 9 Epilepsy screening, diagnosis and treatment are widely disparate between rural and urban areas. In the capital city Bamako, people living with epilepsy are referred from public and private health facilities to neurologists at the two main university teaching hospitals, where neurologists, psychiatrists and child neurologists can easily be found. 18 Here, patients benefit from good medical history taking and physical examination, EEG and second-line AEDs. Outside the capital city, people living with epilepsy are diagnosed and treated by medical doctors or general practitioners (GPs) as well as trained nurses. In 2018, the non-governmental organization Santé Sud trained 18 GPs from rural Mali to diagnose and treat mainly convulsive epilepsy. The knowledge gap in epilepsy among general practitioners can be traced back to their formal training at medical school. 19 For example, to date, plasma AED level monitoring is part of the follow-up of AED treatment, to either monitor side effects or change/add AEDs, but it is not yet done due to financial constraints. The epilepsy treatment gap is defined as the proportion of people who should normally be treated but are not receiving any treatment. This gap ranges from 75% to 90% in low-income African countries.
20 Disparity in epilepsy diagnosis and treatment is obvious between rural and urban settings in Mali. 17 Therefore, the responsibility falls on the shoulders of the scientific community in general and the Malian league against epilepsy in particular to coordinate the effort in addressing epilepsy care as a public health problem in Mali. We wanted to estimate the epilepsy diagnostic and drug treatment gap between rural and urban areas in Mali. How could such a treatment gap be minimized or bridged as much as possible through evidence-based practices? To answer these questions, we hypothesized that implementing two different questionnaires (a well-designed and easy-to-use sequential epilepsy treatment follow-up and the McGill quality of life) and, eventually, plasma levels of AEDs as part of the AED treatment could lead to a better quality of life in people living with epilepsy in rural Mali. In this study, we sought to highlight the epilepsy treatment gap between rural and urban settings and to raise awareness for changing mindsets on AED treatment in Mali. Patient recruitment and data collection We conducted a prospective pilot study from September 2016 to May 2019. Patients received free first-line AEDs (phenobarbital, valproic acid, and carbamazepine) for 6 months, prescribed by a neurologist, psychiatrist or pediatrician. Our study sites included Bamako, the capital city, representing the urban setting, and Segou, a rural site (Fig. 1). We chose our two study sites because all the neurologists and most general physicians with an interest in epilepsy worked at the three main university teaching hospitals in the capital city, Bamako; epilepsy diagnosis and management should therefore be the best possible in the country. Rural Segou is renowned for one of the highest rates of consanguineous and even endogamous marriage and a relatively high likelihood of marriage between people living with epilepsy. At our urban study site, from September 2016 to August 2017, we enrolled 30 autistic children aged 3-14 years old, who were treated for focal onset epilepsy with impaired awareness. At inclusion, patients underwent ophthalmologic and ear, nose and throat examination, EEG, brain computed tomography (CT) scan and received AEDs for 6 months. Plasma dosage of AEDs was carried out at the first month M1 (n=30) and at three months M3 (n=6), and patients were followed up till November 2018. EEG recordings were independently reviewed by a neuro-pediatrician and a neurologist. No patient was given phenobarbital in Bamako due to the suspicion of autism spectrum disorder (ASD) as a co-morbidity of epilepsy and the younger age of our study participants. It is well-known that phenobarbital has been associated with hyperactivity and a low appreciation rate when prescribed to young children with ASD. 21 At our rural study site in Segou, from October 2017 to November 2018, we recruited and followed up 90 patients with generalized epilepsy at the local community health center. Frequency and severity measures, the Quality of Life in Epilepsy-31, and the WHO Quality of Life-BREF scale have each been used separately to monitor the effectiveness of epilepsy treatment. [22][23][24] We chose to use a follow-up questionnaire and the McGill quality of life in our study. First, we administered to each patient our follow-up questionnaire to inquire about the frequency and duration of seizures at M1 and M3, complemented by a monthly phone call about the occurrence of side effects from the initiation of the treatment.
We also administered the McGill quality of life questionnaire to inquire about the global quality of life of patients at the following time points: M1, M3, M6, M9, and M12. Epilepsy frequency was categorized before the treatment into <60/month and ≥60/month, and after the initiation of the treatment into <1/month and ≥1/month. Epilepsy duration was categorized before the treatment into <5 minutes and ≥5 minutes, and after the initiation of the treatment into <1 minute and ≥1 minute. Based on the patient's subjective assessment of the most severe seizure in the last month before the administration of the study questionnaire, epilepsy intensity was grouped into three categories before and after the initiation of the treatment, as follows: mild (ranging from not assessable due to absence of seizures to less severe than a regular seizure before treatment), moderate (no change in the perceived seizure severity even with an efficient mg/kg body weight antiepileptic drug treatment), and severe (the patient felt the initiation of the antiepileptic drug worsened his/her seizures). AED plasma levels were determined in patients at M1 (n=90) and at M3 (n=27) (Fig. 2). Plasma sample collection and analysis At our rural study site, after community consent and individual informed consent, we took two 4-mL blood samples from each enrolled patient in the morning. Within 1 hour, we centrifuged the whole blood to collect plasma samples in two 1.5-mL Eppendorf tubes, stored immediately at 4 °C. Within 72 hours, plasma levels of AEDs were determined using an immuno-enzymatic method at the Algi laboratory or a chromatography-based approach at the Rodolphe Merieux laboratory in Bamako. We received AED plasma levels with the range of normal values (therapeutic range in mono- or bi-therapy) by age and sex, which we easily categorized as low, normal or high AED plasma levels. DNA extraction DNA was extracted from the buffy coat using the Gentra kit Cat#158545 for future pharmacogenomic studies. Identification of cytochrome P450 isoforms for AED pharmacogenomics studies We searched the dbSNP database to identify isoforms of cytochrome P450 implicated in the metabolism of first-line AEDs (phenobarbital, carbamazepine, and valproic acid) (Table 1). Ethical considerations Our study protocol, consent forms and study questionnaire were approved in 2016 by the Institutional Review Board (IRB) at the Faculty of Medicine and Odonto-Stomatology (FMOS). In addition to the community consent in rural settings, we sought and obtained individual informed consents and assents. Each participant was compensated at an IRB-approved rate and transport fees were reimbursed. Only the medical student, her mentor and the data analyzer had access to the database. Information was coded and the data analyzer did not have access to the code keys. To maximize the benefits and minimize the harm, and also to ensure adherence to treatment, we provided patients with a 6-month supply of AEDs. Plasma dosages were determined free of charge for patients. Statistical analysis We used SPSS version 25 (IBM Corp, Armonk, NY, USA) to generate frequency tables and compare data at different time points using the chi-square test, Fisher's exact test, or ANOVA. p-values less than 0.05 were considered statistically significant.
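As an illustration of the low/normal/high categorization of plasma levels described above, the sketch below checks a measured concentration against a therapeutic range. The reference ranges used here are typical published adult monotherapy ranges inserted for illustration only; in the study, the ranges were supplied by the laboratory by age, sex and mono- or bi-therapy:

```python
# Illustrative categorization of AED plasma levels. The therapeutic
# ranges below are typical published adult monotherapy ranges, used as
# placeholders; they are not the ranges applied in the study.
THERAPEUTIC_RANGE_UG_ML = {
    "phenobarbital": (10.0, 40.0),
    "carbamazepine": (4.0, 12.0),
    "valproic acid": (50.0, 100.0),
}

def categorize_level(drug, level_ug_ml):
    """Return 'low', 'normal' or 'high' for a measured plasma level (ug/mL)."""
    low, high = THERAPEUTIC_RANGE_UG_ML[drug]
    if level_ug_ml < low:
        return "low"
    if level_ug_ml > high:
        return "high"
    return "normal"

print(categorize_level("phenobarbital", 7.5))    # -> low
print(categorize_level("carbamazepine", 8.0))    # -> normal
print(categorize_level("valproic acid", 130.0))  # -> high
```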
Results In our cohort, the sex ratio was 1.8 in favor of males. At the urban study site, patients (n=30) were 10 years old on average, with extremes of 3 and 14 years old. All patients were suspected of having ASD with focal onset epilepsy. None was prescribed phenobarbital. In total, 66.67% of patients (20/30) were on bi-therapy (carbamazepine and valproic acid) at inclusion. Patients had good adherence to their treatment (100%) by the end of the 6-month treatment period. Only 20% (6/30), on bi-therapy, did both plasma AED dosages at M1 and M3, and they were still taking their medications by the end of the follow-up in November 2018 (27 months in total). Thus, 80% (24/30) were off medication. EEG abnormalities supporting the clinical diagnosis of epilepsy were found in 55.8% (17/30). Brain CT scan was normal in all 30 patients. At our rural study site, the mean age of patients (n=90) was 17.7±9.15 years old. The most represented age range was 16-30 years old with 50% (45/90). Patients clinically diagnosed with generalized epilepsy were treated with either phenobarbital, carbamazepine or valproic acid in monotherapy. The adherence rate to the AED treatment was 79% (71/90) after the first 6 months. The co-morbidity of epilepsy and ASD was found in 8.9% (8/90). Patients estimated their global quality of life as good or excellent, from 95.6% (86/90) at inclusion to 100% (90/90) at the 12-month time point. No correlation was found between the frequency and severity of epilepsy and the quality of life of patients in our cohort (Table 5). Discussion Our findings suggested an overall improved quality of life of people living with epilepsy in both rural and urban areas through close monitoring using our study questionnaires (the easy-to-use sequential epilepsy treatment follow-up and the McGill quality of life). Such clinical improvement in epilepsy could be better captured using the McGill quality of life administered every 3 months over 12 months from inclusion. After 12 months of follow-up, all enrolled patients or their caretakers/parents from our rural study site estimated their global quality of life to be good or excellent. One underlying reason for such improved quality of life in people living with epilepsy was the ready availability of free AEDs for the first 6 months of treatment. Good adherence to the AED treatment is essential to control epilepsy. We made sure that AEDs were available and affordable to patients at their local community health center. Another reason for such improved quality of life in people living with epilepsy was the regular doctor visits every 3 months. Overall, our rationale for developing an epilepsy follow-up questionnaire based on the frequency, severity, and duration of the seizures, along with the McGill quality of life, was to foster a solid physician-patient relationship and regular doctor visits for a better outcome of the treatment. Poor follow-up has been reported in a large multinational study in Europe in which approximately 50% of the people living with epilepsy had not seen their specialist in the year preceding the survey. 25 An optimal epilepsy treatment plan is partly based on an accurate diagnosis of the patient's seizure type(s), an objective measure of the intensity and frequency of the seizures, and awareness of the side effects of available AEDs. 26 At our rural study site, we found at the 3-month time point during the antiepileptic treatment that seizures were less frequent (<1/month) in 26.7% (24/90), and shorter than 1 minute in 16.7% (15/90) (Table 2).
Even though seizure severity and frequency have a limited negative impact in people living with epilepsy, mostly on the social aspects of quality of life, 27 seizure frequency, not severity, is the main factor that determines in practice how stigma and overall quality of health are perceived. 28 Our follow-up questionnaire could be useful to physicians to monitor epilepsy treatment at rural community health centers in Mali. Across our rural and urban study sites, the plasma level of AEDs was significantly different between M1 and M3 in the 27 patients who were tested twice (p<0.0001). Only half of our study participants had a normal plasma level of AED at the first test (Table 3). As long as patients were seizure free and experienced no serious side effects, low or high plasma levels of AEDs were not considered concerning. Physicians and patients expect that an optimal plasma level directly correlates with the clinical efficacy of the treatment. The total plasma AED level is just an indication of the total quantity of medication in the blood, but only the free proportion of the medication (not bound to a protein) is active. Nevertheless, prescription of AEDs solely based on body weight or an efficient mg/kg/day by age and sex as suggested by the drug manufacturer may not always guarantee such an expectation. Our study participants received monotherapy at our rural study site and polytherapy in Bamako. Polytherapy usually comes with a suboptimal mg/kg/day dose for each individual medication, due to caution regarding the synergistic mechanisms of action of AEDs. 29 Valproic acid and carbamazepine were always combined to treat patients in Bamako. This could be the case when people living with epilepsy consulted more than one healthcare provider or, less likely, the same specialist. On the other hand, epilepsy associated with ASD, and especially focal-onset epilepsy, is more prone to become resistant to first-line AEDs. Physicians then need more than one AED to control the seizures. Knowing that anticonvulsant polytherapy has widely and traditionally been used in the treatment of epilepsy without any real added value over monotherapy, 29 monotherapy with phenobarbital was mainly prescribed in rural Mali. The affordability of phenobarbital for our study participants after our follow-up period was also a key determining factor for monotherapy with phenobarbital. The disparity between rural and urban Mali in terms of healthcare services and AEDs available to people living with epilepsy was obvious. For instance, before our study, phenobarbital used to be discontinued at the local community health center in Segou. At our rural study site, epilepsy was fairly common, and the most frequently represented age range was 16-30 years old, with 50% (45/90). This result is consistent with the highest prevalence of epilepsy occurring in the first two decades of life (70.8%) in Libya (n=568). 30 Active convulsive epilepsy usually starts in childhood, 31 and it peaks in the age group 20-29 years old with a prevalence of 11.5/1,000. A high prevalence of epilepsy, especially in young adults, has important negative consequences for both the workforce and the community in Sub-Saharan Africa. 31 Notably, at our rural study site, parents/caregivers of people living with epilepsy were farmers, livestock breeders, or fishermen. In addition, moderate and severe epilepsy may not be compatible with independence or autonomy for the patients most of the time. Taken together, parents may be required to watch closely over their children living with epilepsy.
At the least, an improved quality of life in people living with epilepsy may free up their caregivers to attend to their daily activities. Our findings should prompt decision makers to issue health policies to train health professionals in epilepsy and to make essential antiepileptic medicines and EEG available to the district health center in each province of the country. We have shown an improved quality of life with close monitoring of AED treatment at our rural study site. How much of this improved quality of life translated into patients' autonomy or independence, relieving people living with epilepsy, their families, and their community from the burden of epilepsy? We do not yet know. In the future, a large and well-designed epilepsy prevalence study will help reassess the cartography of epilepsy across the country. Such a study could be combined with genetic and pharmacogenomic studies on epilepsy in rural Mali. Looking ahead, genetic testing to predict adverse effects of AEDs will result in more efficacious and safer AED treatment. 32 For that purpose, the plasma level of AEDs will be even more relevant for characterizing study participants clinically. Pharmacogenomic studies on classic AEDs may focus on cytochrome P450 genes (CYP2B6, CYP2C9, and CYP3A4) (Table 1). To ensure the quality of life of people living with epilepsy, physicians will in years from now be asking "Which drug?" and "Which dose?" for each patient living with epilepsy. 32,33 Plasma dosage of AEDs may not be commonly used in clinical practice, but it will become more relevant for pharmacogenomic studies and personalized medicine. Our study had a few limitations. First, our initial study design was to find perfectly matched controls (age, sex, clinical diagnosis, and AED treatment) in rural areas for the patients recruited at the urban study site. We failed to follow the study participants concomitantly at both study sites. Alternatively, we stuck to the reasonably appropriate monitoring that could be implemented routinely on a case-by-case basis in both rural and urban settings. Second, we did not compare data from the rural and urban study sites because our patient population was not representative of pediatric epilepsy in the urban setting and was completely different from our patient population in the rural setting (age group, type of epilepsy, and most-used AED). The most comparable data were the plasma levels of AEDs, but neither the treatment regimen (mono- vs. bi-therapy) nor the age range of patients was similar. Such a comparison would be almost erroneous and clinically meaningless. Despite the limitations, this pilot study yielded useful preliminary data that highlight the disparities in epilepsy diagnosis and treatment between rural and urban areas and the potential to improve the monitoring of epilepsy treatment in both settings in Mali. Our data raise questions about the epilepsy treatment gap among stakeholders and health policy makers in Mali. Without strong evidence to drive policy change, the epilepsy diagnosis and treatment gap will persist for a long time in developing countries. 20 Our data showed that the quality of life of people living with epilepsy could be improved using close monitoring of AED treatment in the rural setting in Mali. Epileptic drug treatment was disparate between the rural and urban settings in Mali.
To close the equity gap in epilepsy diagnosis and treatment in Mali, health professionals in rural areas should be trained in epilepsy, and mindsets about plasma dosage of AEDs in the urban setting should also be changed to improve the current standard of epilepsy diagnosis and treatment. The scientific community of epilepsy should come together in a concerted effort to tackle these challenges. With continuous and well-coordinated epilepsy research collaboration and public engagement, personalized AED prescription is potentially feasible in Mali in the future.
Analysis of the FnrL regulon in Rhodobacter capsulatus reveals limited regulon overlap with orthologues from Rhodobacter sphaeroides and Escherichia coli

Abstract
Background: FNR homologues constitute an important class of transcription factors that control a wide range of anaerobic physiological functions in a number of bacterial species. Since FNR homologues are some of the most pervasive transcription factors, an understanding of their involvement in regulating anaerobic gene expression in different species sheds light on evolutionary similarities and differences. To address this question, we used a combination of high-throughput RNA-seq and ChIP-seq analysis to define the extent of the FnrL regulon in Rhodobacter capsulatus and related our results to those of FnrL in Rhodobacter sphaeroides and FNR in Escherichia coli.
Results: Our RNA-seq results show that FnrL affects the expression of 807 genes, which accounts for over 20 % of the Rba. capsulatus genome. ChIP-seq results indicate that 42 of these genes are directly regulated by FnrL. Importantly, this includes genes involved in the synthesis of the anoxygenic photosystem. Similarly, FnrL in Rba. sphaeroides affects 24 % of its genome; however, only 171 genes are differentially expressed in common between the two Rhodobacter species, suggesting significant divergence in regulation.
Conclusions: We show that FnrL in Rba. capsulatus activates photosynthesis, while in Rba. sphaeroides FnrL regulation is reported to involve repression of the photosystem. This analysis highlights important differences in transcriptional control of photosynthetic events and other metabolic processes controlled by FnrL orthologues in closely related Rhodobacter species. Furthermore, we also show that the E. coli FNR regulon has limited transcriptional overlap with the FnrL regulons from either Rhodobacter species.
Electronic supplementary material: The online version of this article (doi:10.1186/s12864-015-2162-4) contains supplementary material, which is available to authorized users.

Background
The purple non-sulfur α-proteobacterium Rhodobacter capsulatus possesses a versatile metabolism that allows growth in a wide variety of environments. Much is known about its photosynthetic growth metabolism along with transcription factors that control anaerobic photosystem gene expression, such as RegA, CrtJ, and AerR [1-5]. However, the redox-responding transcription factor FnrL, which is a homologue of FNR (for fumarate nitrate reduction) from E. coli, has not been well characterized in Rba. capsulatus [5-7]. FnrL from Rba. capsulatus is reported to have a role in the production of respiratory cytochromes but not in the production of the photosystem machinery [2,5,7,8]. Beyond these observations, the involvement of FnrL in controlling anaerobic gene expression is unknown. FNR from E. coli has a central role in controlling many changes in metabolism that occur when these cells shift from aerobic to anaerobic growth conditions [6,9]. FNR directly senses changes in oxygen tension via a redox-sensitive 4Fe-4S cluster that is coordinated by four cysteines [10]. Under anaerobic conditions, the iron cluster is stable, allowing FNR to form a dimer that binds to target DNA sequences [11,12]. However, under aerobic conditions, this cluster becomes oxidized, leading to its disassembly with a concomitant loss of FNR dimerization and ultimately loss of DNA binding activity [8,11].
FnrL from Rhodobacter capsulatus, and its homologue in Rhodobacter sphaeroides, also contain four Fe-coordinating cysteines as described for E. coli FNR; however, their placement within the peptide sequence differs from that in FNR. This suggests that the coordination of the 4Fe-4S cluster may be altered and/or that there exist dissimilarities in redox regulation and allosteric behavior between the FnrL homologues and FNR. The FNR regulon in E. coli has been well characterized, most recently using a combination of the deep-sequencing technologies RNA-seq and chromatin immunoprecipitation sequencing (ChIP-seq) [6]. This recent study established that the FNR regulon is quite large and complex and is responsible for controlling a variety of genes that affect the ability to grow effectively under conditions of oxygen limitation. For example, FNR controls the expression of high-oxygen-affinity terminal oxidases and a DMSO reductase that uses DMSO as an alternative electron acceptor under anaerobiosis [6]. The FNR regulon includes not only genes whose expression is directly regulated by FNR, but also genes indirectly regulated by FNR via secondary regulation [6,13]. The latter occurs when FNR directly controls the expression of a transcription factor that subsequently regulates expression of downstream genes, either directly or through additional downstream transcription cascades. Analysis of the E. coli FNR regulon is further complicated by the observation that a number of FNR binding sites, as defined by ChIP-seq, occur near or within genes that do not exhibit a corresponding difference in expression upon deletion of FNR [6]. Thus, there appear to be a number of "silent" FNR binding sites that presumably are involved in the control of gene expression under conditions that have not yet been tested. Additionally, these silent sites may have a role that does not affect transcription but instead provides chromosomal structural integrity. For example, FNR may have a yet-to-be-defined nucleoid-associated role that would affect such processes as chromosome packing [14]. Both RNA-seq and ChIP-seq analyses of the Rba. sphaeroides FnrL regulon have recently been reported [18]. That analysis indicated that FnrL is directly involved in regulating anaerobic respiration, tetrapyrrole biosynthesis, and iron metabolism. However, there does not appear to be direct control of the photosynthetic structural proteins, with overall photosynthetic events negatively regulated by FnrL. In contrast, a detailed analysis of the Rba. capsulatus FnrL regulon has not been undertaken, but it is necessary because there are key differences between the observed phenotypes of FnrL deletions in these species. For example, FnrL mutants in Rba. sphaeroides are unable to grow photosynthetically, while an FnrL deletion mutant of Rba. capsulatus remains viable during photosynthetic growth [5,7,15-17]. To address these differences, we utilize a combination of ChIP-seq and RNA-seq analyses to provide a high-resolution description of the FnrL regulon in Rba. capsulatus. We have identified a large set of genes scattered throughout the genome, involved in diverse metabolic pathways, that are directly and indirectly regulated by FnrL. We present a global picture of the regulatory involvement of FnrL and also provide a detailed depiction of the photosynthetic events controlled by FnrL in Rba. capsulatus. For completeness, we compare the Rba. capsulatus FnrL regulon with the FnrL regulon from Rba. sphaeroides and the FNR regulon in E. coli [6,18].
While the FnrL regulons from the two Rhodobacter species do share similarities, they differ significantly and are unambiguously different from the E. coli FNR regulon. Consequently, there is considerable plasticity in the number and type of genes that constitute members of FNR regulons in different organisms.

Results and discussion

Identifying direct and indirect members of the FnrL regulon using comparative RNA-Seq and ChIP-Seq
We identified members of the FnrL regulon by performing RNA-seq transcriptome analysis of anaerobically (photosynthetically) grown wild-type versus ΔfnrL strains. Over 10 million (M) strand-specific RNA-seq reads were collected per sample from three biological replicates. Differentially expressed genes (DEGs) from the pair-wise comparison of wild-type and ΔfnrL data sets were identified as those exhibiting altered expression with a p-value ≤ 0.05. The motivation behind using a p-value cutoff of ≤ 0.05 was to make our results directly comparable to those of the previously published E. coli and Rba. sphaeroides FNR/FnrL RNA-seq data sets, which used a similar p-value of ≤ 0.05 [6,18]. With a p-value cutoff of ≤ 0.05, we categorized 807 DEGs as members of the Rba. capsulatus FnrL regulon (Fig. 1, Table 1, Additional files 1 and 2: Tables S1 and S2). This number of genes in the Rba. capsulatus FnrL regulon is comparable to that observed for the FnrL regulon from Rba. sphaeroides, which has 917 DEGs with a p-value ≤ 0.05. We also note that several FnrL ChIP-seq peaks containing well-defined FnrL binding consensus sequences are present upstream of DEGs with p-values between 0.05 and 0.1. These genes are noted in the ChIP-seq data set in Additional file 3: Table S3 and suggest that a p-value ≤ 0.05 can at times act as too stringent a filter. Nevertheless, we used the p-value ≤ 0.05 as a cutoff so as to be confident that the genes included in the FnrL regulon are not falsely identified, and to be consistent with similar studies in other species. We determined which DEGs are directly controlled by FnrL by identifying FnrL binding sites in vivo using ChIP-seq analysis. Our ChIP-seq results provided near-complete representation of the entire genome, with significant peaks called at a false discovery rate (FDR) cutoff of 5 % (corresponding to an unadjusted p-value < 1E-5) using the MACS package. To make our results comparable to the data sets available for E. coli and Rba. sphaeroides, we present FDR values with a cutoff of 5 %. As shown in Additional file 1: Table S1, we identified 82 ChIP-seq peaks that were above this significance threshold. These peaks were found primarily within intergenic regions: 47 ChIP sites (57 %) are located in promoter regions, and of these, 28 show corresponding differential expression. A chi-squared test determined that this represents statistical enrichment for promoters, since intergenic regions make up only 9.19 % of the Rba. capsulatus genome. Furthermore, we also identified peaks that were located within a gene next to neighboring genes that exhibited differential gene expression in the ΔfnrL strain (12 cases).
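As an illustration of the promoter-enrichment logic described above, the following minimal sketch re-creates a goodness-of-fit test from the reported numbers (47 of 82 peaks in intergenic/promoter regions versus a 9.19 % genomic expectation); the original analysis may have used a different test formulation:

```python
# A minimal sketch re-creating the promoter-enrichment logic reported above:
# 47 of 82 ChIP-seq peaks fall in intergenic/promoter regions, while such
# regions make up only 9.19 % of the genome.
from scipy import stats

n_peaks = 82
obs_intergenic = 47
frac_intergenic = 0.0919  # genomic fraction of intergenic sequence

observed = [obs_intergenic, n_peaks - obs_intergenic]
expected = [n_peaks * frac_intergenic, n_peaks * (1 - frac_intergenic)]

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # very small p: strong promoter enrichment
```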
We also found 34 called FnrL ChIP-seq peaks that did not exhibit an alteration in neighboring gene expression (Additional file 1: Table S1). It is difficult to reconcile the possibility that the latter category represents false positives, given the excellent enrichment coupled with a clear FnrL recognition sequence; rather, it may signal that FnrL bound at these locations either has long-range expression effects that are not being recognized, or that additional auxiliary regulatory factors supersede the activity of FnrL. Furthermore, since only the photosynthetic state was investigated, these binding sites may be important in gene regulation during other growth states, such as dark anaerobic or microaerobic growth, or under nutrient-limiting conditions. A consensus FnrL recognition sequence was obtained using the MEME server from the called ChIP-seq sites (Fig. 2). The derived sequence, (T/C/A)TGA-N6-TCAA, has second and third positions that were invariably TG, while the 12th and 13th positions were invariably CA. The first position was somewhat variable, with T, C, or A accounting for 37, 34, and 24 %, respectively, whereas the 14th position was an A at a frequency of 90 %. As shown in Fig. 2, the derived FnrL consensus sequence is highly similar to the consensus sequences derived from similar studies in Rba. sphaeroides and E. coli. Variants of the Rba. capsulatus FnrL recognition sequence were identified by MEME in 69 out of 82 called ChIP-seq sites (Additional file 1: Table S1), with potential FnrL binding recognition sequences also found manually in ChIP peaks where no consensus sequence was identified by MEME. These manually identified potential recognition sequences are not listed in Additional file 1: Table S1, since flanking TTG/CAA sequences are common throughout the genome. We also screened the Rba. capsulatus genome for additional FnrL sites with Virtual Footprint, using FnrL recognition sequences identified from ChIP-seq peaks [19]. Our motivation for this stemmed from the fact that technical limitations likely prevent effective in vivo crosslinking of FnrL and/or immunoprecipitation of crosslinked DNA segments, thus prohibiting our ability to identify all sites bound by FnrL. For example, we utilized formaldehyde as a crosslinker, as is typical for ChIP-seq analysis. However, formaldehyde is known to form an ineffective adduct with B-form double-stranded DNA and is thought to be an effective crosslinker only in cases where DNA-binding proteins have perturbed or melted the DNA structure to allow formaldehyde to interact with the amine group of adenine [20]. Therefore, it is conceivable that FnrL bound to some sites may be ineffectively crosslinked with formaldehyde. Consequently, the additional screening for potential FnrL sites using the MEME-identified recognition sequences unsurprisingly resulted in the identification of 332 additional potential FnrL recognition sites, for a total of 414 possible sites in the genome. These additional sites were subsequently analyzed for their location relative to FnrL-dependent differential gene expression. From this analysis, we were able to determine that an additional 77 genes are likely under direct control of FnrL, as evidenced by the presence of a putative FnrL recognition site near a differentially expressed gene (Additional file 4: Table S4). Note that even though some of these additional genes are likely directly regulated by FnrL, they have remained in the "indirectly regulated" category (Additional file 2: Table S2), as additional experimentation will be required to determine which of these genes are indeed under direct control by FnrL.
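To make the screening step concrete, here is a minimal sketch of a consensus-based scan for the derived (T/C/A)TGA-N6-TCAA motif on a toy sequence; Virtual Footprint itself uses position weight matrices rather than a fixed regular expression, so this is only an approximation:

```python
# A minimal sketch (toy sequence, not Rba. capsulatus DNA) of scanning both
# strands of a genome for the derived FnrL consensus (T/C/A)TGA-N6-TCAA.
import re

MOTIF = re.compile(r"[TCA]TGA[ACGT]{6}TCAA")

def revcomp(seq: str) -> str:
    """Reverse complement of an ACGT sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan(seq: str):
    """Yield (forward-strand position, strand, site) for each consensus match."""
    for m in MOTIF.finditer(seq):
        yield m.start(), "+", m.group()
    for m in MOTIF.finditer(revcomp(seq)):
        yield len(seq) - m.end(), "-", m.group()

genome = "GGTTGACCTGATTCCGGTCAATTCC"  # toy sequence containing one forward match
for pos, strand, site in scan(genome):
    print(pos, strand, site)
```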
COG assignment of the FnrL regulon members
To address the role of members of the FnrL regulon in controlling anaerobic physiology, we placed individual genes into different "Clusters of Orthologous Groups" (COGs), as categorized in Additional file 5: Table S5. Inspection of the bar chart in Fig. 3 shows that the largest set of genes directly controlled by FnrL is in the category "Function Unknown", which accounts for 27 % of the genes in this regulon. This underscores that the role of many gene products in microbial physiology remains to be discovered. The largest COG categories that have a defined function are "Amino Acid Transport and Metabolism" and "Energy Production and Conversion". These major COG categories highlight that FnrL has a role in controlling the energy metabolism of these cells. Another major category is "Signal Transduction", in which more genes are repressed than activated. Signal transduction, along with the COG category "Transcription", underscores that FnrL is an overarching global regulator that indirectly regulates a large number of genes.
(Fig. 3 caption: Rba. capsulatus genes clustered based on orthologous groups/functions. All FnrL directly and indirectly controlled genes are clustered based on orthologous groups, with orange representing repressed and green representing activated gene counts. COGs were determined using the eggNOG server.)

FnrL regulates a variety of transcription factors and signal transduction components
Analysis of regulatory proteins that are directly regulated by FnrL shows that MerR (rcc03147) and TetR (rcc03059) transcription factor family members are directly repressed by FnrL (Additional file 4: Table S4). There is also a ChIP-seq-identified FnrL binding site located directly upstream of a BadM/Rrf2 family regulator (Additional file 1: Table S1). FnrL also directly regulates several two-component signal transduction components. For example, FnrL binds upstream of three sensor histidine kinases coded by rcc03452, rcc02198, and RegB2 (rcc01026). RegB2 is divergently transcribed from its cognate response regulator partner RegA2, so FnrL may control expression of both signaling components, with the caveat that no effect of deleting FnrL was observed on RegB2 and RegA2 expression under the assayed growth conditions. The physiological role of RegB2/RegA2 is unknown, but they do share some degree of similarity (28 and 44 %) to the RegB/RegA system, which is a well-characterized redox response system in Rba. capsulatus. A two-component histidine kinase (rcc02198) is also a direct member of the FnrL regulon, with its presumed cognate transcription response regulator (rcc02197) immediately upstream. These regulators are next to a propanediol gene cluster and may have a function in propanediol metabolism. The DNA binding site is located in the intergenic region of rcc02198-rcc02199; thus, only rcc02198 is counted in the direct FnrL regulon. The ChIP-seq peak is located 185 bp upstream of the histidine kinase coding region, with a corresponding 2-fold difference in transcriptional expression (Fig. 4c). We also observed that expression of DorS is induced 4-fold by FnrL, with a ChIP-seq peak present upstream of dorS; DorS is required for activation of the torCAD operon, which codes for the DMSO/TMAO reductase system. It has been reported that a deletion of FnrL leads to a defect in utilizing DMSO as a terminal electron acceptor [16].
Finally, FnrL also directly activates several genes that control the synthesis and/or hydrolysis of c-di-GMP (rcc02540, rcc01110, and rcc00783), which is often involved in regulating motility and biofilm biosynthesis, suggesting that FnrL also has a role in controlling these processes [21].

FnrL is a direct controller of anaerobic respiration and photosynthesis
Cytochrome cbb3 (ccoNOQP) appears to be under direct control of FnrL. A ChIP-seq peak was found containing an FnrL binding sequence 100 bp upstream of the ccoN start codon, with a second recognition site within the ccoN gene (Fig. 4a). RNA-seq indicates that FnrL up-regulates expression of the ccoNOQP operon 1.5-fold under photosynthetic conditions. This is peculiar, since this operon is repressed by several additional redox regulators such as RegA [5,7,22]. One explanation might be that significant FnrL activation of the divergently transcribed neighbor uspA overpowers FnrL repression of ccoNOQP. The second FnrL binding site, located within the ccoN gene, may be used for regulation of the downstream cytochrome biogenesis proteins CcoGHIS, since FnrL represses this second downstream operon. To this point, it is likely that the actual protein content of assembled cytochrome cbb3 is lower even with higher RNA transcription levels of ccoNOQP. Even though the ΔfnrL strain is capable of photosynthetic growth, it appears that FnrL is directly involved in regulating photosynthesis in this species. This conclusion is supported by spectral analysis of the anaerobically grown ΔfnrL mutant strain of Rba. capsulatus, which exhibits a clear reduction in photosystem spectral components relative to that observed with wild-type cells (Fig. 5). A mechanism for this reduction in pigment synthesis is revealed by the presence of an FnrL ChIP-seq peak containing an FnrL recognition sequence in the intergenic region between the divergently transcribed bacteriochlorophyll biosynthesis gene bchF and the bacteriochlorophyll regulator aerR (Fig. 4d). Two potential FnrL binding sites were identified within the bchF-aerR intergenic region, with both sites exhibiting good similarity to the consensus sequence. AerR is a cobalamin-binding antirepressor of the bacteriochlorophyll/carotenoid/light-harvesting repressor CrtJ, and thus the 2-fold activation of AerR expression by FnrL would relieve repression by CrtJ (Fig. 6) [1]. Furthermore, this RNA-seq result is validated by a previous in vivo expression study using lacZ reporter plasmids, which showed that AerR expression increases 2-fold under anaerobic conditions [7,23]. We have also identified FnrL binding sites in the puc and puf light-harvesting and reaction center operons (Additional file 1: Table S1). Specifically, there is an FnrL site that overlaps with the translational start site of pucA, as well as a second site located 250 bp downstream of the start codon of pucC. The expression of pucB and pucDE is up-regulated by FnrL, indicating that one or both of these sites may indeed be involved in activation of puc operon expression.
(Fig. 4 caption: Selected ChIP-seq and RNA-seq signal profiles and statistics. Selected FnrL ChIP-seq signals of (a) the cytochrome cbb3 promoter region; (b) the ABC transporters rcc02659/rcc02660, with low enrichment but one with an FnrL binding site and corresponding differential expression based on RNA-seq; (c) the promoter region of the DMSO histidine kinase for DMSO reductase induction; (d) the bacteriochlorophyll biosynthesis gene bchF and the CrtJ antirepressor aerR for photosynthetic induction.)
There is also a ChIP-seq peak that spans the genetic space of pufLM, with an FnrL binding sequence within pufM (42 bp upstream of the pufX start codon). RNA sequencing shows that pufLM is also up-regulated.

FnrL has a limited but suppressing role in motility
A number of flagellar, chemotaxis, aerotaxis, and gas vesicle genes are either directly or indirectly repressed by FnrL (Additional files 1 and 2: Tables S1 and S2). Structural flagellar genes are located, in large part, in five operons. RNA-seq and ChIP-seq results indicate that FnrL directly represses a 5-gene operon (rcc03522-rcc03525) that codes for a flagellar protein of unknown function, FlbT, FlaF, and FlaA (the flagellin protein needed for synthesis of the flagellar filament). A ChIP-seq peak was observed that spans this operon, with a consensus FnrL binding site located 42 bp upstream of the FlbT start codon (Table 1). In addition to flagellar structural proteins, FnrL also represses cheA1, which codes for a chemotaxis signal-transduction protein, a number of methyl-accepting chemotaxis receptors (rcc00644, rcc02611, rcc02887, rcc02139, and rcc01667), two aerotaxis receptors (rcc02075 and rcc03176), and several gas vesicle proteins (rcc01054 and rcc01056) (Table 1, Additional files 1 and 2: Tables S1 and S2). One possible explanation for FnrL repression of motility may be that there is selective pressure to suppress motility under anaerobic photosynthetic growth conditions, where light-driven energy production is not limiting. Under photosynthetic growth conditions, these metabolically diverse cells are very capable of directly synthesizing all essential cellular metabolites and are likely not as reliant on chemotaxis. Repression of these motility components by FnrL would be relieved in the presence of oxygen, which disrupts the DNA binding activity of FnrL. This would allow the cell to synthesize components needed either to aerotax to areas with increasing oxygen content or to increase their buoyancy so that they can rapidly "float" in an aquatic environment towards an oxygen source.

FnrL's role in anaerobic carbon metabolism
FnrL is not directly involved in glycolysis or gluconeogenesis; however, two steps in glycolysis/gluconeogenesis are indirectly activated, namely phosphopyruvate hydratase (rcc01715) and glyceraldehyde-3-phosphate dehydrogenase (rcc02160) (Additional file 2: Table S2). Of the TCA genes, succinate dehydrogenase is directly activated by FnrL and contains a consensus binding sequence 26 bp upstream of the sdhD start codon and within the sdhC coding region. Succinate dehydrogenase in turn provides reducing power to ubiquinone to drive the cytochrome bc1 (petABC) complex, which is indirectly activated (Fig. 7). Rba. capsulatus contains two forms of RuBisCO, where form I is coded by cbbLS and form II is coded by cbbM. The form I and II cbb operons are regulated by the related LysR-family transcription factors CbbRI and CbbRII, respectively. FnrL does not control these regulators, but deletion of fnrL causes the expression of cbbLS to be reduced.

Regulation of tetrapyrrole biosynthesis and iron transport by FnrL
The common trunk of the tetrapyrrole pathway from δ-aminolevulinic acid to uroporphyrinogen III is used for cobalamin, heme, and bacteriochlorophyll biosynthesis [5,7,24]. There is indirect activation of hemA expression (Additional file 2: Table S2), with possible direct activation of ferrochelatase (hemH) expression via a predicted FnrL binding site that shows good similarity to the FnrL consensus recognition sequence.
While there is no detectable FnrL binding site in the intergenic region between the divergently transcribed hemB and rcc01809 genes, there is a ChIP-seq peak with an FnrL recognition sequence located within rcc01809. This suggests that the promoter for hemB may lie within the rcc01809 coding sequence. Interestingly, FnrL has an indirect role in repressing cobalamin (cob gene) synthesis (Additional file 2: Table S2). We hypothesize that the cell attenuates cobalamin biosynthesis in order to divert intermediates to the biosynthesis of PPIX and bacteriochlorophyll (unpublished observation). We did not find any direct regulation by FnrL of siderophore or iron transport genes. Iron is an essential component of heme as well as the redox-responding cofactor in FnrL, and we were surprised to find such a limited direct role of FnrL in iron transport. We did observe that FnrL indirectly represses a siderophore ABC transporter (rcc02116), a FeoA family protein (rcc02028), a Fe(III)-type ABC transporter (rcc02579), and FeoA2, which codes for a ferrous iron transporter (rcc00091) (Additional file 2: Table S2). One of the most highly enriched (21-fold) sites was found in an uncharacterized set of genes (rcc3401-rcc3402), the first of which is a band 7/SPFH family protein thought to be the core of an ion channel, while the second is a hypothetical protein that shares 24 % identity with a membrane protease found to be important for virulence in P. gingivalis W83 [25]. These two genes are typically found in an operon and appear to form the foundation of an ion channel. The role of this gene cluster in Rba. capsulatus is unclear, but it may be used for acquiring or sensing depleted ions, including iron. Indeed, it has been found that a knockout of the homologous gene cluster in S. oneidensis shows a strong effect on iron metabolism, with the disruption leading to a decrease in intracellular iron, which affected proteins involved in the respiratory chain that utilize iron [26].

Comparison of FNR/FnrL differentially expressed genes in Rba. capsulatus, Rba. sphaeroides, and E. coli
The number of genes that encompass the Rba. capsulatus FnrL regulon (807 genes) is similar to the number of genes reported for the Rba. sphaeroides FnrL regulon (917 genes) [6,18]. However, analysis for congruence shows that only 171 genes are differentially expressed in common (Tables 2 and 3, Additional file 6: Table S6). This means that 78 and 81 % of the genes in the Rba. capsulatus and Rba. sphaeroides FnrL regulons, respectively, are uniquely regulated by FnrL in these photosynthetic species [18]. Among the 171 commonly regulated genes, 52 are convergently activated and 36 are convergently repressed, with 83 exhibiting differences in regards to activation versus repression. Divergent roles of FnrL in these species are also highlighted by the fact that only 9 FnrL ChIP-seq peaks are located in common positions relative to a common downstream gene, out of the 82 FnrL peaks in Rba. capsulatus and 28 FnrL peaks in Rba. sphaeroides (Additional file 7: Table S7).
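The congruence analysis above reduces to set operations over ortholog-mapped DEG lists; a minimal sketch with toy gene names (not real ortholog calls) follows:

```python
# A minimal sketch (toy data) of the congruence analysis described above:
# intersect ortholog-mapped DEG sets from the two species and classify shared
# genes by regulatory direction. Gene names and directions are illustrative.
capsulatus  = {"geneA": "up", "geneB": "up",   "geneC": "up",
               "geneD": "down", "geneE": "down"}
sphaeroides = {"geneA": "up", "geneB": "down", "geneD": "down",
               "geneE": "up",  "geneF": "up"}

shared = capsulatus.keys() & sphaeroides.keys()
convergent_up = {g for g in shared if capsulatus[g] == sphaeroides[g] == "up"}
convergent_down = {g for g in shared if capsulatus[g] == sphaeroides[g] == "down"}
divergent = shared - convergent_up - convergent_down

print(f"shared DEGs: {len(shared)}")                         # cf. the 171 reported
print(f"convergently activated: {sorted(convergent_up)}")    # cf. the 52 reported
print(f"convergently repressed: {sorted(convergent_down)}")  # cf. the 36 reported
print(f"divergently regulated: {sorted(divergent)}")         # cf. the 83 reported
```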
The large number of uniquely regulated genes in these two Rhodobacter species indicates that FnrL has adopted dissimilar regulatory roles. This conclusion is highlighted by the divergent roles of FnrL in regards to the regulation of tetrapyrrole biosynthesis and photosystems. For example, FnrL directly activates hemA in Rba. sphaeroides but not in Rba. capsulatus. The bacteriochlorophyll genes bchM, bchJ, bchO, and bchD are convergently repressed in both species, while bchC, bchE, and bchF are activated in Rba. capsulatus and repressed in Rba. sphaeroides. Furthermore, an FnrL ChIP signal is observed in the light-harvesting complex pufALM operon from Rba. capsulatus, which is positively regulated by FnrL, but not in Rba. sphaeroides, where this operon appears to be negatively regulated by FnrL [18]. This difference also extends to downstream secondary photosystem regulators. Specifically, we found an FnrL ChIP signal in the Rba. capsulatus promoter region of aerR, which encodes a photosystem regulator that functions as an antirepressor of the bch/crt repressor CrtJ [1-5]. In Rba. sphaeroides, control of this downstream regulator by FnrL does not appear to exist [18]. These differences signal that there is significant variation in the role of FnrL in the control of photosystem synthesis between these species. Some notable similarities do, however, exist between these Rhodobacter species. For example, FnrL directly activates DMSO reductase and cbb3 cytochrome oxidases and has direct negative effects on cobalamin biosynthesis in both of these species (Table 2). Furthermore, both organisms use FnrL to indirectly activate cbbLS (Calvin-Benson-Bassham cycle). Searching for convergence of FnrL/FNR regulons across genera, we observed only a handful of examples where the E. coli FNR regulon shows congruence with either of the Rhodobacter regulons. For example, the DMSO reductase system and uspA (universal stress protein) are directly activated by FnrL/FNR in all three species (Additional files 8, 9, and 10: Tables S8, S9, S10) [6]. Similarly, the fadBA (fatty acid metabolism) operon is repressed in all three species, though in all cases this repression appears to be indirect. The E. coli and Rba. capsulatus FNR/FnrL orthologues also directly control nrdD (anaerobic ribonucleoside reductase), but this does not appear to be the case in Rba. sphaeroides. These results clearly demonstrate that there is considerable divergence in the function of FNR/FnrL orthologues from both distant and more closely related bacteria.

Conclusions
This study shows that the genes constituting the FnrL regulon from Rba. capsulatus are remarkably dissimilar from the published FnrL regulon from Rba. sphaeroides. Indeed, only 9 genes in these two photosynthetic species have FnrL binding sites upstream of common targets. This dissimilarity is striking given that these organisms share similar anoxygenic photosynthetic physiologies and therefore presumably face similar challenges in controlling energy balance (redox poise) in response to light, oxygen, and nutrient availability. The fact that these FnrL orthologues exhibit high sequence identity (Fig. 8) and utilize similar target sequences (Fig. 2), and yet control many different target genes, indicates that there is significant evolutionary drift in the location of transcription factor recognition sequences even among related species that occupy similar environmental niches (Fig. 9). It is informative to note the similarities and differences that exist between these Rhodobacter FnrL regulons, as this can highlight areas of conservation that may apply to a broad spectrum of alpha-proteobacteria. For example, iron transport is controlled by FnrL in Rba. sphaeroides but not in Rba. capsulatus (Table 2) [27,28]. Differences also exist for heme synthesis, where FnrL from Rba. sphaeroides directly controls hemA, hemN, and hemZ, while FnrL in Rba. capsulatus is not directly involved in heme biosynthesis, with the possible exception of hemH.
We also note that numerous cobalamin biosynthesis genes are indirectly down-regulated by FnrL in both Rhodobacter species. This may not be an intuitive result, since cobalamin is needed for anaerobic biosynthesis of bacteriochlorophyll, where BchE uses cobalamin as its cofactor [29]. However, both Rhodobacter species undergo an extensive increase in bacteriochlorophyll biosynthesis (>100-fold) when grown anaerobically, and yet both species show FnrL-mediated repression of the cobalamin pathway. In regards to the FNR regulon from E. coli [6], this species does not possess the ability to undergo photosynthesis and relies anaerobically on fermentative growth. Consequently, members of the E. coli FNR regulon are quite divergent from those of the Rba. capsulatus and Rba. sphaeroides FnrL regulons. Indeed, despite the large number of genes that constitute the FNR/FnrL regulons from these species, we found only a few instances where all three organisms have direct orthologues that share the same direct FNR/FnrL control: the DMSO reductase system and the universal stress protein uspA. Although the three species do not share direct cytochrome oxidase orthologues, all three organisms do use FnrL/FNR to control the expression of oxygen-utilizing terminal respiratory chain components [13,16,30,31]. Finally, an example of the metabolic divergence of E. coli from the Rhodobacter species is highlighted by the direct involvement of E. coli FNR in regulating glycolysis, while in the Rhodobacter species FnrL is not directly involved. Logically, in a non-photosynthetic organism such as E. coli, it makes sense to direct phosphoenolpyruvate toward either aerobic or anaerobic growth via an oxygen-sensing transcription factor, while it appears that both Rhodobacter species have adopted alternate modes of glycolytic routing mechanisms [6]. The FnrL orthologues from Rba. capsulatus and Rba. sphaeroides are also indirectly involved in cobalamin repression, while E. coli does not undertake de novo cobalamin biosynthesis and instead must go through a cobinamide intermediate [32]. The divergences observed in the FnrL/FNR regulons from Rba. capsulatus, Rba. sphaeroides, and E. coli highlight the fact that analyses of transcription factor regulons must be experimentally derived on an individual basis, as corollary regulatory events clearly differ between closely related organisms. This divergence can occur even among highly homologous transcription factor orthologues that bind to similar recognition sequences.
(Fig. 8 caption: Comparison of FNR/FnrL homologues. Similarities of FnrL from Rba. capsulatus and Rba. sphaeroides and their differences from FNR of E. coli. The Fe/S motif was taken from the N-terminus (solid) and the HTH domain from the C-terminus (dashed). Red-colored amino acids denote critical residues, green denotes similarities between Rhodobacter species, blue denotes similarities between E. coli and Rhodobacter species, and grey residues are unique to each organism, redundantly represented by '.', '*', and '!'.)

Strains, media, and growth conditions
The Rba. capsulatus parental strain SB1003 and its ΔfnrL derivative have previously been described [16]. These strains were routinely grown in peptone/yeast extract (PY), either in liquid or on agar plates, with liquid media supplemented with MgCl2 and MgSO4 to a final concentration of 2 mM.
Biological replicate strains were first grown semi-aerobically overnight as 5-mL PY cultures in culture tubes at 34°C with shaking at 200 rpm. Subsequently, these cultures were transferred and grown anaerobically in screw-cap vials overnight at 34°C under four 75-W light bulbs, after which the cells were subcultured to an optical density of 0.03 and spectrally monitored until harvesting at an OD660 of ~0.3. The optical density in the anaerobic vials was checked using a Unico 1100 RS spectrophotometer.

RNA isolation, validation, and sequencing (RNA-Seq)
After cultures reached an OD660 of ~0.3, they were harvested by placing them immediately into an ice/water bath, then transferred into 2-mL Eppendorf tubes and centrifuged at 6,000 rpm for 3 min at 4°C. The entire 2-mL cell pellet was then used for extracting total RNA using a Bioline Isolate II RNA extraction kit. Briefly, the bacterial pellet was dissolved in 100 μL of TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8) containing 10 mg/mL lysozyme and incubated for 3 min at room temperature. After isolation of total RNA, DNA was removed by the addition of 1 unit of Turbo DNase and further incubation for 30 min at 37°C. A cleanup step was performed with a Zymogen Direct-zol RNA extraction kit according to the manufacturer's instructions. To check for residual DNA, qRT-PCR of the rpoZ housekeeping gene was performed with and without reverse transcriptase. Total RNA was submitted to the University of Wisconsin-Madison Biotechnology Center, where it was verified for purity and integrity with a NanoDrop 2000 spectrophotometer and an Agilent 2100 BioAnalyzer, respectively. Samples that met Illumina sample input guidelines were prepared according to the TruSeq® Stranded Total RNA Sample Preparation Guide (15031048 E) using the Illumina TruSeq® Stranded Total RNA kit (Illumina Inc., San Diego, California, USA) with minor modifications. For each library preparation, 2 μg of total RNA was depleted of ribosomal RNA using the EpiCentre RiboZero™ rRNA Removal (Bacteria) kit (EpiCentre Inc., Madison, WI, USA) as directed. Subsequently, each rRNA-depleted sample was fragmented using divalent cations under elevated temperature. The fragmented RNA was synthesized into first-strand cDNA using SuperScript II Reverse Transcriptase (Invitrogen, Carlsbad, California, USA) combined with Actinomycin D and random primers, followed by second-strand synthesis using Second Strand Marking Master Mix. The blunt-ended double-stranded cDNA was purified with paramagnetic beads (Agencourt AMPure XP beads; Beckman Coulter, Indianapolis, IN, USA). The cDNA products were incubated with A-Tailing Mix to add an 'A' base (adenine) to the 3′ ends of the blunt DNA fragments, followed by ligation to Illumina adapters, which have a single 'T' base (thymine) overhang at their 3′ ends. The adapter-ligated products were purified with paramagnetic beads.
(Fig. 9 caption: Phylogenetic relatedness of Rhodobacter species. The evolutionary history was inferred using the Neighbor-Joining method [42]. The bootstrap consensus tree inferred from 1000 replicates [43] is taken to represent the evolutionary history of the taxa analyzed [43]. The evolutionary distances were computed using the Maximum Composite Likelihood method [44] and are in units of the number of base substitutions per site. The analysis involved 19 16S rRNA sequences. All positions containing gaps and missing data were eliminated. There were a total of 1325 positions in the final dataset. Evolutionary analyses were conducted in MEGA6 [45].)
Adapter-ligated DNA was then amplified in a linker-mediated PCR (LM-PCR) reaction for 10 cycles using the PCR Master Mix and PCR Primer Cocktail and purified with paramagnetic beads. Quality and quantity of the finished libraries were assessed using an Agilent DNA1000 chip (Agilent Technologies, Inc., Santa Clara, CA, USA) and a Qubit® dsDNA HS Assay Kit (Invitrogen, Carlsbad, California, USA), respectively, and standardized to 2 μM. Cluster generation was performed using standard Cluster Kits (v3) and the Illumina Cluster Station. Single-end 100-bp sequencing was performed using standard SBS chemistry (v3) on an Illumina HiSeq2000 sequencer. Images were analyzed using the standard Illumina Pipeline, version 1.8.2.

Construction and sequencing of ChIP libraries (ChIP-Seq)
A plasmid expressing an FnrL-3xFLAG tag under an isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible lac promoter was constructed with the reverse primer ctaGCTAGCttaCTTGTCATCGTCATCCTTGTAGTCGATGTCATGATCTTTATAATCACCGTCATGGTCTTTGTAGTCggatc, containing an NheI restriction site, and the forward primer acatGCATGCGGTTCATCCCCGATTGCGCCAG, containing an SphI restriction site, and cloned into pSRK (a complementation plasmid containing a gentamycin resistance marker) to produce pSRK-FnrL. This expression plasmid is described in detail in the following reference [33]. pSRK-FnrL was subsequently mated into Rba. capsulatus using the S17-1 E. coli mating strain, with complementation checked by growing cells anaerobically with 50 mM DMSO in the presence of 1.0 mM IPTG. FnrL mutants fail to utilize DMSO as a terminal electron acceptor owing to their inability to express sufficient amounts of DMSO reductase [16] and also have reduced levels of photopigments (Fig. 5). The FnrL deletion strain complemented with pSRK-FnrL was subsequently able to restore growth on DMSO and to restore wild-type photopigment levels (Fig. 5) identical to those of wild-type cells. Photosynthetically grown FnrL-3xFLAG complemented cells were treated with 37 % formaldehyde to a final concentration of 1 % for 15 min at room temperature. Crosslinking with formaldehyde was quenched by the addition of Tris-HCl, pH 8.2, to a final concentration of 500 mM for 5 min at room temperature, after which the cells were harvested by centrifugation. The cells were washed with 40 mL of TBS buffer and resuspended in 4 mL of buffer composed of 50 mM Tris, pH 7.5, 150 mM NaCl, 1 mM EDTA, and 1 % Triton X-100. After disruption by French press lysis, the DNA was sheared three times by sonication using a small-tip sonicator at 15-W power output. Protein bound to DNA was then reverse-crosslinked by heating to 65°C overnight, with concurrent removal of contaminating RNA by the addition of 1 μg of RNase A per 100-μL sample. Immunoprecipitation was performed according to the manufacturer's instructions using ANTI-FLAG® M2 Affinity Gel (Cat. Number A2220). Purified immunoprecipitated and input DNA was submitted to the University of Wisconsin-Madison Biotechnology Center for library construction and sequence analysis. DNA concentration and sizing were verified using the Qubit® dsDNA HS Assay Kit (Invitrogen, Carlsbad, California, USA) and an Agilent DNA HS chip (Agilent Technologies, Inc., Santa Clara, CA, USA), respectively. Samples that met the Illumina sample input guidelines were prepared according to the TruSeq® ChIP Sample Preparation kit (Illumina Inc., San Diego, California, USA) with minor modifications. Libraries were size-selected for an average size of 350 bp using SPRI-based bead selection.
Quality and quantity of the finished libraries were assessed using an Agilent DNA1000 chip and a Qubit® dsDNA HS Assay Kit, respectively, with the DNA concentration standardized to 2 μM. Cluster generation was performed using standard Cluster Kits (v3) and the Illumina Cluster Station. Single-end 100-bp sequencing was performed using standard SBS chemistry (v3) on an Illumina HiSeq2000 sequencer. Images were analyzed using the standard Illumina Pipeline, version 1.8.2.

Data pre-processing, computer software, and data analysis for RNA sequencing and ChIP sequencing
All computations were performed on a custom-built computer running Ubuntu 13.10, equipped with an Asus Z9PE-D8 WS motherboard, 2x Intel Xeon E5-2630 V2 CPUs, and 128 GB of DDR3-1600 RAM. Each fastq file was checked for quality using FastQC and trimmed of low-quality sequences using the Trimmomatic program with a sliding window of 5:25 and a minimum length of 40. The reads were aligned to the genome using Bowtie2 [34] and mapped to individual genes using HTSeq-count [35]. Raw counts generated by the HTSeq-count program were used to identify differentially expressed genes with the DESeq2 package in R [36,37]. Default parameters, with the noted exceptions, were used for the Trimmomatic, Bowtie2, and HTSeq-count programs. For processing ChIP-seq data, a pipeline was used consisting of Trimmomatic (with a sliding window of 5:25 and a minimum length of 40) to trim poor-quality reads, Bowtie2 to align the reads to the SB1003 reference genome, MACS to determine significantly enriched sites, and MEME for binding sequence extraction using default parameters [38]. All packages are available for download via GitHub and/or Bioconductor [33-35,38-40]. Raw sequence data from our RNA-seq and ChIP-seq analyses can be accessed via the NCBI Sequence Read Archive under accession number PRJNA274121.
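For orientation, the read-processing steps above can be chained as in the following minimal sketch; file and index names are hypothetical, intermediate steps such as SAM-to-BAM conversion are omitted, and exact tool versions, genome size, and strandedness settings would need to be confirmed against the actual data:

```python
# A minimal sketch (hypothetical file names, single-end reads) of the
# preprocessing pipeline described above, driven from Python via subprocess.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if any step fails

# 1. Quality-trim reads (sliding window 5:25, minimum length 40).
run(["java", "-jar", "trimmomatic.jar", "SE", "reads.fastq", "trimmed.fastq",
     "SLIDINGWINDOW:5:25", "MINLEN:40"])

# 2. Align to the SB1003 reference genome with Bowtie2.
run(["bowtie2", "-x", "sb1003_index", "-U", "trimmed.fastq", "-S", "aligned.sam"])

# 3a. RNA-seq branch: per-gene read counts (written to stdout) for DESeq2.
run(["htseq-count", "-s", "reverse", "aligned.sam", "sb1003.gff"])

# 3b. ChIP-seq branch: peak calling (FnrL IP vs. input control);
#     -g is an approximate effective genome size, to be adjusted for the assembly.
run(["macs2", "callpeak", "-t", "fnrl_ip.bam", "-c", "input.bam",
     "-g", "3.7e6", "-n", "fnrl"])
```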
Differing Modes of Biotic Connectivity within Freshwater Ecosystem Mosaics

Abstract
We describe a collection of aquatic and wetland habitats in an inland landscape, and their occurrence within a terrestrial matrix, as a "freshwater ecosystem mosaic" (FEM). Aquatic and wetland habitats in any FEM can vary widely, from permanently ponded lakes, to ephemerally ponded wetlands, to groundwater-fed springs, to flowing rivers and streams. The terrestrial matrix can also vary, including in its influence on flows of energy, materials, and organisms among ecosystems. Biota occurring in a specific region are adapted to the unique opportunities and challenges presented by spatial and temporal patterns of habitat types inherent to each FEM. To persist in any given landscape, most species move to recolonize habitats and maintain mixtures of genetic materials. Species also connect habitats through time if they possess the needed morphological, physiological, or behavioral traits to persist in a habitat through periods of unfavorable environmental conditions. By examining key spatial and temporal patterns underlying FEMs, and species-specific adaptations to these patterns, a better understanding of the structural and functional connectivity of a landscape can be obtained. Fully including aquatic, wetland, and terrestrial habitats in FEMs facilitates adoption of the next generation of individual-based models that integrate the principles of population, community, and ecosystem ecology.

INTRODUCTION
Ecologists have postulated that understanding movements and interactions of individuals within an ecosystem context is necessary to unify ecological theory (Huston et al. 1988). However, progress integrating individual-based approaches (e.g., DeAngelis and Grimm 2014) with ecosystem-based approaches (e.g., Walters et al. 2000) in ecology has been slow (Grimm et al. 2017). The next generation of individual-based models (Grimm et al. 2003) requires simulation of not just flows of individual organisms across a landscape from a population- and community-ecology perspective, but also flows of energy and materials from an ecosystem-ecology perspective. Consideration of aquatic and wetland habitats in inland landscapes as pieces of a complete mosaic rather than as islands separated by inhospitable environments is an approach that meshes directly with these next-generation models designed to integrate the principles of population, community, and ecosystem ecology. Here we use the term "connectivity" to describe linkages among habitat patches through the flow of energy, materials, or organisms. Connectivity can be divided into structural connectivity, which refers to the spatial distribution and arrangement of habitat patches in a landscape, and functional connectivity, which refers to the actual movement of energy, materials, or organisms across the landscape (Baudry and Merriam 1988). Thus, structural connectivity describes the underlying physical foundation, or matrix, upon which functional connectivity can occur. Temporal connectivity refers to how habitat patches can be connected through time. For example, if the environment of a patch varies between favorable and unfavorable conditions, an organism might connect the patch through time if it has a mechanism to persist through periods of unfavorable conditions. Therefore, temporal changes in environmental conditions must be considered when identifying the overall structural connectivity of a landscape.
Likewise, physiological, morphological, and behavioral traits that allow individual organisms to persist during periods of unfavorable conditions must be considered when determining the overall functional connectivity of that same landscape. The movement of organisms among aquatic, wetland, and terrestrial habitats, and the flows of energy and materials associated with these movements, can be modeled and quantified through explicit consideration of interactions between structure and function. Additionally, flows of materials and energy between habitats (e.g., terrestrial to wetland, wetland to terrestrial, wetland to aquatic) and the influence of these flows on individual organisms and communities can be incorporated. Only when all components of a landscape are considered as integrated and inseparable parts of a mosaic (sensu Sayer 2014) can important influences of ecosystem flows be fully considered and incorporated into models that link population, community, and ecosystem theory. To explore the relationship between structural and functional connectivity of landscapes and to facilitate the integration of population, community, and ecosystem models, we set out to (1) define the concept of freshwater ecosystem mosaics (FEM: pronounced "fem"); (2) link the concept to existing ecological frameworks; (3) demonstrate the usefulness of the FEM concept for assessing structural and functional components of biotic connectivity; and (4) illustrate, through specific examples, a range of different FEM types to which we hypothesize species are likely to be differently adapted via specific physiological, morphological, and behavioral traits.

FEMS DEFINED
Most inland (i.e., non-oceanic or coastal) landscapes consist of some combination of aquatic (both lotic and lentic) and terrestrial habitats, with wetlands as transitional areas (Cowardin et al. 1979), although wetlands also occur as distinct landscape features without transition to either an aquatic or a terrestrial habitat, and as transitions between inland and coastal ecosystems. Therefore, a perspective that adopts the view of an inland landscape as a mosaic (e.g., Wiens et al. 1993) consisting of aquatic and wetland habitats interspersed within a terrestrial matrix provides a foundation upon which to describe a landscape and the underlying structural connectivity to which biota must adapt to persist. For inland landscapes, we describe a collection of aquatic and wetland habitats, and their variable spatial and temporal patterns of occurrence within a terrestrial matrix, as a "FEM." When viewing an art mosaic of individual tiles of different colors and shapes set within a mortar matrix, the full pattern of tile and mortar must be considered to see a complete vision of the artist's work and appreciate it in its entirety. Similarly, the size, type, and spatial arrangement of aquatic and wetland habitats must be considered in conjunction with the terrestrial matrix in which they exist in order to get a complete vision of any FEM (Figure 1). However, unlike an art mosaic, FEMs are not set in stone: they are constantly shifting over time, and those shifts have repercussions for the movement of organisms. Additionally, humans undoubtedly can alter FEMs, and the need to consider the entire mosaic holds in both natural and human-altered landscapes. In fact, including the effects of human alterations to the tiles (aquatic and wetland habitats) and the mortar (terrestrial landscapes) can greatly influence the structural and functional connectivity of the FEM.
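As a minimal sketch of how the structural/functional distinction might be modeled, the toy example below treats habitat patches as graph nodes and asks which patches each species can functionally reach given species-specific limits on crossing the terrestrial matrix; all patch names, distances, and dispersal limits are illustrative:

```python
# A minimal sketch (toy landscape): one structural graph of habitat patches
# yields different functional connectivity for species that differ in their
# ability to cross the terrestrial matrix.
from collections import deque

# Structural connectivity: patch pairs and the terrestrial distance (km) between them.
edges = {("wetland_A", "lake_B"): 0.5,
         ("lake_B", "stream_C"): 2.0,
         ("wetland_A", "wetland_D"): 4.0}

# Functional connectivity: maximum overland distance each species can cross.
dispersal_limit = {"fish": 0.0, "amphibian": 1.0, "waterfowl": 10.0}

def reachable(start, max_dist):
    """Patches reachable from `start` using only crossable edges (BFS)."""
    adj = {}
    for (a, b), d in edges.items():
        if d <= max_dist:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for species, limit in dispersal_limit.items():
    print(species, sorted(reachable("wetland_A", limit)))
# Fish stay in wetland_A; amphibians also reach lake_B; waterfowl reach all patches.
```

In this toy landscape, the same structural graph produces three different functional pictures, mirroring the species-specific barrier effects discussed next.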
Often, the space between "tiles" is viewed by ecologists as an inhospitable environment across which organisms must cross. This is the view promulgated within island biogeography theory and is often the case for true islands (MacArthur and Wilson 1967). However, inland aquatic and wetland habitats are only partially or superficially analogous to oceanic islands (e.g., Smith and Green 2005). The terrestrial environment, either natural or altered, may or may not be an inhospitable habitat that acts as a barrier to movement, and the differences in its effects on movement are often seasonal and species-specific. Just as with habitats, species occur along a continuous gradient from fully aquatic to fully terrestrial (e.g., Allen et al. 2014). Additionally, the flows of energy and materials between aquatic habitats and the terrestrial matrix must be considered. When a landscape is viewed as a mosaic, the role that the surrounding terrestrial matrix plays as another (or multiple) habitat type(s) becomes apparent as an integral part of the complete picture of species-species, species-landscape and ecosystem interactions. As an example, to fishes, the terrestrial environment separating two aquatic habitats is, under most circumstances, an inhospitable barrier. In contrast, an amphibian might require that same terrestrial environment as essential foraging habitat. A mallard, on the other hand, might not directly interact with the terrestrial matrix at all as it flies overhead but may rely on it later to provide essential nesting habitat. This view of FEMs as landscapes within which all habitat types, including those of the terrestrial matrix, are potentially connected provides context for not just flows of organisms, but also flows of abiotic energy, inorganic nutrients, and detritus among various ecosystem types. These flows can come directly from the movement of organisms or through abiotic processes such as runoff, erosion, and nutrient transport. Loreau et al. (2003) defined a meta-ecosystem as "a set of ecosystems connected by spatial flows of energy, materials and organisms across ecosystem boundaries." The meta-ecosystem concept moves us away from considering aquatic and wetland habitats as metaphorical "islands" separated by inhospitable terrestrial environments. Instead, the meta-ecosystem concept necessitates consideration of functional interactions between the "islands" and the matrix in which they exist when considering spatial flows among habitats. The FEMs we define here are fully compatible with the meta-ecosystem concept and facilitate its incorporation into connectivity assessments and combined population, community, and ecosystem modeling efforts. MOVEMENTS WITHIN FEMS Movement of organisms among aquatic, wetland, and terrestrial habitats within a FEM occurs via diverse means in multiple directions, including longitudinal (up-and downgradient), lateral (across), vertical (surface, subsurface), and temporal dimensions (Ward 1997). Therefore, biotic connectivity within a FEM includes not only movements through the water column and over intervening land surfaces but also through the atmosphere (e.g., Beisner et al. 2006) and through underlying groundwater and sediments. In dry seasons or droughts, for example, many organisms typically found in surface waters survive by moving into underlying sediments within wetlands or hyporheic zones in stream networks, which provide refuge and alternative pathways for movement (e.g., Sedell et al. 1990;DiSalvo and Haynes 2015). 
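The fish/amphibian/mallard example above amounts to saying that the same terrestrial matrix has species-specific permeability. A minimal sketch (land-cover classes and resistance values are invented for illustration) makes the point computable:

```python
# Hypothetical matrix-resistance sketch: the same terrestrial transect imposes
# very different movement costs on different taxa (values are illustrative).
transect = ["grassland", "cropland", "grassland"]  # cover between two ponds

resistance = {
    "fish":      {"grassland": float("inf"), "cropland": float("inf")},
    "amphibian": {"grassland": 1.0, "cropland": 4.0},
    "mallard":   {"grassland": 0.0, "cropland": 0.0},  # flies over the matrix
}

for taxon, costs in resistance.items():
    total = sum(costs[cover] for cover in transect)
    passable = total != float("inf")
    print(f"{taxon}: crossing cost = {total}, matrix passable: {passable}")
```

The design choice worth noting is that the matrix is parameterized per species rather than per landscape, which is exactly the shift the FEM perspective asks for.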
In many cases, the movement of water is unidirectional over and through the landscape (Malard et al. 2002; seiche- and tidal-influenced waters, and wetland-to-wetland and stream-to-wetland flows are exceptions); however, many organisms are capable of multidirectional movement along longitudinal, lateral, and vertical routes over their life spans. Riverine stonefly larvae, for example, can move over long distances through floodplain aquifers (~2 km laterally and 10 m vertically from surface channels, Stanford and Gaufin 1974; Stanford and Ward 1988) to take advantage of nutrient-rich hyporheic habitats. In many landscapes, surface waters are highly dynamic (e.g., Lang et al. 2012; Vanderhoof et al. 2016) such that availability and spatial arrangement of habitats and resources can vary dramatically over an individual's lifetime or across multiple generations (Roshier and Reid 2003). Flows of energy and materials greatly affect the condition of habitats and resources encountered by moving organisms. Changes in this availability, arrangement, and condition of habitats influence movements of biota both directly and indirectly by affecting flows of resources, environmental cues and stressors (e.g., Gaston et al. 2013; Shannon et al. 2016). Vannote et al. (1980) introduced the River Continuum Concept to describe the continuous gradient of physical conditions along the length of a river; the predictable effects of those conditions on biological communities; and the consistent patterns of organic matter loading, transport, utilization, and storage that result. Key to the River Continuum Concept is the realization that patterns of inputs from habitats outside the river environment are needed to explain longitudinal patterns in stream habitat conditions. Similarly, Euliss et al. (2004) introduced the Wetland Continuum Concept to describe the influence of atmospheric water and groundwater flows on differing patterns of community structure in wetland ecosystems. While the focus on flow types differs (primarily surface water in the River Continuum vs. surface, atmospheric, and groundwater flows in the Wetland Continuum), the concepts can be viewed as working in tandem, with flows that emanate from wetlands ultimately influencing the concentration and net flux of chemicals, including nutrients, in flows that extend to rivers and streams.

CONTINUOUS GRADIENTS WITHIN FEMS The FEM concept provides a framework within which a more unifying approach that links the River Continuum Concept, the Wetland Continuum Concept, and other important conceptual models (e.g., Kratz et al. 1997) can be developed. A FEM view of a landscape requires acknowledgment of a complete picture of flows that includes flows to and from terrestrial habitats in addition to aquatic and wetland habitats. Both the River Continuum and Wetland Continuum Concepts implicitly include flows to and from surrounding, often terrestrial, habitats. However, interactions between wetlands, especially nonriverine wetlands, and water, energy, and material flows in rivers have been poorly represented (USEPA 2015). Consideration of the landscape as a mosaic of aquatic and wetland habitats bound together by a terrestrial matrix incorporates the often-overlooked interactions among differing freshwater and terrestrial ecosystem types.

STRUCTURAL CONNECTIVITY OF FEMS Water runoff from the terrestrial matrix during rain events or snowmelt often is a major input of water into the aquatic and wetland habitats of a FEM.
This surface-water inflow also transports nutrients, energy, and organic and inorganic particles from surrounding terrestrial lands. Additionally, in many FEMs, the composition of soils and bedrock influences groundwater inputs to and outflows from aquatic and wetland ecosystems (Winter 2001). Both surface and groundwater flows work in conjunction with biogeochemical and physical processes (Cohen et al. 2016) to help form the abiotic environment to which biota must be adapted to persist in a particular habitat (Southwood 1988; Euliss et al. 2004). However, the number, size, shape, arrangement, and hydrology of aquatic and wetland habitats on the landscape are parameters of the structural connectivity to which organisms must also be adapted. These factors determine the distance to another favorable environment, setting a minimum threshold distance beyond which individuals must be able to move to connect populations, communities, habitats, and ecosystems. The surface-water permanence of a habitat can place a temporal limit on how often movements must be made; for example, if a stream or wetland habitat dries seasonally, organisms living there must either move seasonally or have a life-history form (e.g., desiccation-resistant eggs) that allows them to survive periods without ponded or flowing surface water. For species restricted to aquatic environments (e.g., fishes), a spatially continuous surface-water connection is typically required for unassisted movement among habitats. For organisms that can traverse terrestrial lands (e.g., amphibians, mammals, many insects), the distance they must be capable of traveling may be a straight-line distance between favorable habitats; however, often dispersal requires movement over longer distances along favorable pathways (e.g., along riparian corridors) or through risky environments. While the number, size, shape, arrangement, and permanence of landscape features set the underlying structural connectivity in a FEM, the diverse adaptations of organisms in terms of morphological, physiological, and behavioral traits that influence movement abilities ultimately determine biotic movement, and thus the functional connectivity. Therefore, to identify the functional biotic connectivity of a landscape, the underlying structural connectivity must first be understood; adoption of a FEM perspective facilitates development of such an understanding.

BIOTIC ADAPTATIONS: TRANSLATING STRUCTURAL CONNECTIVITY INTO FUNCTIONAL CONNECTIVITY "Ecosystem and community ecology can only be fully and mechanistically integrated when we revert to their building blocks and acknowledge individual organisms and their traits and adaptive behaviors" (Grimm et al. 2017). Freshwater ecosystems in any given landscape can be connected to and influence each other in diverse ways (USEPA 2015). One key way is through the movement of organisms among ecosystems (Schofield et al. 2018). In many cases, individuals must move for populations to persist. This movement can take many forms, both active and passive, and includes seeds dispersed by the wind, fish swimming among aquatic systems, amphibians traveling through uplands, birds flying along migratory pathways, and freshwater invertebrates clinging to the feathers of ducks. As put by Tiner (2003), "…most, if not all, wetland scientists would agree that there is no such thing as an isolated wetland from an ecological standpoint."
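The structural parameters described above, patch number, spatial arrangement, and surface-water permanence, translate into concrete movement thresholds for each patch. A minimal sketch (coordinates and permanence labels are invented for illustration):

```python
import math

# Hypothetical patch inventory: coordinates (km) and surface-water permanence.
patches = {
    "P1": {"xy": (0.0, 0.0), "permanence": "permanent"},
    "P2": {"xy": (0.8, 0.3), "permanence": "seasonal"},
    "P3": {"xy": (3.5, 1.0), "permanence": "permanent"},
    "P4": {"xy": (4.0, 0.2), "permanence": "ephemeral"},
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Nearest-neighbor distance sets the minimum movement threshold a disperser
# must exceed to reach another habitat from each patch.
for name, p in patches.items():
    others = [(dist(p["xy"], q["xy"]), m) for m, q in patches.items() if m != name]
    d, nearest = min(others)
    print(f"{name} ({p['permanence']}): nearest habitat {nearest} at {d:.2f} km")

# A seasonal or ephemeral patch adds the temporal constraint: residents must
# either move before drying or carry a desiccation-resistant life stage.
dry_patches = [n for n, p in patches.items() if p["permanence"] != "permanent"]
print("patches requiring seasonal movement or resistance traits:", dry_patches)
```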
Thus, to truly understand any ecosystem, the movement needs and abilities, and therefore the traits and adaptive behaviors, of the individual species within these systems must be considered (Table 1). The arrangement and variability of aquatic and wetland habitats within a FEM will influence the traits and behaviors of the organisms that persist there. In a FEM where aquatic or wetland habitats are separated by short distances, traits needed to traverse intervening terrestrial habitats may differ from traits needed for a species to persist in a FEM in which favorable habitats are situated far apart. Similarly, in FEMs where surface water is ephemeral, the need to move will be different than in FEMs with sustained surface-water permanence. Flowing-water habitats often offer linear pathways through which species can move, but typically these systems can also be connected to other aquatic or wetland habitats in a FEM through movements of biota across the terrestrial matrix. The community-wide composition of traits that facilitate movement evolves as a result of interactions between species and the environments in which they occur (Loreau 2010). Therefore, more heterogeneous habitats theoretically could support more diverse species pools, representing a greater diversity of movement traits and capabilities. Additionally, communities in landscapes with greater degrees of structural connectivity could also increase biotic diversity simply by allowing for multiple spatially based classes of dispersers. For example, in a landscape with a high density of aquatic and wetland habitats, species capable of short-distance movements would flourish; however, species capable of long-distance travel (through water, overland or aerially) could also occur. But if required habitats are widely dispersed or highly ephemeral, species that can move only short distances and lack adaptations to dry conditions would likely be excluded from resultant communities, or not persist for long. Decreasing structural connectivity has a filtering effect (Figure 2), limiting communities to some hierarchically reduced sets of traits as the degree of habitat connectedness decreases (Poff 1997). Therefore, understanding the traits of species that exist in any FEM will indicate the degree of habitat connectivity within it, integrated over time. However, the influence of energy and material flows on the condition of the habitats, and ultimately the functional connectivity of a landscape, cannot be ignored.

EXAMPLE FEMS While the structural connectivity of a region from a biotic perspective undoubtedly incorporates factors beyond an area's hydrologic landscape, the hydrologic landscape principles and maps of Winter (2001) and Wolock et al. (2004) provide a useful classification framework for exploring biotic structural connectivity in the absence of mapping products derived specifically to delineate FEMs. Wolock et al. (2004) identified 20 noncontiguous hydrologic landscape regions (HLRs) in the continental United States (U.S.) based on similarities of multiple land-surface-form, geologic, and climate characteristics. Here we discuss five of these HLRs (Figure 3), the structural characteristics that we expect to occur in each, and the functional traits and biotic adaptations that would be needed by biota to convert structural connectivity of the mosaic into functional connectivity required for population persistence.
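Before turning to the examples, the landscape-filter idea attributed to Poff (1997) and sketched in Figure 2 can be made concrete. The following is a hypothetical toy filter (species pool, strategies, and thresholds are all invented): each step of decreasing structural connectivity knocks out another class of movement strategy.

```python
# Hypothetical landscape-filter sketch (after the filtering idea in Figure 2):
# decreasing structural connectivity progressively excludes movement strategies.
species_pool = {
    "fish":         {"strategy": "aquatic_only", "range_km": 50.0,  "resistant": False},
    "salamander":   {"strategy": "overland",     "range_km": 0.5,   "resistant": False},
    "duck":         {"strategy": "aerial",       "range_km": 800.0, "resistant": False},
    "fairy_shrimp": {"strategy": "passive",      "range_km": 0.0,   "resistant": True},
}

def filter_community(pool, surface_water_connected, gap_km, dries_out):
    community = []
    for name, t in pool.items():
        if t["strategy"] == "aquatic_only" and not surface_water_connected:
            continue  # no continuous surface-water pathway
        if t["strategy"] in ("overland", "aerial") and t["range_km"] < gap_km:
            continue  # cannot bridge the inter-habitat distance
        if dries_out and not (t["resistant"] or t["range_km"] >= gap_km):
            continue  # cannot escape or outlast drying
        community.append(name)
    return community

# Two contrasting FEMs (parameters invented for illustration):
print(filter_community(species_pool, surface_water_connected=True,  gap_km=0.2, dries_out=False))
print(filter_community(species_pool, surface_water_connected=False, gap_km=5.0, dries_out=True))
```

The second call returns only the long-distance aerial disperser and the drought-resistant passive disperser, mirroring the reduced trait sets expected in the arid HLRs discussed next.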
The five HLRs we have selected as examples are: desert washes with a low average density of wetlands and seasonal springs (HLR 14; Figure 3, inset 5); arid playa landscapes with a moderate average wetland density and low stream density (HLR 10; Figure 3, inset 4); semiarid prairie potholes with a high average wetland density and low stream density (HLR 8; Figure 3, inset 3); subhumid plains with a high average density of streams and riparian wetlands and high influence by subsurface flows (HLR 1; Figure 3, inset 1); and humid plains with a high average density of streams and wetlands (HLR 2; Figure 3, inset 2). Until FEMs are more explicitly mapped, the HLRs of Wolock et al. (2004), and other HLR classifications (e.g., Wigington et al. 2013), can facilitate exploration of physiological, morphological, and behavioral adaptations of organisms to the unique structural connectivity within differing landscapes. Wolock et al. (2004) described HLR 10 as "arid plateaus with impermeable soils and permeable bedrock." The hydrology of this area is dominated by overland flows and deep groundwater. While these groundwater flows support deep aquifers (e.g., the Ogallala Aquifer), overland flows on impermeable soils result in the development of the numerous playa lakes and wetlands that define this region. Thus, this HLR is typified by aquatic and wetland habitats that, as a result of the region's arid climate, often go dry and are separated by relatively large distances. Consequently, biota without a mechanism to survive drying would be minimal or absent in a FEM within HLR 10 (although these taxa could persist in the few streams or rivers that pass through the HLR; Anderson and Smith 2004). HLR 8 is described as semiarid plains. Overland flow (primarily from snowmelt) drives the hydrology of the numerous wetlands on relatively impermeable glacial till within this HLR. The wetter climate of HLR 8 contributes to a greater density of wetlands on the landscape than under the arid climate of HLR 10, but flowing water and permanent lakes are largely absent in the region. Therefore, fishes were not historically a significant component of the region's biota (McLean et al. 2016). The FEMs in this region are amenable to species capable of short-distance overland and aerial movements (e.g., painted turtles [Chrysemys picta], Griffin 2007; some midges [Chironomidae], Bataille and Baldassarre 1993), while still supporting long-distance dispersers (e.g., waterfowl) and some species with resistance traits due to longer-term periodic drought (e.g., some cladocerans [Gleason et al. 2004]). Additionally, species that may outwardly appear to be capable of only short-distance movements (e.g., many seeds, small freshwater crustaceans) may be capable of long-distance dispersal if they have adaptive traits allowing for ecto- or endozoochory or phoresy (i.e., "hitch-hiking" on or within dispersing hosts).

FIGURE 2. Structural connectivity acts as a landscape filter (sensu Poff 1997), supporting reduced sets of movement abilities and associated traits (represented by colored arrows) in biotic communities as the degree of structural connectivity decreases. Endemism is a special case in which lack of connectedness results in unique species through adaptation to local conditions.

FIGURE 3. Hydrologic landscape regions (HLRs) of Wolock et al. (2004). Insets 1, 2, 3, 4, and 5 provide enlarged views of areas within HLRs 1, 2, 8, 10, and 14, respectively. In the insets, linear features are stream networks, green features are wetlands, blue features are deep-water habitats, and white areas are upland habitats. Note: These static images represent a snapshot of dynamic landscapes and not the full range of structural connectivity encompassed at different points in time.
In contrast to HLR 8, HLR 1 is underlain by permeable soils and bedrock, and aquatic and wetland ecosystems are primarily driven by groundwater flows rather than overland runoff. In the mosaics of HLR 1, we find a larger number of flowing-water systems supported by groundwater inputs. There is also a greater number of riparian wetlands supported by bidirectional connections with stream networks, and upland embedded wetlands, also influenced by groundwater inputs through the permeable substrates. Biotic communities within FEMs characteristic of HLR 1 would be much more likely to contain fishes and other organisms with adaptive traits that facilitate movements along stream channels and riparian corridors. Conversely, organisms exhibiting drought-resistant traits would be less prominent here than in either HLR 8 or 10. In areas designated as HLR 2, the climate is very wet, and an even greater abundance of flowing-water and permanent-water systems exists in this region. Therefore, the landscape is conducive to dominance by taxa with physiological, morphological, and behavioral traits facilitating linear movements along continuous aquatic or wetland habitats, such as fishes and many stream-dwelling amphibians. Also, generalists that can live in both streams and wetlands are likely to occur given the conditions present in HLR 2. At the opposite extreme of HLR 2 are the arid plateaus with permeable soils and bedrock found in HLR 14. A FEM within HLR 14 is expected to be dominated by the terrestrial matrix. The few aquatic or wetland habitats would be streams or rivers that originated in distant mountains, or spring-fed habitats supported by groundwater flows through the permeable substrates. The great distances between lake and wetland habitats would tend to result in high levels of endemism (e.g., the many endemic pupfishes such as the Desert Pupfish [Cyprinodon macularius]), except for those species with traits that make them capable of moving long distances between suitable habitat patches (e.g., waterfowl).

CONCLUSIONS The FEM perspective views aquatic and wetland habitats as integral components of a mosaic, with terrestrial habitats as the mortar that binds together, rather than separates, landscape components. This perspective reflects the underlying reality stated by John Muir (1911, 110): "when we try to pick out anything by itself, we find it hitched to everything else in the Universe." FEMs adopt the interconnected view of inland landscapes and facilitate the merging of aquatic, wetland, and terrestrial perspectives of hydrology, biogeochemistry, and ecology consistent with the next generation of simulation models that combine concepts of population, community, and ecosystem ecology. A FEM perspective also acknowledges flows of nutrients, energy, particles, and organisms among ecosystems, consistent with the functioning of meta-ecosystems. Viewing aquatic, wetland, and terrestrial ecosystems from a FEM perspective serves to facilitate their study and management given the inherent integration of natural landscapes.
For example, wetlands outside of floodplains, along with ephemeral, intermittent, and seasonally flowing streams, have been defined as vulnerable waters due to their susceptibility to degradation and destruction from anthropogenic activities such as agricultural development and urban expansion (Creed et al. 2017). From a FEM perspective, these vulnerable waters are critical as they are typically the first water feature to interact with terrestrial solute and particle fluxes (Alexander et al. 2007). If such a perspective is adopted, concepts such as the "geographic isolation" of wetlands (Tiner 2003) lose their validity and relevance (Mushet et al. 2015;Calhoun et al. 2017), because geographic connections of aquatic and wetland habitats to terrestrial habitats are recognized as integral parts of the system. Additionally, from a FEM perspective, the continuum of wetlands may be characterized as transitional lands between aquatic and terrestrial habitats rather than aggregated with aquatic ecosystems and divorced from terrestrial systems. Placement of wetlands, including vulnerable waters, back into their important transitional position more fully acknowledges the role that they play in a fully functional landscape. Given the myriad species and associated physiological, morphological, and behavioral traits that exist in any ecosystem, a perspective that acknowledges the interconnected nature of terrestrial, aquatic, and transitional ecosystems as the default position, and integrative models that can be tailored to different representations of landscape mosaics, will lead to improved understanding of freshwater ecosystems through the integration of population, community, and ecosystem ecology. ACKNOWLEDGMENTS We thank Stephen LeDuc, Daniel McLaughlin, Caroline Ridley, and two anonymous reviewers for providing their reviews of an earlier draft of this manuscript. This article is a product of the "North American Analysis and Synthesis on the Connectivity of Geographically Isolated Wetlands to Downstream Waters" working group and was made possible through funding provided by the U.S. Geological Survey's John Wesley Powell Center for Analysis and Synthesis, and the U.S. Environmental Protection Agency
Evolution mechanism of water-conducting fractures in overburden under the influence of water-rich fault in underground coal mining

Abstract Based on the 7618 working face in Yaoqiao coal mine of Datun mining area, the activation mechanism of water-rich faults and the development characteristics of water-conducting fractures in overlying strata under the influence of faults are studied by theoretical analysis, numerical simulation and field measurement in this paper. The Anderson model and the Mohr–Coulomb strength criterion are combined to establish a fault failure mechanical model, and the fault activation criterion under the influence of mining is obtained. FLAC3D numerical simulation results show that, with the advance of the working face, the fault begins to be affected by the mining effect of the working face at a distance of 20 ~ 30 m from the fault. Meanwhile, with the advance of the working face, the overburden shear failure range also expands, and the fault fracture gradually expands from top to bottom. The failure zone of the working face roof becomes connected with the fault fracture zone. The fault is then "activated" and becomes a water gushing channel, and finally the water gushing disaster occurs. Through numerical simulation and comparative analysis, the development height of the water-conducting fracture is 73.2 m in the absence of the fault and 73.7 m in its presence, indicating that the fault has little influence on the maximum development height of the water-conducting fracture. The actual development height of the water-conducting fracture zone in the 7618 working face is 73.97 m and the fracture production ratio is 13.7. The research results can provide a theoretical reference for the safe mining of similar working faces across faults.

Geological conditions and mining conditions Yaoqiao coal mine is located in Xuzhou city, Jiangsu province, China. The 7618 working face in Yaoqiao coal mine is adjacent to the 7620 working face to the north and the 7616 working face to the south. The buried depth of the working face is 660 m ~ 715 m. The strike length of the working face is 1035 m, the dip length is 180 m, and the average seam thickness is 5.6 m. In the mining process of the 7618 working face, the DF25 normal fault, with a dip angle of 70° and a drop of 6 m, is encountered, which has no influence on the safe mining of the working face. The position of the working face, the position of the fault and the stratigraphic column are shown in Fig. 1.

Activation mechanism of water-rich fault According to the research results in literature 30,31, it is concluded that the mining of the working face has a significant impact on the stability of the fault. The change of stress state in the fault determines whether the fault is activated, and the change of stress state is the result of the redistribution of in-situ stress caused by the mining disturbance of the working face. In order to better understand the stability of faults under mining disturbance, the Anderson model 32 is adopted to effectively capture the relationship between ground stress and fault stability after the fault is disturbed by the mining of the working face. The Anderson fault model is shown in Fig. 2. σ1 and σ3 are the vertical and lateral stresses of the fault under the influence of the mining effect, and α is the fault inclination, which is in the range of (0, π/2).
According to the stress state analysis at a point on the fault, the normal and shear stresses on the fault plane are

σ = (σ1 + σ3)/2 + ((σ1 − σ3)/2)·cos 2α − pi   (1)
τ = ((σ1 − σ3)/2)·sin 2α   (2)

where σ is the normal stress on the fault surface, MPa; τ is the shear stress on the fault surface, MPa; pi is the fault pore pressure, MPa. When the fault is in an equilibrium state, the normal stress σ and the shear stress τ satisfy the Mohr-Coulomb strength criterion

τ = c + kσ   (3)
k = tan φ   (4)

where c is the cohesion of the fault plane, MPa; k is the fault friction factor; φ is the internal friction angle of the fault, φ ∈ (0, π/2), rad. Formulas (3) and (4) show that the greater the friction angle is, the greater the friction factor k is, the greater the shear stress τ is, and the higher the shear strength is. Based on the results of previous studies 33,34, the analysis shows that the internal friction angle changes little after fault failure, but the cohesion decreases significantly. Therefore, the slope of the strength criterion line on the fault plane remains basically unchanged; due to the influence of mining, the cohesion c on the fault plane decreases, which causes the whole criterion line to move downward.

Under the influence of mining, the minimum distance between the Mohr stress circle at a certain point on the fault surface and the Mohr-Coulomb strength criterion line on the fault surface is r, and the fault state determination diagram is shown in Fig. 4. From the geometric relationship shown in Fig. 4,

r = OO1·sin φ + c·cos φ − R   (5)

where O1 is the centre of the Mohr stress circle, OO1 is its distance from the origin along the σ axis, and R is the circle radius. Substituting the relationships OO1 = (σ1 + σ3)/2 − pi and R = (σ1 − σ3)/2 into formula (5),

r = ((σ1 + σ3)/2 − pi)·sin φ + c·cos φ − (σ1 − σ3)/2   (6)

Under the influence of mining in the working face, the Mohr-Coulomb strength criterion line moves down as a whole. As shown in Fig. 4, there are three relationships between the strength criterion line and the stress circle, namely, three fault states: 1. When r > 0, the fault is in a stable equilibrium state. 2. When r = 0, the fault is in the limit equilibrium state. 3. When r < 0, the fault is in an activated state.

Numerical calculation model Based on the geological conditions of the 7618 working face in Yaoqiao coal mine, FLAC3D numerical simulation software is used to study the development law of the water channel of the working face. The model size is 400 m × 300 m × 160 m (X × Y × Z). In order to eliminate the boundary effect, 50 m boundary coal pillars are established before and after the working face strike direction, and 50 m boundary coal pillars are established on both sides of the inclined direction. The advancing direction of the working face is set to 300 m, and the numerical model is shown in Fig. 5. The vertical initial stress of the simulation model is calculated according to the weight of the overlying strata, and the average volume force of the overlying strata is 24 kN/m3. According to the buried depth of 700 m, the vertical initial stress is set at 16.7 MPa. Therefore, a vertical load of 16.7 MPa is applied to the upper part of the model to simulate the weight load of the overlying strata, and the horizontal initial stress is set at 20.87 MPa. There are fixed constraints around and on the bottom of the model. The calculation parameters of the numerical model are listed in Table 1.
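Returning to the activation criterion, formulas (1)-(6) as reconstructed above map directly onto a short calculation. The sketch below is a minimal implementation; the input stress values are placeholders for illustration, not measured data from the 7618 working face.

```python
import math

def fault_state(sigma1, sigma3, alpha_deg, c, phi_deg, p_i):
    """Evaluate the fault activation criterion r (all stresses in MPa).

    sigma1, sigma3: vertical and lateral stresses at the fault;
    alpha_deg: fault dip angle; c: cohesion; phi_deg: internal friction
    angle; p_i: pore pressure on the fault plane.
    """
    alpha = math.radians(alpha_deg)
    phi = math.radians(phi_deg)

    # Normal and shear stress on the fault plane, formulas (1)-(2)
    sigma = (sigma1 + sigma3) / 2 + (sigma1 - sigma3) / 2 * math.cos(2 * alpha) - p_i
    tau = (sigma1 - sigma3) / 2 * math.sin(2 * alpha)

    # Distance from the Mohr-circle centre to the strength line, minus the
    # circle radius, formula (6)
    oo1 = (sigma1 + sigma3) / 2 - p_i
    radius = (sigma1 - sigma3) / 2
    r = oo1 * math.sin(phi) + c * math.cos(phi) - radius

    if r > 0:
        state = "stable equilibrium"
    elif r == 0:
        state = "limit equilibrium"
    else:
        state = "activated"
    return sigma, tau, r, state

# Placeholder example: mining progressively raises sigma1 near the fault.
for s1 in (18.0, 30.0, 40.0):
    sigma, tau, r, state = fault_state(s1, sigma3=12.0, alpha_deg=70.0,
                                       c=1.5, phi_deg=25.0, p_i=2.0)
    print(f"sigma1={s1:5.1f} MPa -> r={r:6.2f} MPa, fault {state}")
```

With these illustrative inputs, r decreases as the mining-induced stress difference grows and turns negative at the highest load, reproducing the stable-to-activated transition described by the criterion.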
Failure, movement and evolution law of overlying strata

Evolution law of leading bearing stress. The overlying strata are subjected to continuously changing disturbance stress due to coal mining. When the rock stress exceeds its elastic-plastic bearing capacity, the rock breaks and produces water-conducting fractures. Therefore, based on the analysis of the stress change law of the overlying strata, the current state of the overlying strata can be indirectly reflected, and how the stability of the overlying strata is affected by mining can then be analyzed. The change of stress state in the stope is shown in Fig. 6. When the working face advances to 50 m, the surrounding rock stress along the coal seam makes a pressure relief arch with a small range appear in the upper part of the goaf. The vertical stress is symmetrically distributed along the central axis of the goaf, and the floor of the goaf also forms a pressure relief zone. This shows that there are caving or separation fractures in the overburden in this area, and stress concentration is generated near the coal wall and near the coal face in front of the working face, with a maximum of 34 MPa. When the working face advances to 100 m, the mining disturbance presents a more significant influence, and the relief arch gradually extends upward; the stress concentration is significant, and the stress value increases. When the working face advances to 200 m and 250 m, the overall change trend of stress is basically similar to that when the working face advances to 150 m. Through continuous advancement of the working face, the pressure relief arch continues to expand, and the concentrated stress remains near the coal wall of the opening cut and near the coal face in front of the working face. When the working face advances to 300 m, the contact between the roof and floor areas causes the stress value in the middle of the goaf to change; the stress concentration in the areas before and after the goaf is relieved, the stress concentration value decreases, and the relief arch gradually self-differentiates at this time. With the collapse, subsidence and compaction of the overlying strata, the pressure in the middle relief area slowly recovers, and the stress in the overlying strata in the goaf slowly becomes stable.

Variation rule of overlying strata displacement. In order to study the displacement deformation of the overlying strata after coal mining in the working face, the overall evolution process of the water-conducting fracture zone is indirectly reflected by analyzing the change of vertical displacement of overlying strata in the stope. The change of vertical displacement of the working face is shown in Fig. 7. As shown in Fig. 7, subsidence displacement occurs in the overlying strata above the goaf at 50 m, and the displacement cloud map forms a small "arch" shape. Since this is the initial mining stage, the subsidence of the overlying strata is not large, and the maximum subsidence of the direct roof and basic roof is 0.52 m. Floor heave occurs in the goaf floor due to the compression of bearing overlying strata on both sides, and the floor heave value is about 0.101 m. When the working face advances to 100 m, an uneven arch deformation zone appears in the middle rock layer above the coal seam, which is caused by the uneven settlement of the roof and roughly deviates towards the position of the cutting hole. The displacement of the basic roof and the direct roof is large. When the working face advances to 150 m, 200 m and 250 m, the overall movement and deformation trend of the overlying strata is basically the same as that when the working face advances to 100 m, and the "arch" of the displacement cloud map of the overlying strata in the goaf continues to increase. When the working face advances to 300 m, the subsidence value of the overlying strata in the goaf continues to increase, and the sum of the subsidence of the basic roof and the floor heave value of the coal seam is close to the mining height; the mining state is basically sufficient, and the development of water-conducting fractures is basically stable at this time. It is obvious that the overall movement and deformation range of the overlying strata continue to expand due to continuous advancement of the working face. When the working face advances a corresponding distance, the overall increase of vertical movement and deformation of the overlying strata continues to decrease and becomes increasingly stable, indicating that the development height of water-conducting fractures has become stable.

Variation rule of plastic zone in overlying strata. Selecting the distribution characteristics of plastic zones at different excavation distances in the strike direction of the model, shown in Fig.
8, the damage zone caused by excavation of the working face in the roof strata is observed. After coal seam excavation, the plastic zone of the overlying strata develops deeper into the overburden. After the coal seam has been excavated for 50 m, the maximum failure height of the plastic zone is 30.9 m; when the working face advances to 100 m, the development range of the plastic zone expands with the advancement of the working face. Shear failure and tensile failure in the direct roof occur simultaneously, and shear failure of the deep overlying strata continues. This indicates that the maximum development height of the water-conducting fracture at this time is 52.9 m. With the continuous expansion of the mining scope, the influence range of mining expands. By comparing the distribution of the model plastic zone at strike distances of 200 m, 250 m and 300 m, when the working face is excavated to 200 m, the scope of the model plastic zone continues to expand along the strike direction, but the plastic zone in the model Z direction no longer develops upward. At this time, the overlying strata on the working face have reached full mining movement, and the water-conducting fracture zone reaches its maximum height. The development height of the water-conducting fracture zone is calculated as the distance between the coal seam roof coordinate and the maximum height coordinate of the plastic zone, and the calculation results are shown in Fig. 9. According to the calculation results, the deformation and failure of the overlying strata in the goaf develop gradually from bottom to top. Firstly, a caving zone is formed in the lower rock layer, and then fractures and expansion occur in the middle rock layer. With the continuous expansion of the mining scope, the fractures in the overlying strata further develop. After the working face advances a certain distance, the height of the water-conducting fracture zone gradually becomes stable, and the final development height of the water-conducting fracture zone in the working face is 73.2 m.

Variation law of leading bearing stress of working face

Evolution law of leading bearing stress of working face. Through numerical simulation calculation, the distribution characteristics of the leading supporting stress at different distances from the fault under the influence of the fault are simulated and analyzed. The distribution diagram of the leading supporting stress at the working face is shown in Fig. 11, and the change curve of the leading supporting stress is shown in Fig. 12. With the advancement of the working face, the influence range of the leading supporting stress on the working face does not change significantly, and the peak value of the leading supporting stress is about 20 m in front of the working face, indicating that the influence range is about 20 m in front of the working face. When the working face advances to 30 m away from the fault, due to the continuous advancement of the working face, the leading supporting stress of the working face gradually enters the area affected by the fault, and the leading supporting stress of the working face is affected by the fault barrier effect, leading to concentration of the supporting stress in the "coal pillar" between the working face and the fault; its bearing capacity for the overlying strata is also reduced, thus triggering the "activation" of the fault, resulting in the aquifer pouring into the working face through the fault.
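The height-extraction procedure described earlier in this section, reading the maximum plastic-zone elevation at each advance distance, subtracting the coal-seam roof elevation, and watching for the height to stop growing, can be sketched as follows. Only the 50 m (30.9 m), 100 m (52.9 m), and final 73.2 m figures come from the text; the remaining advance/height pairs are invented stand-ins for values read off the simulation.

```python
# Development height of the water-conducting fracture zone = maximum
# plastic-zone elevation minus coal-seam roof elevation, tracked with advance.
roof_z = 0.0  # coal-seam roof elevation (model coordinates, illustrative)

# (advance distance m, max plastic-zone elevation m); the 150 m value is
# invented, the others follow the text.
plastic_zone_top = [(50, 30.9), (100, 52.9), (150, 66.0),
                    (200, 73.2), (250, 73.2), (300, 73.2)]

heights = [(adv, z - roof_z) for adv, z in plastic_zone_top]
for adv, h in heights:
    print(f"advance {adv:3d} m: fracture-zone height {h:.1f} m")

# Stabilization: the height stops increasing once full mining movement
# is reached.
for (a0, h0), (a1, h1) in zip(heights, heights[1:]):
    if h1 <= h0:
        print(f"height stabilizes at {h0:.1f} m by {a0} m of advance")
        break
```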
Variation rule of overlying strata displacement. The fault geological structure has an influence on the stability and displacement of the overlying strata on the working face. In the mining activities of the working face, the stress balance of the overlying strata is damaged. Due to the action of the overlying strata load, the overlying strata further lose stability and break. In order to achieve balance again, the overlying strata break and sink under the action of gravity load until they contact the goaf floor and reach balance again. Therefore, the whole evolution process of the water-conducting fracture zone can be indirectly reflected through the change of overlying strata displacement. The displacement change law of the overlying strata as the working face approaches the fault is shown in Fig. 13, and the displacement change curve is shown in Fig. 14. When the working face begins to be mined, the overlying strata above the coal seam begin to bend and sink. The area where the overlying strata bend and sink gradually expands from the direct roof of the working face to the basic roof of the coal seam above the working face; the deformation scale of the overlying strata becomes larger, namely, the displacement generated by the overlying strata becomes larger. When the working face advances to 20 m, the affected area of displacement and subsidence gradually spreads to the fault. As the working face continues to advance, the displacement subsidence of the overlying strata above the working face continues to increase; the displacement subsidence decreases obviously when approaching and passing through the fault, and the increasing area of displacement subsidence rises firstly and then decreases. Combined with Fig. 13, when the working face advances close to the fault or passes through the fault, the displacement and subsidence of the hanging wall and footwall of the fault are regionally discontinuous, and the fault boundary is obvious. It is inferred that the roof and overburden above the working face have caved completely, forcing stress transfer and concentration of the supporting stress in the clamped zone between the working face and the fault. Therefore, the possibility of fault "activation" is greatest when the working face is close to the fault and passing through the fault.

Variation rule of plastic zone of overlying strata. After mining starts, the overlying strata enter a state of plastic failure under the influence of the mining effect, and the existence of the fault geological structure forms a condition of unbalanced stress distribution, which has a great influence on the stability of the surrounding rock. At the same time, fault activation further affects the development height of the plastic failure zone, so it is necessary to analyze the plastic failure law of the overlying strata in the working face to reflect the overall evolution process of the water-conducting fracture zone. The evolution law of the plastic zone during the mining of the working face is shown in Fig. 15, and the development height of water-conducting fractures in the overlying strata is shown in Fig. 16. It can be seen from Fig. 15 that the direct roof above the coal seam undergoes tension failure and enters a plastic state in the initial stage of the working face. The front and bottom of the working face are subjected to shear failure and enter a plastic state. The failure development height of the overlying strata in the working face is shown in Fig.
16. When the working face advances to 170 m, the plastic zone of the working face presents a "saddle type" distribution, and the failure height of the overlying strata is 72.9 m; at the end of mining of the working face, the failure height of the overlying strata is 73.7 m. When the working face is far away from the fault, the fault is not disturbed by the mining of the working face. When the mining distance of the working face is 30 m from the fault, the fault begins to be affected by mining. With the advancement of the working face, the scale of shear failure of the fault gradually increases, extending from the upper part of the fault to the lower part of the fault. When the working face advances to the fault position and passes through the fault, a large area of plastic zone appears in the rock mass of the fault footwall; the roof plastic failure zone of the working face penetrates through the fault fracture zone, and the plastic zone of the roof near the fault extends significantly towards the fault wall; the fault is "activated" and slips, resulting in the fault becoming a water gushing channel.

Drilling parameters The borehole water leakage observation method has the advantages of a small engineering amount, low cost, high precision, and simple operation. A total of 3 boreholes are designed for exploration, among which 1 borehole is the original fracture borehole before mining (comparison borehole) and 2 boreholes are the fracture development boreholes after mining. The construction drilling hole (drilling field) is arranged 5 ~ 8 m in front of the C3 wire point of material track 7620, namely, 21 m outside the stop-mining line. The 1# and 2# detection holes are arranged in the stoping section of the working face; the 3# detection hole is arranged in the coal pillar section of the working face as a comparison hole. By comparing the measurement data before and after mining, the development height of the water-conducting fracture zone in the overlying strata can be accurately determined. The drilling construction parameters are shown in Table 2. The horizontal and sectional views of the probe boreholes are shown in Fig. 17.

Analysis of detection results As can be seen from Fig. 18, under the condition that the overlying strata of the working face are not damaged, the average variation of water injection flow in the 3# pre-mining comparison hole is about 2.51 L/min. As shown in Fig. 19b, water leakage from hole 2# fluctuates between 0.35 and 2.0 L/min at hole depths ranging from 80 to 102.5 m. The comparison with the corresponding section of hole 3# shows that the strata in this section are not damaged. The water leakage in the range of 50 m ~ 78.5 m hole depth is significantly higher than that in the previous section, and the leakage reaches 0.15 ~ 5.65 L/min, indicating that this section contains the top of the water-conducting fracture zone. The rapid increase of water leakage in this section indicates that the rock formation damage is more serious from this section on, entering the range of the water-conducting fracture zone. Therefore, the top boundary of the water-conducting fracture zone in the overlying strata in the working face determined by hole 2# is at 80 m hole depth, corresponding to a vertical height of 73.97 m above the seam roof.
After the end of mining, the overlying strata are affected by mining and a large number of new fractures occur; the leakage of borehole 1# reaches 3.8 ~ 6.55 L/min, and the loss of borehole 2# is 0.15 ~ 5.65 L/min. The loss in hole 2# is generally less than that in hole 1#. This is due to the poor plugging effect of borehole 2# and the gradual closure of overlying strata fractures under the compaction of the upper rock and loose layer after mining in the working face, so the water leakage of borehole 2# is significantly less than that of borehole 1#. The maximum development height of the water-conducting fracture zone in borehole 1# is 70.22 m, and the fracture production ratio is 13.0. The maximum height of the water-conducting fracture zone in borehole 2# is 73.97 m, and the fracture production ratio is 13.70. Therefore, the maximum measured development height of the water-conducting fracture in the 7618 working face is 73.97 m, so the fracture production ratio is 13.70.

Drilling observation Borehole 2# is selected as the observation hole. The depth of the borehole observed in the field is 105.25 m, the first 10 m includes the borehole casing, and the actual effective observation depth is 95.25 m. The development of overburden fractures after mining is shown in Fig. 20. There is no obvious mining-induced fracture in the overlying strata within the first 20 m of hole depth, indicating that the hole segment in this range has not entered the fracture zone. From a hole depth of 35 m, obvious mining-induced fractures begin to appear in the overlying strata, but the width and number of fractures are small, and the overlying strata are not obviously affected by mining. When the hole depth continues to increase, the discernibility of fracture strike and width increases, with oblique fractures dominant and showing obvious regularity. Towards greater hole depth, the fracturing of the overlying strata decreases. When the hole depth is 80 m, namely, a vertical height of 73.97 m above the coal seam roof, the fractures completely disappear, indicating that the height of the water-conducting fracture zone is 73.97 m. The fracture production ratio is 13.70, which is consistent with the fracture zone determined by the borehole leakage observation method. There is little change of mine water inflow during the mining process in the working face, which indicates that the overburden fractures do not extend to the strongly water-rich aquifer in the loose layer, and the overlying strata form the typical "three zones" after the advancement of the working face.
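The interpretation logic used here, comparing post-mining leakage against the pre-mining comparison hole, flagging the hole depth where leakage jumps, converting hole depth to vertical height, and forming the fracture-to-mining-height ratio, can be sketched as below. The 2.51 L/min baseline, the 80 m top boundary, the 73.97 m height, and the 5.6 m average seam thickness come from the text; the example leakage profile and the jump criterion are invented.

```python
# Locate the top of the water-conducting fracture zone from a borehole
# water-injection leakage profile (depths in m along the hole, L/min).
baseline = 2.51             # mean leakage of pre-mining comparison hole 3#
threshold = 1.5 * baseline  # assumed jump criterion, not from the paper

# Invented profile for hole 2#: low leakage high in the hole, jump at ~80 m.
profile = [(102.5, 1.2), (95.0, 0.8), (88.0, 2.0), (80.0, 4.9),
           (70.0, 5.65), (60.0, 5.1), (50.0, 4.4)]

# The hole is drilled upward into the overburden, so larger hole depth lies
# higher in the strata; scan from the deepest (highest) point downward.
top_depth = None
for depth, leak in sorted(profile, reverse=True):
    if leak > threshold:
        top_depth = depth  # first depth exceeding the baseline jump
        break

# Convert hole depth to vertical height using the ratio implied by the text
# (80 m hole depth corresponds to 73.97 m vertical height).
dip_factor = 73.97 / 80.0
fracture_height = top_depth * dip_factor
mining_height = 5.6  # average seam thickness; the paper's reported ratio of
                     # 13.70 implies a slightly smaller effective mining height
print(f"fracture zone top: hole depth {top_depth} m, "
      f"vertical height {fracture_height:.2f} m above seam roof")
print(f"fracture production ratio: {fracture_height / mining_height:.1f}")
```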
Conclusions Aiming at the development characteristics of the water-conducting fracture zone in overlying strata under the influence of a water-rich fault in the working face, a fault failure mechanical model combining the Anderson model with the Mohr-Coulomb strength criterion is proposed, and numerical simulation and field measurement methods are used. In this paper, the overlying strata movement and the development and distribution characteristics of fractures in the working face are studied, and the evolution law of water-conducting fractures in the working face under the influence of water-rich faults is systematically clarified. The following conclusions are obtained:

(1) The Anderson model is employed to explain the relationship between ground stress and fault stability after the fault is affected by mining in the working face. According to the Mohr-Coulomb strength criterion analysis, the greater the internal friction angle of the fault is, the greater the friction factor is, the greater the shear stress is, and the higher the shear strength is. When r > 0, the fault is in a stable equilibrium state. When r = 0, the fault is in the limit equilibrium state. When r < 0, the fault is in an activated state.

(2) FLAC3D numerical simulation shows that the fault begins to be affected by the mining effect when the working face is 20 ~ 30 m away; with continued advance, the shear failure range of the overburden expands, the fault fracture extends from top to bottom, and the roof failure zone of the working face connects with the fault fracture zone, "activating" the fault as a water gushing channel. The simulated development height of the water-conducting fracture zone is 73.2 m in the absence of the fault and 73.7 m in its presence.

(3) The maximum height of the water-conducting fracture zone is 73.97 m, which is consistent with the research on the evolution law of overburden fractures, according to the results of in-situ drilling leakage detection and drilling observation. Water inrush can be prevented when the working face passes through the fault by leaving a water-proof coal pillar at the fault. In order to prevent the fault from conducting water, it is necessary to set up a water-proof coal pillar or carry out advance exploration and water release. Grouting can be used to strengthen the weak area and reduce the probability of water inrush.
Figure captions. Figure 1. Schematic diagram of the position of the working face. Figure 3. Mohr stress circle of fault plane. Figure 4. Schematic diagram of fault state identification. Figure 6. Distribution of leading bearing stress on the working face. Figure 8. Variation of plastic zone of excavation in strike direction. Figure 12. Curves of leading bearing stress at different stages of working face. Figure 13. Vertical displacement distribution of overlying strata on working face. Figure 14. Displacement change curves of working face at different stages. Figure 15. Distribution of stoping plastic zone on the working face. Figure 16. Development height of water-conducting fracture zone. Figure 17. Schematic diagram of layout and profile of probe holes: (a) drilling plan, (b) borehole profile. Table 1. Calculation parameters of the numerical model.
Experimental evidence and network pharmacology-based analysis reveal the molecular mechanism of Tongxinluo capsule administered in coronary heart diseases

Abstract Background: Tongxinluo (TXL) capsule, a polypharmacy derived from traditional Chinese medicine (TCM), has been widely used in coronary heart disease (CHD), while the underlying mechanism of TXL capsule is still unclear. The present study aimed at investigating the underlying mechanism of TXL acting on CHD patients and providing substantial molecular evidence by means of a network pharmacology analysis. Method: Active compounds and targeted genes of TXL were retrieved from the TCM systems pharmacology (TCMSP) and TCM integrative database (TCMID). CHD and coronary artery disease were treated as search queries in the GeneCards and Online Mendelian Inheritance in Man (OMIM) databases to obtain disease-related genes. Visualization of the disease-target network was performed with Cytoscape software. Besides, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were administered. H9c2 cells were used to validate the predicted results in a hypoxia/reoxygenation model of cardiomyocytes, and anti-inflammatory ability was examined. Results: A network of a total of 212 nodes and 1016 edges was obtained. Peptide and ubiquitin-like protein ligase binding occupied a leading position in the GO enrichment. For the KEGG analysis, fluid shear stress and atherosclerosis, as well as inflammation-related pathways, were enriched. Cellular validation revealed the anti-inflammatory effect of β-sitosterol, eriodictyol, odoricarpin, and tirucallol as active compounds of TXL. Conclusion: Our study provided substantial molecular evidence that TXL capsule possesses multitarget characteristics with a safe profile, and its main components are capable of regulating cytokine levels in CHD patients.

Introduction Coronary heart disease (CHD), one of the most common cardiovascular diseases, is caused by a reduction in blood flow to cardiomyocytes owing to the build-up of plaque in the arteries of the heart [1,2]. CHD has become a leading cause of death, and its mortality increased from 5.2 million to over 7 million between 1990 and 2010 [3]. It affects individuals at any age, while its incidence approximately triples in progressively older populations compared with other age groups, and morbidity is higher in males than in females [4]. Statins, as the cornerstone of anti-atherosclerotic regimens, have demonstrated substantial efficacy at reducing cardiovascular events. However, even with intensive statin therapy, many patients still suffer from high residual risk of cardiovascular events [5]. Thus, exploration of alternative anti-atherosclerotic medications with high efficacy as well as low side-effects is needed.

Targets of active compounds We comprehensively searched the direct targeted receptors of each active compound via the DrugBank database, a specific bioinformatics and cheminformatics resource with detailed drug data as well as targeted receptors (https://www.drugbank.ca). Full names of targeted protein receptors were obtained and converted into gene symbols on the basis of UniProt ID (https://www.uniprot.org/) for the following analysis.

Gene Ontology and Kyoto Encyclopedia of Genes and Genomes enrichment analysis Overlapped genes were retrieved for GO and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis with the criterion of P-value <0.05.
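The core of this workflow, intersecting ingredient targets with disease genes and testing each pathway's overrepresentation at P < 0.05, reduces to a set intersection plus a hypergeometric test. A minimal sketch follows; the gene names and pathway counts are placeholders, and a real analysis would use the full TCMSP/GeneCards downloads rather than toy sets.

```python
from scipy.stats import hypergeom

# Placeholder gene sets standing in for the TXL targets and CHD gene list.
txl_targets = {"IL6", "TNF", "VEGFA", "CASP3", "NOS3", "PPARG"}
chd_genes = {"IL6", "TNF", "VEGFA", "NOS3", "APOB", "LDLR", "PCSK9"}

overlap = txl_targets & chd_genes
print(f"{len(overlap)} overlapped genes: {sorted(overlap)}")

# Hypergeometric overrepresentation test for one pathway (toy numbers):
# N genes in the background, K of them annotated to the pathway, n overlapped
# genes drawn, k of the drawn genes falling in the pathway.
N, K, n = 20000, 150, len(overlap)
pathway_genes = {"IL6", "TNF", "NOS3"}
k = len(overlap & pathway_genes)
p_value = hypergeom.sf(k - 1, N, K, n)  # P(X >= k)
print(f"pathway hit {k}/{n}, enrichment P = {p_value:.2e}",
      "-> enriched" if p_value < 0.05 else "-> not enriched")
```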
Bar plots of GO and KEGG were exported, and signal pathways involved in this network analysis were visualized in the form of diagrams.

Cells H9c2 cells were purchased from Tongpai Technology Company (Shanghai, China) and cultured in Dulbecco's modified Eagle's medium (DMEM) purchased from Thermo Fisher Scientific (Guangzhou, China), with the supplement of 10% v/v FBS and 1% v/v penicillin/streptomycin, in a CO2 incubator at 37 °C and 95% relative humidity.

Cell models For the investigation of the protective effect of TXL, a hypoxia/reoxygenation (H/R) model was administered. Cells were put into an incubator with Krebs-Ringer bicarbonate buffer medium saturated with 99.99% N2 for 140 min [15]. Cells were reoxygenated by changing the DMEM back and cultured under a normal oxygen level (21%) for 1 h. The molecules were applied from 48 h before hypoxia until the end of reoxygenation.

Cell viability test The cell viability test was performed with the cell counting kit-8 (CCK-8) after the administration of the abovementioned active components of TXL. Cells with different molecules were seeded in a 96-well plate at a density of 1 × 10^4 cells per well for 24 h. Then, 10% CCK-8 was added and the OD value was read at 450 nm after 1 h. In addition, the optimal concentration of each molecule was explored over the range from 5 to 100 μM [16-19]. Each cell viability test with different molecules was repeated five times and the relative cell viability was recorded.

Investigation of anti-inflammatory effect For the anti-inflammatory effect, cells were seeded in a 96-well plate, incubated for 24 h, and treated with 0.01 μg/ml LPS 30 min after incubation with the optimal concentration of the abovementioned molecules. Then, the supernatant was collected by adding 150 μl dimethyl sulfoxide (DMSO) and stored at −80 °C for downstream analysis. Cytokine concentrations were measured by enzyme-linked immunosorbent assay (ELISA) under the corresponding protocols, and IL-6 (K4144-100, Biovision) and IL-8 (K4169-100, Biovision) ELISA kits were administered in the present study. Each test with different molecules was repeated five times and the average concentration of the corresponding results was recorded.

Identification of putative ingredient targets With the mentioned search queries of Panax Ginseng C. A. Mey., Radix Paeoniae Rubra, Ziziphi Spinosae Semen, Dalbergiae Odoriferae lignum, Santalum Album L., Olibanum, Cicadae Periostracum, Borneolum Syntheticum, Hirudo, Scorpio, and Scolopendra, and the criteria of OB ≥ 30% as well as DL ≥ 0.18, a total of 111 chemical ingredients were collected within the TXL prescription from the TCMSP and TCMID databases. Besides, the targeted genes of each retrieved chemical ingredient were explored and a total of 1205 targeted genes were obtained. The names of the targeted genes were converted into gene IDs on the basis of the UniProt database, and eventually 861 eligible targeted genes with molecular names and symbol IDs were acquired. The active compounds involved in the present study, with the amount as well as the ratio of each component [20], are shown in Table 1, and detailed information on putative ingredients with targeted genes is documented in Supplementary Table S1.

Identification of disease-related genes Since TXL is applied to lower serum lipid levels and for anti-oxidation and anti-inflammation, which are standard management in CHD [21], CHD and coronary artery disease were treated as keywords to acquire relevant genes.
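The compound-screening step described above (OB ≥ 30%, DL ≥ 0.18) is a simple threshold filter over the TCMSP/TCMID records. A minimal sketch; the OB/DL values below are approximate literature figures and invented examples, so treat them as placeholders:

```python
# Screen candidate compounds by oral bioavailability (OB, %) and
# drug-likeness (DL); thresholds follow the text, records are illustrative.
OB_MIN, DL_MIN = 30.0, 0.18

compounds = [
    {"name": "beta-sitosterol", "ob": 36.9, "dl": 0.75},
    {"name": "eriodictyol",     "ob": 71.8, "dl": 0.24},
    {"name": "example_low_ob",  "ob": 12.0, "dl": 0.40},
    {"name": "example_low_dl",  "ob": 55.0, "dl": 0.05},
]

active = [c for c in compounds if c["ob"] >= OB_MIN and c["dl"] >= DL_MIN]
for c in active:
    print(f"kept {c['name']} (OB={c['ob']}%, DL={c['dl']})")
```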
After querying the GeneCards and OMIM databases, a total of 7389 CHD-relevant genes were obtained. The intersection between the ingredient-targeted and CHD-relevant genes was then taken, eventually yielding 138 overlapping genes. The Venn diagram of the overlapping genes is displayed in Supplementary Figure S1. Network visualization A complete ingredient-target network consisting of a total of 212 nodes and 1016 edges (138 target nodes, 72 putative ingredient nodes, 1 disease node and 1 TXL node) was constructed with Cytoscape, as shown in Figure 2. Detailed information on each node included in this ingredient-target network is documented in Supplementary Table S2. The overlapping genes were processed by STRING to produce a protein-protein interaction (PPI) network with confidence > 0.4, shown in Figure 3A. The PPI network comprised a total of 138 nodes and 1939 edges with an average node degree of 28.1. The most connected genes within the PPI network are shown in Figure 3B, and the detailed PPI information is documented in Supplementary Table S3. GO and KEGG enrichment analyses The names of the overlapping genes were converted into symbol IDs via the UniProt database for GO and KEGG enrichment analyses. In the GO enrichment analysis, the functions of peptide binding and ubiquitin-like protein ligase binding occupied the leading positions among all relevant genes, with adjusted P-values of 6.35 × 10⁻⁷ and 1.00 × 10⁻⁶, respectively. Heme binding and tetrapyrrole binding were in second place in the enrichment analysis of the overlapping genes, with adjusted P-values of 3.49 × 10⁻⁸ and 6.83 × 10⁻⁸, respectively. The top 20 categories of the GO enrichment analysis are shown in Figure 4A,B. In the KEGG enrichment analysis, the AGE-RAGE signaling pathway and the fluid shear stress and atherosclerosis pathway occupied the predominant positions, with adjusted P-values of 5.60 × 10⁻¹⁹ and 3.88 × 10⁻¹⁷, respectively. Moreover, inflammation-related pathways, such as the IL-17, TNF and T-cell receptor signaling pathways, were principal pathways within the TXL-CHD overlapping gene enrichment, with adjusted P-values of 3.19 × 10⁻¹⁷ and 1.13 × 10⁻¹⁴ for the IL-17 and TNF pathways, respectively. Active ingredients protect H9c2 cells from H/R injury Six potential ingredients, β-sitosterol, ellagic acid, formononetin, eriodictyol, odoricarpin and tirucallol (detailed information shown in Table 2), were obtained and used for validation. In the cell viability tests, β-sitosterol, eriodictyol, odoricarpin and tirucallol showed a positive protective effect, while ellagic acid and formononetin were found to be cytotoxic to H9c2 cells in the H/R model (Figure 5A). The improvement rate at different concentrations was investigated to determine the optimal dosage. From the results, the optimal dosages of β-sitosterol, eriodictyol, odoricarpin and tirucallol were 40, 20, 20 and 40 μM in this model, respectively, and decreased relative cell viability was observed in each test when the concentration exceeded 50 μM (Figure 5B). Anti-inflammatory effect of TXL Given the significance of anti-inflammatory regulation in CHD management, the anti-inflammatory effect of TXL was investigated. Given the pathways enriched in anti-inflammatory regulation (Supplementary Figure S2), the concentrations of IL-6 (Figure 6A) and IL-8 (Figure 6B) were measured at the abovementioned optimal concentrations of the four compounds. β-sitosterol, eriodictyol, odoricarpin and tirucallol significantly inhibited the concentrations of both IL-6 and IL-8 (P<0.05).
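The quoted PPI statistics can be sanity-checked directly: for an undirected network, the average node degree is 2E/N, and 2 × 1939/138 ≈ 28.1 as reported above. A minimal networkx sketch follows; the edge-list file name is a hypothetical STRING export, not a file from the paper.

import networkx as nx

# "string_edges.tsv" is a hypothetical STRING export (gene_a<TAB>gene_b).
G = nx.read_edgelist("string_edges.tsv", delimiter="\t")

# Undirected network: average degree = 2E/N (2 * 1939 / 138 = 28.1 here).
avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()

# Hub genes = highest-degree nodes, as typically highlighted in PPI figures.
hubs = sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True)[:10]
print(f"average degree = {avg_degree:.1f}; top hubs: {hubs}")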
Moreover, tirucallol showed a significant anti-inflammatory effect compared with the DXM group (P<0.05). Collectively, the active compounds of TXL are capable of exerting anti-inflammatory regulation. Discussion In a previous study, resistance to statin regimens led to rapid progression of atheroma, indicating that alternatives to lipid-lowering medication are warranted [22]. As indicators of plaque progression, intima-media thickness (IMT) and maximal plaque area are favored measures for CHD assessment. In the CAPITAL trial, TXL, used as an anti-atherosclerotic addition to routine CHD therapy, significantly slowed the progression of CHD compared with the control group [10]. However, the underlying anti-atherosclerotic mechanisms of TXL remained unclear; the present research provides supporting evidence at the molecular level. Network pharmacology was designed to investigate how a single medication acts on multiple targets, so as to enhance efficacy and reduce toxicity to patients [23]. TXL capsule is a mixture of 12 plant and animal products with multiple ingredients and targets, which conforms to this perspective and was shown to be effective at the cellular level in the present study. In the enrichment analysis, several pathways pointed to the potential mechanism by which TXL capsule acts against atherosclerotic events. Peptide binding and ubiquitin-like protein ligase binding occupied the predominant positions in the GO enrichment analysis; elevated ubiquitin has been reported to correlate positively with the severity of pathologies such as trauma and burns, and especially with CHD and acute myocardial infarction (AMI) [24-26]. Also, extracellular ubiquitin was shown to be elevated in CHD patients, especially in patients suffering an acute coronary syndrome (ACS) event, and it was positively related to the Gensini score, which reflects the degree of atherosclerosis in CHD [27]. Moreover, ubiquitin was suggested to be positively related to the inflammatory markers CRP, CK-MB and cTnI, which are associated with the progression of atherosclerosis as well as AMI [28]. In sum, ubiquitin is an alternative biomarker for predicting the severity of CHD. The predominance of target genes in ubiquitin-like protein ligase binding hints that TXL capsule may have the capacity to regulate extracellular ubiquitin levels and so prevent the progression of atherosclerosis. The fluid shear stress and atherosclerosis pathway was enriched in the KEGG analysis, and it has been found to be associated with microvascular and epicardial endothelial dysfunction in CHD patients. Coronary arteries exposed to abnormal microvascular endothelial function exhibited significantly lower shear stress compared with normal coronary arteries [29]. Apart from systemic risk factors, local factors such as low shear stress might contribute to the promotion of early focal epicardial endothelial dysfunction and potential plaque progression [30,31]. A fall in shear stress might be triggered by microvascular endothelial dysfunction, which is induced by established systemic risk factors such as inflammation and oxidative stress at an early stage of disease, further provoking and exacerbating inflammatory processes in the coronary endothelium. Moreover, inflammation plays an indispensable role in the progression of atherosclerosis [32,33], and inflammation-related pathways, such as the IL-17, TNF, toll-like receptor and T-cell receptor signaling pathways, were enriched in the KEGG analysis.
Targeted anti-inflammatory regimens and reduction in CRP have been shown to reduce major adverse cardiovascular events in established CHD patients [34,35]. As discussed above, TXL was also capable of regulating ubiquitin to adjust CRP levels, and the active compounds of TXL were validated as effective in regulating inflammation-related pathways, which further supports the proposed anti-inflammatory effect of TXL capsule in CHD patients. However, several limitations of the present study should be considered. First, the retrieved active ingredients might not coincide with the exact compounds absorbed by patients. Second, only the target genes of the active ingredients could be retrieved, and identifying which genes are predominantly targeted by each compound remains difficult. Third, errors might occur in the GO and KEGG enrichment analyses because of the complex formula of TXL capsule, and the enriched pathways might be confounded. Last but not least, validation was performed at the cellular level, and verification in animal models investigating more indicators is still necessary in future research. Conclusion Our study provided substantial molecular evidence that TXL capsule acts on multiple targets with a safe profile, and that its main components are effective in regulating cytokine levels as well as alleviating hypoxic injury to protect myocardial cells in CHD patients.
2020-10-06T13:33:24.069Z
2020-09-29T00:00:00.000
{ "year": 2020, "sha1": "182ea0653489c221580d613721fe3d228b1e9128", "oa_license": "CCBY", "oa_url": "https://portlandpress.com/bioscirep/article-pdf/40/10/BSR20201349/894861/bsr-2020-1349.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1d82e2ea1df0e359ee91aa929fb30d7e553569f1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258595046
pes2o/s2orc
v3-fos-license
In vitro Growth of Cattleya sp Orchid from Leaf Explants with Growth Regulators Using leaf explants, this research aimed at identifying the type of growth response that can be produced and the optimal media mix for the growth of the Cattleya sp orchid. This was an experimental investigation employing a completely randomized design (CRD) with combinations of 2,4-D (0, 1, 2 and 3 ppm) and kinetin (0, 0.3 and 0.6 ppm), each repeated three times. According to the results of this study, the growth regulators 2,4-D and kinetin were unable to promote the development of all explants. Ideally, browning explants would be transferred to media containing polyvinylpyrrolidone (PVP) to prevent browning, although this could not be done within the 12 weeks of observation. The study also found the emergence of shoots in treatments P0 (control), P2 (2,4-D 0 ppm + kinetin 0.6 ppm), P8 (2,4-D 2 ppm + kinetin 0.6 ppm) and P11 (2,4-D 3 ppm + kinetin 0.6 ppm) in the 3rd week of observation, and the appearance of callus in P5 (2,4-D 1 ppm + kinetin 0.6 ppm) in the 4th week of observation. INTRODUCTION The Cattleya sp orchid is an orchid of exceptional beauty. The Cattleya sp orchid plant features huge, gorgeous flowers with vibrant colors and a pleasant fragrance (Nika et al., 2018); hence, this flower is known as The Queen of Orchids. The Cattleya orchid blossom is highly sought after by enthusiasts and collectors due to its popularity (Buyung, 2021). Orchids are a type of decorative plant whose natural population has diminished and which is threatened with extinction. Since orchid seeds lack an endosperm as a food store, they require nutrients that promote seed development in order to germinate. Because orchid seeds lack endosperm (food reserves), they cannot be propagated by normal seed culture; as a result, they can only germinate when grown aseptically on artificial media using in vitro seed culture. Over two to three months, planted orchid seeds will germinate and form miniature plantlets. The germination of orchid seeds is characterized by the creation of the protocorm, followed by the emergence of the plumule and radicle. Many research findings suggest that genotype, explants, medium, incubation conditions, inoculum density and subculture time influence orchid somatic embryogenesis. Propagation by tissue culture is a possible solution to this issue. One of the fundamental media utilized in tissue culture is MS (Murashige and Skoog) medium, which is combined with various growth regulatory substances. Synthetic auxins such as NAA and 2,4-D are more effective since they are not degraded by IAA oxidase or other enzymes, allowing them to persist longer and be more stable, while BAP and kinetin are often employed in tissue culture research due to their low cost and resistance to degradation. The hormones 2,4-D and BAP at 1 ppm can induce callus in shoot explants of Sipahutar pineapple (Harahap et al., 2019), and the optimal treatment for the development of dragon fruit explants was found to be the addition of 0.4 ppm NAA with 4 ppm kinetin (Mahadi et al., 2013). This study utilized orchid leaves as explants for Cattleya orchid propagation, administering combinations of the growth regulators 2,4-D and kinetin, since waiting for the orchids to develop seeds is a lengthy process. Research Design This was an experimental study employing the CRD (completely randomized design) approach, with 12 (twelve) treatments repeated three times; a scripted sketch of this factorial layout follows below. Procedure The implementation of this research began with the sterilization of the instruments and continued with the preparation of media containing the combinations of 2,4-D and kinetin.
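For readers who want to reproduce the design, the sketch below enumerates the 4 × 3 factorial of 2,4-D and kinetin levels into the twelve treatment codes P0-P11 and randomizes three replicates of each across positions, as a CRD requires. The P-numbering follows the combinations quoted in this paper (for example, P2 = 2,4-D 0 ppm + kinetin 0.6 ppm and P5 = 2,4-D 1 ppm + kinetin 0.6 ppm); the layout itself is illustrative.

import itertools
import random

# 4 x 3 factorial: 2,4-D (ppm) crossed with kinetin (ppm). Enumerating the
# product in this order reproduces the treatment codes used in the paper.
d24_levels = [0, 1, 2, 3]
kin_levels = [0, 0.3, 0.6]
treatments = [(f"P{i}", d, k)
              for i, (d, k) in enumerate(itertools.product(d24_levels,
                                                           kin_levels))]

units = [t for t in treatments for _ in range(3)]  # 3 replicates -> 36 units
random.shuffle(units)                              # complete randomization
for position, (code, d, k) in enumerate(units, start=1):
    print(f"position {position:2d}: {code} (2,4-D {d} ppm + kinetin {k} ppm)")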
Leaf explants of Cattleya sp were obtained from seedlings grown from Cattleya sp orchid seeds in vitro, so that the explants could be planted directly. The explants were kept in an incubation environment between 23 and 25 degrees Celsius and maintained by daily alcohol spraying. Observations were performed for 12 weeks. The proportion of living explants, the percentage of explants that swelled, the percentage of explants that produced callus, the percentage of explants that developed buds, and the time of shoot and callus formation were observed in this study. The obtained data are presented descriptively. In this research, it was found that not all explants grew well. In Table 2, it can be seen that the explants that grew well were in the P0 (70%) and P2 (70%) treatments, while the rest experienced browning. Browning in plants occurs because injury to the explants releases exudate secretions or phenolic compounds, so that the plants turn brown. These phenolic compounds usually appear and accumulate due to the activation of the enzyme polyphenol oxidase (Admojo & Indrianto, 2016). According to Hutami (2008), compounds in the form of proteins, amides and polyamides can be added to the media so that they react with phenols and restore the activity of the enzymes. In general, the polyamide added is polyvinylpyrrolidone (PVP). The results showed that media using a combination of 2,4-D with kinetin can produce shoots. Melisa (2018) reported that the hormones 2,4-D and kinetin had no significant effect on the growth of PLB length or the number of orchid shoots formed; Grammatophyllum scriptum orchid shoots were formed at a concentration of 4 mg/L 2,4-D + 2 mg/L kinetin. Observations at the 3rd week after planting (MST) showed shoot growth in several treatments, namely P0 (2,4-D 0 ppm + kinetin 0 ppm), P2 (2,4-D 0 ppm + kinetin 0.6 ppm), P8 (2,4-D 2 ppm + kinetin 0.6 ppm) and P11 (2,4-D 3 ppm + kinetin 0.6 ppm).
Time of Appearance of Buds and Callus Table 4 shows that not all explants were able to produce callus or buds. Some grew shoots or callus but did not survive long; Table 2 shows the percentage of explants that survived to the 12th week after planting, namely around 70% in P0 (2,4-D 0 ppm + kinetin 0 ppm) and around 70% in P2 (2,4-D 0 ppm + kinetin 0.6 ppm), while the rest experienced browning. Table 4 also shows that the explants responded differently, which is due to differences in the endogenous hormones contained in these explants (Sriskandarajah et al., 2006), despite the addition of auxin and cytokinin at the same concentrations. Table 3 shows that, across all treatments, the emergence of shoots dominated, as in treatments P0 (control), P2 (2,4-D 0 ppm + kinetin 0.6 ppm), P8 (2,4-D 2 ppm + kinetin 0.6 ppm) and P11 (2,4-D 3 ppm + kinetin 0.6 ppm), in which shoots grew at the 3rd week after planting, whereas callus growth occurred only in treatment P5 (2,4-D 1 ppm + kinetin 0.6 ppm), appearing in the 4th week after planting. After observation until the 12th week, many explants had experienced browning. Ideally, once explants show signs of browning, they are transferred to new media with the addition of a polyamide, namely polyvinylpyrrolidone (PVP). However, because the researchers were limited to 12 weeks of observation, moving the explants and adding polyvinylpyrrolidone (PVP) could not be carried out. In this study, it was found that even though 2,4-D was added at doses of 0, 1, 2 and 3 ppm combined with kinetin at doses of 0, 0.3 and 0.6 ppm, callus induction was largely absent. This contrasts with Hariyadi et al. (2023), who used combinations of the growth regulator 2,4-D at doses of 0, 1, 2 and 3 ppm with BAP at doses of 0, 0.3 and 0.6 ppm and obtained the following results: the combination of 2,4-D with BAP was able to produce a response in the form of discoloration, explant swelling and callus formation in several treatments, namely P3 (2,4-D 1 ppm + BAP 0 ppm), P5 (2,4-D 1 ppm + BAP 0.6 ppm), P7 (2,4-D 2 ppm + BAP 0.3 ppm) and P8 (2,4-D 2 ppm + BAP 0.6 ppm). The best concentration for stimulating callus growth from Cattleya orchid leaf explants was treatment P5 (2,4-D 1 ppm + BAP 0.6 ppm), with 35% callus formation characterized by a green callus color, a compact callus texture and moderate callus growth (++). CONCLUSION The growth regulators 2,4-D and kinetin did not enable all explants to grow. After 12 weeks of observation, the explants were alive and growing well in treatment P0 (control) at around 70% and P2 (2,4-D 0 ppm + kinetin 0.6 ppm) at around 70%, while the rest experienced browning. Because the study time was limited to around 12 weeks, the explants could not be transferred to media supplemented with polyvinylpyrrolidone (PVP) to prevent browning. The study also found the emergence of shoots in treatments P0 (control), P2 (2,4-D 0 ppm + kinetin 0.6 ppm), P8 (2,4-D 2 ppm + kinetin 0.6 ppm) and P11 (2,4-D 3 ppm + kinetin 0.6 ppm) in the 3rd week of observation, and the appearance of callus in P5 (2,4-D 1 ppm + kinetin 0.6 ppm) in the 4th week of observation. Table 1. Combination of the hormones 2,4-D and kinetin. Table 2.
The average percentage of live explants of Cattleya sp orchid leaf explants under the 2,4-D and kinetin combination treatments, observed for 12 weeks, with explant growth responses (swelling of explant, growing buds and growing callus). Table 3. Growth response of explants at the 3rd and 7th MST of Cattleya sp orchid leaf explants under the 2,4-D and kinetin combination treatments. Table 4. Callus and budding time on explants of Cattleya sp under the 2,4-D and kinetin combination treatments, observed for 12 weeks after planting.
2023-05-11T15:10:37.594Z
2023-03-26T00:00:00.000
{ "year": 2023, "sha1": "22ba83a6ee14f36b34ca5dcb02f9586e05f4e181", "oa_license": "CCBYNCSA", "oa_url": "https://jurnal.ulb.ac.id/index.php/nukleus/article/download/3945/3051", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d252e1d82a7adc3270d29a8436e7f1a3601d9edc", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
216344369
pes2o/s2orc
v3-fos-license
Scalar triplet leptogenesis with S3 symmetry Extension of the standard model with the simplest discrete symmetry S3 is considered to explain the neutrino masses and mixing consistent with the current observations. The particle content is extended by the addition of right-handed neutrinos and scalar triplets, which explain the observed data in the neutrino sector. We mostly focus here on baryogenesis from the decay of heavy triplets in the presence of right-handed neutrinos. The scenario also provides the viability of TeV scale particles in the framework in the context of future collider searches for lepton flavor violation. Despite the success of the standard model (SM), it fails to accommodate the explanation of certain experimental observations such as neutrino mass, dark matter and the matter-antimatter asymmetry. The current understanding is that there should be new physics (NP) beyond the SM, and that the SM is the low energy effective theory of some higher theory, which is unknown. However, the search for direct or indirect detection of new physics beyond the standard model has not been successful so far. Nevertheless, the extension of the SM is desired to justify the unexplained observations. Fortunately, there also exist a few other observations in the flavor sector of the SM where we have some kind of deviation from the SM expectations, although they cannot be construed as evidence of new physics. One can actually consider those deviations as smoking gun signals for possible new physics, which may lead to strong evidence of NP in the coming years, or else may disappear with the accumulation of more data. Here we mention the possibility of lepton universality violation in the observables R_D and R_D*, which are defined as R_D(*) = BR(B → D(*) τ ν̄_τ)/BR(B → D(*) l ν̄_l), where l = e or µ. The SM predicted values are R_D = 0.300 ± 0.011 and R_D* = 0.254 ± 0.004, and similarly the observed values from different experiments are 0.340 ± 0.027 ± 0.013 and 0.295 ± 0.011 ± 0.008, respectively. The observed data for both observables R_D(*) exceed the SM predictions, indicating the possibility that the τ lepton and the rest of the leptons (namely, e and µ) couple differently, and that lepton universality may have been broken in B decays. Although it is difficult to say anything with certainty at this point in time, and a definitive statement actually needs more careful study and, in particular, more precise experimental values with an increased data set, this has rekindled excitement in the community in the context of the search for new physics beyond the SM. Since our objective here is to study leptogenesis, we only take the clue from the above-mentioned deviations in the flavor sector. If found to be true, it will indicate that the τ lepton couples differently than the other leptons (e and µ). Looking at the neutrino sector, it is very well known that discrete symmetry has played an important role in phenomenological studies of observed neutrino oscillation data. In this context, one can find mention of S3, A4 and other discrete symmetries. S3 is the simplest discrete symmetry, where one makes an analogy with the doublet and singlet structures among the leptons, and therefore the τ behavior could be different from the other leptons (µ and e). Therefore, in the current framework, we include the simplest discrete symmetries S3 and Z2 along with the SM gauge group to explore the neutrino phenomenology and the baryon asymmetry from leptogenesis [1].
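Using only the central values and uncertainties quoted above, the size of the R_D and R_D* anomalies can be estimated by combining the statistical, systematic and SM errors in quadrature. The sketch below does this while ignoring correlations between the two observables, an assumption, since the correlation is not quoted here.

import math

def pull(measured, stat, syst, sm, sm_err):
    # Deviation in standard deviations, errors combined in quadrature.
    total_err = math.sqrt(stat**2 + syst**2 + sm_err**2)
    return (measured - sm) / total_err

print("R_D  pull: %.1f sigma" % pull(0.340, 0.027, 0.013, 0.300, 0.011))
print("R_D* pull: %.1f sigma" % pull(0.295, 0.011, 0.008, 0.254, 0.004))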
The mentioned symmetries are widely discussed in the literature to explain neutrino masses and mixing with a specific flavor structure, but very few works are devoted to the generation of lepton asymmetry from the decay of heavy triplets [2]. We extend the SM particle spectrum with three right-handed neutrinos, two Higgs doublets and two Higgs triplets to discuss neutrino mixing compatible with current observations. Furthermore, we also explore the generation of lepton asymmetry from the decay of heavy triplets in the presence of right-handed neutrinos at two different mass scales [5]. The model framework The addition of scalar triplets alone is not enough to explain neutrino mixing, and hence we include right-handed neutrinos to explore the neutrino phenomenology within a type I+II seesaw framework. The SM ⊗ S3 ⊗ Z2 invariant Lagrangian contains the type I+II Yukawa interactions, and the charged and neutral lepton mass matrices can be constructed from this Lagrangian. The rotation and redefinition of the Higgs fields, along with the diagonalization and parameterization of the mass matrices, are discussed in detail in [5]. Lepton asymmetry from the decay of a triplet with mass O(10^10) GeV We find that the diagonal structure of the triplet Yukawa couplings leads to a vanishing CP contribution from the lepton-mediated loop. Nevertheless, a CP asymmetry can still be generated from the Higgs-mediated self-energy and right-handed neutrino vertex diagrams [2]. Figure 2. The left panel shows the variation of the Yukawa coupling with the CP asymmetry, and the right panel represents the solution of the Boltzmann equations, which gives rise to the required lepton asymmetry compatible with the observed baryon asymmetry. Resonant enhancement of the CP asymmetry with TeV scale triplets We consider the standard scenario of resonant leptogenesis, where the self-energy enhancement is achieved by fixing the mass difference between the two heavy triplets [2]. Table 2. Benchmark points for the parameters satisfying the constraints from neutrino mass and the observed baryon asymmetry. Comments on lepton flavor violation Lepton flavor violating decay processes have received great attention in the last few decades [3]. In this context, µ → eγ is found to be an important process to measure, with little background expected in observation. The current experimental limit on this decay is Br(µ → eγ) < 4.2 × 10^−13 from the MEG collaboration [4]. In the framework of low-scale leptogenesis, we can have extra contributions to the rare decays lα → lβγ due to the presence of the right-handed neutrinos and Higgs bosons. Figure 4. The left and middle panels show the allowed Higgs mass according to the experimental limits on LFV and the muon anomalous magnetic moment, while the rightmost panel represents the variation of the triplet-lepton Yukawa coupling with the muon anomalous magnetic moment. Summary We discuss the neutrino masses and mixings with a non-vanishing θ13 in this framework and obtain constraints on the model parameters from current oscillation data. Leptogenesis from the decay of the lightest heavy triplet is explored in detail with the S3 symmetry at two different mass scales. The TeV scale triplet opens up future scope for collider searches. The presence of right-handed neutrinos not only explains the neutrino sector but also contributes to the lepton asymmetry, while providing valuable insights into rare lepton flavor violating decays [5].
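The Boltzmann-equation solution referred to in the Figure 2 caption can be illustrated with a toy decay-plus-washout system of the generic form used in leptogenesis estimates. The sketch below is schematic only: it omits the gauge annihilation effects relevant for real scalar triplets, and the CP asymmetry and washout parameter are illustrative numbers, not the paper's benchmark values.

from scipy.integrate import solve_ivp
from scipy.special import kn

eps, K = 1e-6, 10.0  # illustrative CP asymmetry and washout strength

def n_eq(z):
    # Equilibrium abundance, normalized so that n_eq -> 1 as z -> 0.
    return 0.5 * z**2 * kn(2, z)

def rhs(z, y):
    n, n_bl = y
    d = K * z * kn(1, z) / max(kn(2, z), 1e-300)  # decay/inverse-decay term
    w = 0.25 * K * z**3 * kn(1, z)                # inverse-decay washout
    dn = -d * (n - n_eq(z))
    dn_bl = -eps * d * (n - n_eq(z)) - w * n_bl   # sign conventions vary
    return [dn, dn_bl]

sol = solve_ivp(rhs, [0.1, 50.0], [n_eq(0.1), 0.0], rtol=1e-8, atol=1e-14)
print("final B-L asymmetry ~", sol.y[1, -1])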
2020-03-26T10:17:10.931Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "dd4d2f4640bbfcf8c31680c11af1c3e2cb6d576a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1468/1/012200", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a5efbb9bb4aacfd1da19b603e2ecc41781360f84", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
739066
pes2o/s2orc
v3-fos-license
Dehydration and crystallization of amorphous calcium carbonate in solution and in air The mechanisms by which amorphous intermediates transform into crystalline materials are poorly understood. Currently attracting enormous interest is the crystallization of amorphous calcium carbonate, a key intermediary in synthetic, biological and environmental systems. Here we attempt to unify many contrasting and apparently contradictory studies by investigating this process in detail. We show that amorphous calcium carbonate can dehydrate before crystallizing, both in solution and in air, while thermal analyses and solid-state nuclear magnetic resonance measurements reveal that its water is present in distinct environments. Loss of the final water fraction (comprising less than 15% of the total) then triggers crystallization. The high activation energy of this step suggests that it occurs by partial dissolution/recrystallization, mediated by surface water, and the majority of the particle then crystallizes by a solid-state transformation. Such mechanisms are likely to be widespread in solid-state reactions, and their characterization will facilitate greater control over these processes. The recognition that hydrated amorphous precursor phases can play key roles in the formation of vertebrate and invertebrate biominerals has generated significant interest in these phases 1-4. Providing organisms with a mouldable, space-filling starting material 5, which can be delivered on demand for the rapid yet controlled production of structurally and morphologically complex crystalline biominerals, this biogenic strategy offers a perfect candidate for biomimicry 6. Focusing on amorphous calcium carbonate (ACC), it has proven possible to profit from ACC precursor phases to generate CaCO₃ microlens arrays at the gas-liquid interface 7, single crystals with complex forms via templating approaches 8,9, thin films and fibres of calcite and vaterite in the presence of polyelectrolytes 2,10,11 and inorganic/organic composites 12,13. A range of synthetic approaches have also been used to control the crystallization of ACC. For example, the stability of ACC can be tuned using soluble inorganic and organic additives 14-16, by association with insoluble matrices 17,18 and by varying the particle size 19. However, widespread exploitation of ACC in materials synthesis has hitherto been limited by the challenges of characterizing structure 14,20,21, tuning stability 17, controlling morphologies 6 and determining crystallization mechanisms. A significant contribution to our current understanding of ACC and its crystallization has of course come from the study of biominerals. Biogenic ACC can be classified as either stable or transient, where the stable form is hydrated (with approximate composition CaCO₃·H₂O) and the transient form anhydrous 1. The best characterized transient system is that of the sea urchin embryo, in which tri-radiate, single-crystal calcite spicules form from ACC within a membrane-bound vacuole. The ACC is tightly bound by the membrane such that the system remains free of bulk water 4,22-24, and under these conditions the initial hydrated ACC is observed to dehydrate to a more stable anhydrous ACC phase, before it subsequently crystallizes via a 'solid-state' mechanism 24,25.
While a comparable stepwise transformation has been observed when ACC is heated in air to drive off the water 15,26-29, the mechanisms of these structural transformations are not yet well understood. By comparison, ACC typically crystallizes very rapidly in aqueous solution, such that characterization of this process has proven extremely challenging 30-35. This work therefore employs a bio-inspired strategy to address an intensively debated topic, the crystallization mechanism of ACC, by characterizing the transformation of synthetic ACC in aqueous environments. Encapsulation of ACC particles within porous silica shells provides an effective inorganic mimic of the spicule environment of the sea urchin, thereby sufficiently retarding the crystallization to allow characterization of the transformation. The mechanisms and structural changes that accompany ACC dehydration in air were also investigated in detail, with ACC samples of well-defined water content generated by annealing at different temperatures and characterized using thermogravimetric analysis (TGA) and solid-state NMR. Comparison of both systems demonstrates that while identical dehydration processes can occur both in air and in solution, ACC crystallization at room temperature must be initiated by a local dissolution/reprecipitation, as mediated by water present on the particle surface or in the environment. The majority of the particle can then crystallize via a solid-state transformation. Results Crystallization of silica-coated ACC particles. Transmission electron microscopy (TEM) confirmed that the silica-coated ACC particles 36 (prepared by simple mixing of solutions of CaCl₂ and Na₂CO₃/Na₂SiO₃) had diameters of ~100 nm (Fig. 1a), while subsequent leaching of the encapsulated ACC through incubation in 1 M HCl for 24 h demonstrated that a continuous ~5-10 nm thick silica shell forms around each ACC particle (Fig. 1b). Infrared spectra (Fig. 1c) showed bands characteristic of both ACC and amorphous silica, with peaks corresponding to the carbonate group appearing at 1,425 cm⁻¹ (ν₃), 1,075 cm⁻¹ (ν₁) and 863 cm⁻¹ (ν₂), and bands due to water at 1,641 cm⁻¹ and silica at 1,038 cm⁻¹. Bands (ν₄) at 747 and 714 cm⁻¹, which are characteristic of crystalline vaterite and calcite, respectively, were notably absent. TGA of freshly precipitated ACC-SiO₂ particles (Fig. 1d) showed an 18-20 wt% loss below 200°C due to dehydration of the ACC and SiO₂, and an additional 7-10 wt% loss between 200-550°C, attributed to release of CO₂ on reaction of the SiO₂ shell with the CaCO₃. Powder X-ray diffraction (PXRD) was also performed during in situ heating of the ACC-SiO₂ particles, where this demonstrated transformation from ACC (Fig. 1e) to Ca₂SiO₄ (belite) and calcite above 400°C (Fig. 1f). A gradual weight loss of 18-20 wt% above 550°C was observed, rather than the sharp transition typically observed on conversion of CaCO₃ to CaO. TGA of the silica shells alone (that is, after leaching out the ACC) revealed that they comprise ~20 wt% water (Supplementary Fig. 1). With a 100-nm particle diameter and a 5-nm-thick silica shell, these water contents are consistent with the compositions of the ACC core and hydrated silica shell described above. The stability and crystallization in solution of ACC-SiO₂ were investigated by resuspending 15 mg of particles in 100 ml of Milli-Q water and characterizing their structures and compositions with time (Fig. 2).
Scanning electron microscopy (SEM) demonstrated the structural stability of the particles and showed that they aggregate during incubation (Fig. 2a,b), while infrared spectroscopy and TGA confirmed that this is accompanied by negligible change in the silica content of the particles (Fig. 2c,d and Supplementary Table 1). Addressing changes that occur in the ACC during incubation in water, time-dependent infrared measurements (Fig. 2c) revealed a structural rearrangement, as was apparent from a narrowing of the ν₃ band, a reduction in intensity of the ν₁ absorption band and a shift in the ν₂ band. Importantly, this was accompanied by dehydration of the ACC, which occurs before any evidence of crystalline phases is detected. The onset of crystallization occurs after ~8 h, as shown by the appearance of a characteristic calcite peak at 714 cm⁻¹ (ν₄). TGA of ACC-SiO₂ samples incubated in solution for different times clearly showed a decrease in the water-associated weight loss below 200°C from 20 wt% to a constant 6 wt% (due to the SiO₂·H₂O phase, Fig. 2d). Also observed were the gradual appearance of a sharp CaCO₃-to-CaO transition above 550°C (as is observed for uncoated ACC) and a reduction in the weight loss in the intermediate range (200-500°C) for longer incubation times. Both of these phenomena demonstrate reduced calcium silicate formation in ACC samples with greater degrees of dehydration. This can be explained by the fact that coprecipitation of ACC in the presence of silicate also results in the occlusion of silicate ions within the ACC, which increases its thermal stability 38. During dehydration/restructuring of the ACC, silicate ions are likely to be expelled, resulting in reduced calcium silicate formation and possibly also the aggregation behaviour observed by SEM. Confirmation that the presence of silicate ions within the ACC does not change the pathway by which it crystallizes was obtained by monitoring the crystallization in water of ACC particles that were precipitated in the absence of silicate and then coated with a silica shell. While this method is less satisfactory, as it never succeeds in completely coating every ACC particle present, the data obtained clearly demonstrate that the pure ACC particles also dehydrate before recrystallization (Supplementary Fig. 2). We also explored the combined effects of encapsulation and stabilizing soluble additives with this system. It is well recognized that soluble macromolecules and ions, such as Mg²⁺, silica, sulphate and phosphate at moderate concentrations (≪ [Ca²⁺]), contribute to the extended lifetime of biogenic ACC 22,39,40. However, this alone cannot provide the stability observed for biogenic ACC 15, strongly suggesting that the environment of the ACC within an organism also makes a significant contribution to its stability 39,41. ACC was therefore precipitated in the presence of the crystallization inhibitor aspartic acid, and its crystallization was investigated. ACC-Asp-SiO₂ particles crystallized by an identical pathway (dehydration followed by crystallization), where a small band at ~700 cm⁻¹ corresponding to crystalline CaCO₃ was observed in infrared spectra after 18 h. This compares with the appearance of an equivalent peak at 8 h for ACC-SiO₂ and under 1 h for uncoated ACC-Asp 15 (Supplementary Fig. 3). The soluble additive and confinement therefore appear to act synergistically in retarding ACC crystallization. Crystallization of lipid bilayer-coated ACC particles.
Having demonstrated that encapsulation of ACC within a porous silica shell reduces the rate of ACC crystallization in solution, we extended our approach to explore whether ACC encapsulation within a lipid membrane, as in biological systems 4,23, may act in an analogous way. ACC particles were coated with phosphatidylcholine-dihexadecyl phosphate (DHP) membranes using standard methods 42, and their stability in Milli-Q water was investigated by isolating and characterizing the coated particles at different times. Laser scanning confocal microscopy, made possible by addition of a fluorescent phosphocholine (PC) molecule to the lipid mixture, demonstrated that the ACC particles were coated by lipid membranes and that they agglomerated with time (Fig. 3a,b). Notably, structural changes in the ACC comparable to those seen during the transformation of ACC-SiO₂ were observed on incubation in solution, as shown by a reduction in intensity of the ν₁ absorption band and a shift in the ν₂ band. There was also a reduction in the intensity of the bands associated with the PC/DHP membrane 43 (2,923 cm⁻¹ (ν(CH)) and 1,234 cm⁻¹ (ν(PO₂⁻))). No bands at 714 cm⁻¹ (calcite) or 747 cm⁻¹ (vaterite) were detected even after 4 days of incubation (Fig. 3c). TGA of freshly prepared samples showed a weight loss due to lipid decomposition of ~30 wt% between 230 and 530°C (Fig. 3d). This compares with the ~20 wt% loss estimated for 100 nm ACC spheres coated with single bilayers 44, which suggests the presence of multilamellar coatings or additional vesicles. Importantly, the TGA analysis also demonstrated that the coated ACC particles underwent a very slow dehydration during incubation in solution, shown by the loss of water below 230°C. Indeed, the water content decreased from an initial ~18-20 wt% to 10-13 wt% after 2 days, although the particles were still ACC, as judged by infrared spectroscopy. The mass loss associated with decomposition of organic material at 230-530°C also decreased from ~30 to 16 wt% over 2-4 days of incubation. The data therefore indicate that the lipid coating of the ACC particles is lost or undergoes reorganization with time in solution, precluding investigation over the extended periods leading to crystallization. However, the data clearly show that a lipid membrane can effectively stabilize ACC and that the ACC dehydrates before crystallization. Characterization of ACC with different hydration levels. Detailed studies of the transformation from hydrated ACC to anhydrous ACC to crystalline calcite were then performed by annealing ACC samples at specific temperatures. TGA showed that ACC precipitated from the combination of 1 M CaCl₂ with 1 M (NH₄)₂CO₃ contained 20 ± 1 wt% water, with 15 ± 1 wt% water structurally associated with the ACC, which is consistent with the commonly reported molecular composition of ~CaCO₃·H₂O. ACC samples were then heated to, and isothermally annealed at, specific temperatures (between 25 and 220°C in 5°C steps) under a N₂ stream until the weight of each sample had stabilized. TGA/differential scanning calorimetry (DSC) was subsequently used to determine the amount of water lost at each temperature, the water fraction remaining and the crystallization onset temperature. Figure 4a shows representative TGA spectra that clearly demonstrate that ACC can be systematically dehydrated by application of defined heating cycles (a full range of curves is given in Supplementary Fig. 4).
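The assignment of the ~15 wt% structural water to a composition of roughly CaCO₃·H₂O can be checked with a one-line conversion from the TGA weight fraction. A minimal sketch follows; it uses only standard molar masses and the weight fractions quoted above.

M_CACO3, M_H2O = 100.09, 18.02  # g/mol

def waters_per_formula(water_wt_frac):
    # x in CaCO3.xH2O from the water weight fraction of the solid.
    return (water_wt_frac / (1.0 - water_wt_frac)) * (M_CACO3 / M_H2O)

for w in (0.15, 0.20):
    print(f"{w:.0%} water -> x = {waters_per_formula(w):.2f}")
# 15% water -> x = 0.98 (i.e. ~CaCO3.H2O); 20% water -> x = 1.39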
SEM showed that the 50-nm ACC particles aggregated during heating (Fig. 4b,c), while infrared spectroscopy (Fig. 4d) confirmed that the ACC remained amorphous after each isothermal annealing. Dehydration was accompanied by a structural rearrangement in the ACC, which on careful scrutiny shows up as a narrowing of the ν₃ band, a reduction in intensity of the ν₁ absorption band and a slight shift of the ν₂ band to higher frequencies. Notably, these spectral changes are identical to those observed for ACC samples undergoing crystallization in solution (Fig. 2c). DSC showed that crystallization only occurred at temperatures above 290°C, regardless of whether samples were continuously heated or whether they had been annealed at different temperatures. Crystallization activation energies of ~100 kJ mol⁻¹ were derived in all cases using standard methods (Supplementary Figs 4 and 5) 45 and can be compared with reported values of 73 kJ mol⁻¹ (ref. 31) or 151-304 kJ mol⁻¹ (ref. 28), depending on preparation conditions. The activation energies associated with liberation of the different water fractions were derived as averages of at least six isothermal measurements using equation (1) (ref. 46), dα/dt = A f(α) exp[−Ea/(RT)] (1), where α, defined in equation (2) as α = (W0 − Wt)/(W0 − Wmax) (2), represents the degree of dehydration, A is a pre-exponential factor, f(α) describes the reaction model, and W0, Wt and Wmax are the initial, instantaneous and final sample weights, respectively. Plots of these activation energies are given in Fig. 5 and show a general increase in the activation energy with increasing dehydration. Further, they indicate the existence of three apparent dehydration regimes. The first shows an increase in Ea up to α ≈ 0.2-0.3, which corresponds to the loss of the surface water, while the second corresponds to a plateau regime from 40 to ~85°C (0.3 ≤ α ≤ 0.6). The Ea then increases towards a regime from 140-260°C (0.85 ≤ α ≤ 1), which is characterized by high activation energies of ~245 kJ mol⁻¹ (Fig. 5a). Estimates of the weight loss and activation energies (Ea) of each of the dehydration regimes are summarized in Table 1. The dehydration of the silica-coated ACC particles in air was also similarly assessed to determine the influence of the silica shell on ACC dehydration. The activation energies again increased as dehydration progressed, although no well-defined plateau regions were observed. The derived activation energies were somewhat higher than for the uncoated ACC in air, demonstrating that the silica coating can retard ACC crystallization by providing a barrier to water loss (Fig. 5b). Mechanism of dehydration of ACC. Plots of the gradual dehydration of ACC in air over the range 25-220°C were derived using α values obtained at the end of each annealing period (Fig. 6). The rate of dehydration decreases at higher temperatures, demonstrating that it becomes increasingly difficult to remove water as the limit of anhydrous ACC is reached, in keeping with the activation energy measurements (Fig. 5). The dehydration curve also provides further insight into the mechanism of dehydration of ACC in air through fits to common solid-state reaction models, f(α) (ref. 47), which are presented in Table 1. While the validity of such analysis has been widely debated because of the mathematical interdependence between the activation energy, the pre-exponential factor and the chosen model 48,49, this method provides some insight into the dehydration mechanisms that may operate. The full dehydration curve (Fig. 6a)
is best described by a geometric contraction model, in which the reaction rapidly initiates on the particle surface and then proceeds towards its centre. The intermediate temperature range (40-140°C), which represents 65% of the total water fraction, can also be described by the same model (Fig. 6b). In both cases, a contracting sphere provided a slightly better fit than a contracting cylinder, with R² values of 0.92 and 0.89, respectively, as compared with 0.90 and 0.85. The final dehydration at 140-220°C, which represents less than 15 wt% of the initial water content, is in contrast best described by a second-order nucleation model (Fig. 6c). Removal of the last water is therefore not diffusion limited but is determined by the barriers to water release. The dehydration regime from 0 ≤ α ≤ 0.3 (below ~40°C) is best described by an isothermal process following a second-order rate equation, as is consistent with loss of surface water and common adsorption isotherms (Fig. 6d). (Table 1 footnote: α is the degree of dehydration; Ea is the average activation energy associated with water loss for a given temperature range; CaCO₃·xH₂O gives the number of moles of water associated with one CaCO₃ formula unit; the best-fit solid-state reaction models (f(α) = kt) are given together with the associated coefficient of determination R² and the rate constant k.) The dehydration of the ACC-SiO₂ particles in solution as a function of time (Fig. 6e) showed that the overall behaviour from 0.3 ≤ α ≤ 1 obeys an identical three-dimensional model (contracting sphere, R² = 0.94) as for the dehydration of uncoated ACC in air. A schematic of the dehydration mechanism according to the overall contraction model is given in Fig. 7. These analyses therefore strongly support the existence of the distinct dehydration regimes identified using the derived activation energies. NMR analysis. Further insight into the nature of the water environments in hydrated ACC was gained from ¹H solid-state NMR (SSNMR) measurements of ACC samples that had been isothermally annealed to different levels of dehydration. Analysis of a fully hydrated sample demonstrated the presence of five different proton environments, namely a rigid structural phase associated with Ca²⁺ (two types of OH⁻, at 0.9 and 3.4 p.p.m.), two partially mobile phases due to H₂O (4.9 and 5.7 p.p.m.) and a signal due to CO₃²⁻(H⁺) framework components (7 p.p.m.; Fig. 8) 50. As dehydration progressed, there was little change in the OH⁻ signal, while the ¹H signal from H₂O and HCO₃⁻ decreased. Heating of the samples resulted in coalescence of the ¹H signals, giving a broader signal centred at ~5.2-5.5 p.p.m., that is, a weighted average of the 4.9 and 5.7 p.p.m. signals. This is due to exchange of protons between the two environments, showing that they are in physical contact. The ¹H signal from HCO₃⁻ shifts downfield (~6.7 p.p.m.) when the dehydration temperature is increased, suggesting that the ¹H in these sites also exchange with water ¹H, so that the chemical shift becomes a weighted average of that for the HCO₃⁻ site (~7 p.p.m.) and the water sites (4.9 and 5.7 p.p.m.). ACC-SiO₂ particles with different water contents were also characterized, where these were generated on incubation in solution for different times. Again, as-prepared samples showed the presence of different proton environments within the ACC, along with signals originating in the hydrated SiO₂ shell 51.
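As an illustration of the model fitting used above, the sketch below linearizes the contracting-sphere model g(α) = 1 − (1 − α)^(1/3) = kt against a dehydration curve and reports the rate constant and R². The (t, α) values are invented placeholders, not the measured curves.

import numpy as np

# Illustrative (t, alpha) dehydration data, not the measured curves.
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0])           # h
alpha = np.array([0.0, 0.18, 0.32, 0.52, 0.66, 0.80, 0.88])

g = 1.0 - (1.0 - alpha) ** (1.0 / 3.0)  # contracting-sphere transform
k = (g @ t) / (t @ t)                   # least-squares slope through origin
residuals = g - k * t
r2 = 1.0 - (residuals @ residuals) / ((g - g.mean()) @ (g - g.mean()))
print(f"k = {k:.4f} per h, R^2 = {r2:.3f}")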
Discussion By employing a bio-inspired strategy, in which encapsulation of ACC particles within silica shells retards crystallization, we here show that, in common with biomineralization processes and the transformation of ACC in air, synthetic ACC also dehydrates at room temperature in an aqueous environment. This is driven by the generation of a more stable, low water content ACC phase 26. Characterization of this dehydration process revealed a strong dependence of the activation energy required to remove water on the degree of dehydration. The activation energies required to remove the first water fractions (up to 0.3 H₂O) are close to the hydration energy of calcite faces in a humid atmosphere 52, as expected if the first stage of dehydration removes the more accessible water of hydration. Note that this is far more than monolayer coverage of water on the outer surface of the ACC particles, and undoubtedly includes water condensed around the contact points of adjacent particles as well as some more deeply located water. The higher activation energies measured for the remaining fractions may reflect the increasingly hindered escape of water molecules in low humidity environments 53. Although it has been noted that a mechanistic interpretation of the magnitudes of activation energies for solid-state reactions is well-nigh impossible 54, the results are in good agreement with modelling studies of ACC dehydration. These predict an increase in hydration energy with increasing degrees of dehydration owing to the formation of a stronger hydrogen-bond network surrounding neighbouring Ca²⁺ and CO₃²⁻ ions 55. This may also be associated with the observed structural reorganization of the ACC during heating. (Figure 7 | Schematic of stages of dehydration: on going from a to b, surface-bound water is lost; during b to c, water is lost from the interior of the ACC and the ACC particle shrinks; on going from c to d, the most deeply located water is expelled; and on going from d to e, crystallization to calcite occurs.) A model of ACC dehydration has also been proposed on the basis of combined computer simulations and structural studies of synthetic ACC (CaCO₃·H₂O) 56, which suggested that the water molecules in hydrated ACC are located, along with carbonate ions, within a network of nanoporous channels in a Ca²⁺-rich framework. These channels could therefore provide a conduit for loss of water during dehydration, where this process would also be accompanied by a structural rearrangement in which CO₃²⁻ ions relocate from the channels into the calcium framework. Again, channels may close at higher levels of dehydration. The results obtained in this work are consistent with either model. Our study also provides insight into the crystallization process itself. Considering first dry ACC, crystallization is only observed at temperatures of ~300°C, where it is triggered by, or coincides with, the loss of the final water fraction. With no water present, the transformation of anhydrous ACC to calcite must proceed by a solid-state transformation. Indeed, our experiments show that the final dehydration step is associated with a very high activation energy of ~245 kJ mol⁻¹. Activation energies of dehydration of crystalline solids vary widely but are typically of the order of 100 kJ mol⁻¹ (ref. 54). That ACC crystallizes very rapidly in solution at room temperature indicates that this must occur by an alternative mechanism with a lower energy barrier.
Indeed, when isolated, ACC only shows extended stability when washed with solvents such as ethanol, which can substitute for much of the surface water 57. Even then, the rate of crystallization is dependent on the ambient humidity. Further, ACC that has no surface-bound water (as prepared by freeze drying) is extremely stable, only crystallizing under normal levels of humidity after 6 weeks 58. These data therefore strongly suggest that while ACC can certainly dehydrate at room temperature, the free energy barrier to nucleation is such that formation of the first crystalline phase can only occur via a partial dissolution/reprecipitation. We emphasize that we are not proposing that the ACC fully dissolves and then reprecipitates. Instead, this would appear most probably to occur within a domain on the surface of an ACC particle, or could happen within an ACC particle containing pockets of entrapped water. Crystallization of the entire ACC particle can then occur by a solid-state transformation, which has also been termed secondary nucleation 59, where the presence of the crystal nucleus induces structural changes in the adjacent ACC 24; the low water content of the ACC precludes local dissolution/crystallization. Such a transformation is supported by the structural changes that accompany dehydration of the ACC. The transformation mechanism of ACC at room temperature is therefore defined by a balance between the rates of dehydration and dissolution/reprecipitation. ACC with surface water would be expected to transform via the solid-state mechanism mentioned above, while a full dissolution/reprecipitation mechanism would be anticipated for ACC in bulk solution. These mechanisms are also consistent with data presented in the literature. In situ small-angle X-ray scattering/wide-angle X-ray scattering (SAXS/WAXS) studies of CaCO₃ precipitation in concentrated (1 M) solutions have suggested that initial dehydration or reorganization of ACC, followed by direct transformation to vaterite, occurs at early reaction times, before changing to a dissolution-reprecipitation mechanism 30,31. Evidence for a direct transformation also comes from cryo-TEM studies of ACC transformation into vaterite, which revealed the development of nuclei within the ACC particles 32. Further support for nucleation of the new crystal phase within ACC comes from observations that ACC typically aggregates before direct transformation into a crystal 60, that ACC particles crystallize more slowly in small volumes with few particles present 61,62, and that small ACC particles show greater stability 19. Once initial nuclei of vaterite or calcite are established, subsequent growth occurs via dissolution of the surrounding ACC, as shown by depletion zones around crystal nuclei 17,63, through measurements of the changes in solution composition on crystallization 33,34 and by simultaneous SAXS/WAXS studies of CaCO₃ precipitation 30,31,35. Looking beyond CaCO₃, our results are also relevant to many other natural or synthetic transformations, such as amorphous titania to anatase or rutile, or ferrihydrite to goethite or haematite. These are hydrated, metastable, amorphous or nanocrystalline phases transforming after an initial dehydration, either thermally or in aqueous solution 64. Similarly, a dissolution-recrystallization mechanism or a solid-state transformation has been proposed for the transformation of ferrihydrite to haematite or goethite 65.
However, it is generally agreed that haematite formation involves an 'internal' dehydration (about a 25% weight loss) followed by crystallization, possibly by a topotactic route. Further examples are the dehydration of crystal hydrates, which often proceeds via an amorphous phase, as dehydration often destroys a crystal lattice 54. Just as with ACC, the precise mechanisms of these transformations are still being vigorously debated. In conclusion, our data provide insight into the mechanisms of transformation of ACC to crystalline polymorphs in biological, environmental and synthetic systems. Thermal analysis and SSNMR demonstrated that ACC undergoes parallel dehydration and structural changes both in solution and in air, where these processes enable a subsequent 'solid-state' transformation. The water in ACC exists in different environments, and it is the loss of the final component that triggers crystallization. This step is associated with a high free energy barrier, such that at room temperature the first crystal nucleus can only form via a dissolution/reprecipitation mechanism mediated by water present on particle surfaces or in solution. The majority of the structural water present within hydrated ACC is therefore of little importance to its stability, but plays a key role in the initial precipitation of ACC, lowering the energy barrier towards the formation of this hydrated phase compared with the anhydrous crystalline polymorphs. Through application of a bio-inspired strategy, we also show that confinement stabilizes ACC, most probably by creating a barrier to water diffusion, by retarding dissolution/reprecipitation-based nucleation and by limiting ACC aggregation. This effect is enhanced in the presence of crystallization inhibitors, suggesting that nature employs both biomacromolecules and confinement to tailor the stability of ACC in organisms. CaCO₃ is not unique in this multi-step crystallization pathway, and the mechanisms observed are likely to provide new insight into the formation of many common natural and synthetic materials. Methods Materials and general preparative methods. Analytical grade (NH₄)₂CO₃, CaCl₂·2H₂O and L-aspartic acid sodium salt monohydrate were purchased from Sigma-Aldrich and used as received. Na₂SiO₃ solution (1.35 g cm⁻³) was from Merck Chemicals, and aqueous stock solutions were prepared using Milli-Q water (18.2 MΩ cm). Stock solutions of L-α PC (1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine, >99%, Sigma) were prepared in HPLC-grade chloroform. Glassware used to prepare solutions was soaked overnight in 10% w/v NaOH, followed by rinsing with dilute HCl and finally washing with Milli-Q water. Glass slides and crystallizing dishes were placed overnight in piranha solution (70:30 wt% H₂SO₄:H₂O₂) and then washed copiously with Milli-Q water before drying with acetone. Preparation of ACC. Except where otherwise noted, ACC was produced by combining equal volumes (0.5-1.5 ml) of 1 M (NH₄)₂CO₃ (pH 9.15) with 1 M CaCl₂ (pH ~6.8) at 4°C, and the ACC precipitate was immediately filtered through a 0.45-μm Isopore GTTP membrane filter (Millipore) before washing with ethanol and drying over silica gel for 1 h. Synthesis of ACC-silica particles and hollow silica shells. ACC particles encapsulated in silica shells were synthesized following Kellermeier et al.
In brief, 125 ml of 10 mM CaCl2·2H2O was mixed with 125 ml of 10 mM Na2CO3/6 mM Na2SiO3 solution, and the precipitates generated were incubated in the reaction solution for 10 min to allow formation of a silica shell. The solutions were then filtered using a 0.45-μm Isopore GTTP membrane filter (Millipore) and washed with ethanol before being left to dry. The dried ACC-silica particles (~15 mg) were subsequently dispersed in 100 ml of Milli-Q water, and aliquots were removed at different times to investigate the crystallization mechanisms. Confirmation of encapsulation of ACC was obtained by leaching the ACC from the silica shell, immersing ~500 mg of ACC-SiO2 particles in 1 M HCl (50 ml) for 24 h.

Dehydration of aspartic acid-stabilized ACC-silica particles. Silica-coated ACC particles stabilized with Asp were synthesized by combining equal volumes of 10 mM Na2CO3/6 mM Na2SiO3 and 10 mM CaCl2·2H2O/5 mM aspartic acid. The onset time of crystallization of these particles was then compared with that of pure ACC-silica particles, prepared as above.

Synthesis of pure ACC particles coated with silica shells. ACC was also synthesized in the absence of silica and was then coated with a silica shell. This method enables characterization of the transformation of ACC in the absence of occluded silica, but suffers from the limitation that a fraction of the original ACC particles are not completely coated and therefore crystallize rapidly. ACC was prepared by direct combination of 0.5 ml of 20 mM CaCl2·2H2O with 0.5 ml of 20 mM Na2CO3. Post-deposition of silica was then achieved by the delayed addition (4 s after preparing the ACC) of 1 ml of 12 mM Na2SiO3. The transformation of these silica-coated ACC particles in water was then studied using infrared spectroscopy and TGA, as described for ACC containing silica.

Synthesis of ACC particles coated with lipid bilayers. Precipitated and dried ACC particles were coated with bilayers of L-α-PC and DHP according to the method of Bugni [42]. ACC (5-25 mg) was dispersed in 1 ml of ethanol and briefly sonicated (1-2 s) before being deposited on a glass slide and left to dry at 40 °C. Approximately 0.2 ml of a lipid stock solution (100 mg PC and 10 mg DHP per ml chloroform) was then applied dropwise to the ACC film, before rapidly evaporating the solvent under nitrogen. Subsequently, the resulting ACC-bilayer aggregates were placed in 100 ml of Milli-Q water and gently agitated to displace them from the glass support. The transformation mechanism of the ACC-bilayer particles was investigated by removing and analysing aliquots with time. The lipid bilayer coating on the ACC was confirmed using confocal fluorescence microscopy, where particles were coated using a lipid stock solution containing PC labelled with a fluorescent group (1 wt% NBD-labelled PC, 1-oleoyl-2-[12-[(7-nitro-2-1,3-benzoxadiazol-4-yl)amino]dodecanoyl]-sn-glycero-3-phosphocholine; Avanti Polar Lipids).

Preparation of ACC with different hydration levels. ACC containing different amounts of structural water was obtained by the simple heating and subsequent isothermal storage of ACC particles prepared in the standard manner. Samples were heated in a nitrogen atmosphere using a TA Instruments SDT Q600 at a rate of 15 °C min⁻¹ and were then maintained at the desired temperature until the weight stabilized, as judged by <1 wt% change over 100 min. Samples were then stored over silica gel before analysis.
The isothermal annealing was carried out at 5 °C intervals in the temperature range 25-200 °C. Samples for NMR analysis were maintained at 40 °C for 1 h before the measurements to avoid transformation during the analysis [58].

Analysis of dehydration progress. The mechanism of dehydration of uncoated ACC in air under thermal treatment, and of ACC particles encapsulated in silica shells, was investigated by replotting the weight loss curves as dehydration curves showing the fractional loss of total water (α) (below 400 °C) as a function of temperature, using the water fraction present at the end of each isothermal period. These were then fitted to the results expected from common solid-state reaction models. For comparison, progressive dehydration of the ACC-SiO2 particles in solution at 25 °C was considered as a function of incubation time in solution, where α values were determined by TGA analysis (weight loss below 200 °C). Activation energies associated with the dehydration of ACC at different degrees of hydration, CaCO3·xH2O, in air were obtained by isoconversion methods based on overlapping α values recorded during isothermal storage in flanking isotherms. A plot of ln(dα/dt) at a fixed α versus 1/T, where the value of (dα/dt) is determined for each isothermal dehydration event at temperature T, returns a straight line of gradient -Ea/R (a minimal numerical sketch of this regression is given after the Methods).

Characterization. The CaCO3 precipitates were characterized by infrared spectroscopy, PXRD, TGA, DSC, SEM, TEM and SSNMR. Crystal morphologies were characterized using SEM by mounting glass slides supporting the CaCO3 particles on SEM stubs using adhesive conducting pads and coating with Pt/Pd. Imaging was performed using a LEO 1530 Gemini FEG-SEM operating at 3 kV or a FEI Nova NanoSEM 650. TEM was carried out using a FEI Tecnai TF20 FEG-TEM fitted with a high-angle annular dark-field detector and a Gatan Orius SC600A charge-coupled device camera, operating at 200 kV. Fluorescent confocal microscopy was performed using an inverted Olympus IX-70 wide-field microscope with 100 W mercury illumination epifluorescence and differential interference contrast optics. Individual crystal polymorphs and initial amorphous character were confirmed with infrared spectroscopy using a PerkinElmer attenuated total reflectance infrared spectrometer. Further confirmation of polymorphs and polymorphic transitions was obtained with PXRD using a Bruker D8 Advance diffractometer equipped with a Cu Kα1 X-ray source and internal heating stage. Samples were placed on a silicon wafer, and XRD data were collected at angles from 5° to 70° in intervals of 0.02°, with a scan rate of 1° min⁻¹. Polymorphic transitions were also followed using DSC (TA Instruments DSC Q200), with a heating rate of 10-25 °C min⁻¹ under a nitrogen flow rate of 100 ml min⁻¹. The compositions of the different ACC samples were investigated using TGA, where data were recorded using a TA Instruments SDT Q600, with a heating rate of 10-25 °C min⁻¹ under a 100 ml min⁻¹ N2 flow. SSNMR experiments were performed using standard methodologies with a Bruker 9.4 Tesla Avance-400 wide-bore spectrometer, at a frequency of 400.1 MHz (1H). One-dimensional data sets were acquired on samples spun at 10 kHz using MAS (1H π/2 pulse length 2.5 μs, contact time 2.5 ms, at a 1H field strength of 100 kHz), and a repetition time of 2 s was employed in all experiments. On occasions where there was insufficient material to fill a 4-mm outer diameter rotor, the unfilled volume was taken up with polytetrafluoroethylene tape.
The number of scans acquired depended on the quantity of available sample, and was generally between 256 and 512.
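To make the isoconversional estimate described above concrete, the following minimal Python sketch interpolates dα/dt at a fixed conversion from several isothermal runs and regresses ln(dα/dt) against 1/T; the slope gives -Ea/R. The α(t) traces, rate constants and temperatures are hypothetical placeholders, not the measured TGA data.

```python
# Minimal sketch of the isoconversional (model-free) activation-energy
# estimate described above: at a fixed conversion alpha, ln(d(alpha)/dt)
# plotted against 1/T is linear with slope -Ea/R. The alpha(t) traces
# below are hypothetical stand-ins for isothermal TGA data.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_at_alpha(t, alpha, alpha_star):
    """Interpolate d(alpha)/dt at the chosen conversion alpha_star."""
    dadt = np.gradient(alpha, t)               # numerical derivative of alpha(t)
    return np.interp(alpha_star, alpha, dadt)  # alpha must be monotonic

def activation_energy(runs, alpha_star):
    """runs: list of (T_kelvin, t_seconds, alpha_array) isothermal events."""
    inv_T = np.array([1.0 / T for T, _, _ in runs])
    ln_rate = np.array([np.log(rate_at_alpha(t, a, alpha_star))
                        for _, t, a in runs])
    slope, _ = np.polyfit(inv_T, ln_rate, 1)   # straight-line fit
    return -slope * R                          # Ea in J mol^-1

# Hypothetical isothermal dehydration traces (first-order decay shapes):
t = np.linspace(0, 6000, 300)                  # time, s
runs = [(T, t, 1.0 - np.exp(-k * t))
        for T, k in [(330.0, 2e-4), (340.0, 5e-4), (350.0, 1.2e-3)]]
print(f"Ea ≈ {activation_energy(runs, 0.5) / 1000:.1f} kJ/mol")
```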
Bone biochemical markers in acromegaly: An association with disease activity and gonadal status

Objective: We aim to demonstrate the effect of disease activity and gonadal status on bone biochemical parameters in patients with acromegaly. Methods: In this cross-sectional, case-control study, 73 patients with acromegaly and 64 healthy controls were included. The acromegaly and control groups, as well as the active/controlled acromegaly groups, were compared in terms of alkaline phosphatase (ALP), calcium, magnesium, phosphorus, parathormone (PTH), 25-OH vitamin D (25[OH]D), and C-terminal telopeptide of type 1 collagen (CTX). Patients with hypogonadism and normal gonadal status were also compared in terms of these parameters among patients with acromegaly. Results: Calcium, phosphorus, and CTX were increased in the acromegaly group compared to the control group (p=0.04, p=0.006, and p<0.001, respectively). Age, estimated glomerular filtration rate (eGFR), PTH, and 25(OH)D levels were similar in the acromegaly group and the control group. ALP, calcium, phosphorus, and CTX were increased in patients with active acromegaly compared to those in remission (p=0.03, p=0.001, p=0.03, and p=0.017, respectively). Age, eGFR, ALP, calcium, and CTX were increased in acromegalic patients with hypogonadism compared to those without hypogonadism (p<0.001, p=0.004, p=0.003, p=0.001, and p=0.009, respectively), while phosphorus, PTH, and 25(OH)D levels were similar between the two groups. Conclusion: Growth hormone (GH) and insulin-like growth factor 1 (IGF-1) levels, as well as concomitant hypogonadism, play an active role in calcium and CTX levels, while phosphorus levels are associated only with IGF-1 and GH rather than hypogonadism.

Acromegaly is a rare disease characterized by increased growth hormone (GH) and insulin-like growth factor 1 (IGF-1). Several comorbidities and metabolic complications are observed in patients with acromegaly due to the effect of increasing GH and IGF-1 levels [1,2]. GH plays an essential role in the growth, differentiation, and repair of bone and cartilage through direct effects and indirect effects via IGF-1 [3]. Despite these physiological effects of GH, increased GH adversely affects the balance between bone formation and resorption in patients with acromegaly [4,5]. Although data on the risk of osteoporosis and changes in bone mineral density appear to be contradictory, secondary osteoporosis associated with increased bone turnover has been shown in patients with active acromegaly [5]. Changes in bone mineral levels may also be observed in patients with acromegaly. Hypercalcemia, hyperphosphatemia, and hypercalciuria have been reported in patients with acromegaly [6,7]. In patients with acromegaly, although there are increases in 1,25-dihydroxyvitamin D (1,25[OH]2D) levels, data on 25(OH)D and parathormone (PTH) are contradictory [7-9]. As with any other complication in acromegalic patients, the change in bone mineral markers has been associated with disease activity, disease duration, concomitant hypogonadism, glucocorticoid overtreatment, and diabetes mellitus [5]. It is known that increased GH and IGF-1 levels play a role in all complications of acromegaly. Changes in bone mineral markers have been observed with the decrease in GH and IGF-1 levels achieved through medical and surgical treatment [5]. Hypogonadism may be observed in approximately 50% of patients with acromegaly.
Pituitary surgery, concomitant hyperprolactinemia, pituitary radiotherapy, and the mass effect of pituitary adenoma may lead to hypogonadism in these patients [10]. Sex steroids, which decline in the presence of hypogonadism, are known to change bone structure in both males and females. In this situation, hypogonadism is known to affect bone biochemical markers in acromegaly patients. We aim to demonstrate the effect of disease activity and gonadal status on bone biochemical parameters in patients with acromegaly in this study.

MATERIALS AND METHODS

This was a cross-sectional, case-controlled study. Seventy-three patients with acromegaly (46 females/27 males) and 64 healthy controls (41 females/23 males) with normal IGF-1 and GH were included in the study. Healthy controls were matched by age ±2 years and body mass index (BMI) ±2 kg/m² to the patients with acromegaly. Patients with chronic renal impairment (estimated glomerular filtration rate [eGFR] <60 mL/min/1.73 m²), chronic liver disease, hyperthyroidism, diabetes mellitus, or primary hyperparathyroidism were excluded from the study. The diagnosis of acromegaly was defined by the presence of typical clinical features, radiographic findings, non-suppressible GH levels, and a high IGF-1 level, according to the guideline [10]. Acromegaly disease activity at the last visit was evaluated according to "A Consensus on Criteria for Cure of Acromegaly, 2010" [11]. Active disease was defined as high IGF-1 levels according to the age-adjusted normal range and a random GH level >1 ng/ml (μg/l) or GH >0.4 ng/ml after the oral glucose load. The criteria for acromegaly remission were defined as an IGF-1 level in the age-adjusted normal range, a random GH level <1 ng/ml, or GH <0.4 ng/ml after the oral glucose load. Normal gonadal function was defined as regular menstrual periods and lack of estrogen deficiency in women, and testosterone levels in the normal range by age in men [12-14]. Patients receiving testosterone or estrogen replacement were considered to have normal gonadal function. BMI was evaluated in all study groups and calculated as weight (kg)/height (m²). Venous blood samples were drawn following overnight fasting. Serum glucose, creatinine, alanine aminotransferase (ALT), alkaline phosphatase (ALP), calcium, magnesium, and phosphorus were measured by the photometric method using a Beckman Coulter analyzer. PTH and 25(OH)D levels were measured by the chemiluminescence method using a Beckman Coulter analyzer. C-terminal telopeptide of type 1 collagen (CTX) was measured by the electrochemiluminescence method using a Roche Diagnostic autoanalyzer system. The serum IGF-1 level was measured using a solid-phase, enzyme-labeled chemiluminescent enzyme immunometric assay (IMMULITE 2000). Serum GH levels were measured using a two-site chemiluminescent immunometric assay (IMMULITE 2000) with a sensitivity of 0.01 μg/l. IGF-1 levels were corrected with respect to the upper limit of normal (ULN) using the formula "IGF-1 level/ULN". Accordingly, the IGF-1 level was considered normal in patients with IGF-1-ULN ≤1. Patients with acromegaly and healthy controls were compared in terms of age, BMI, gender, glucose, creatinine, eGFR, ALT, ALP, calcium, magnesium, phosphorus, PTH, 25(OH)D, and CTX.

Highlight key points
• Changes in bone mineral levels may be observed in patients with acromegaly.
• GH and IGF-1 levels and concomitant hypogonadism play an active role in the increase in calcium, ALP, and CTX levels in patients with acromegaly.
• IGF-1 and GH have a greater effect on phosphorus levels in patients with acromegaly than hypogonadism does.
Active and controlled acromegaly groups were compared in terms of similar parameters. Patients with hypogonadism and normal gonadal status in the acromegaly group were also compared. Correlation analyses were performed in the whole study group for ALP, calcium, magnesium, phosphorus, PTH, 25(OH)D, and CTX, which may be associated with GH and IGF-1 levels. The study was approved by the local ethics committee and was conducted according to the Declaration of Helsinki. Informed consent was obtained from all patients with acromegaly and healthy controls. Our study was approved by the ethics committee in March 2019, and the approval number was 201,956.

Statistical Analyses

Statistical analyses were performed using SPSS version 22.0. Categorical variables were described by frequency and percentage, and numerical variables by mean±standard deviation (SD). The normality of the distribution of the quantitative variables was assessed by the Kolmogorov-Smirnov test. For independent group comparisons, the one-way ANOVA test was performed for normally distributed numeric variables, and the Kruskal-Wallis test for non-normally distributed data. Categorical variables were compared using the Chi-square test. Spearman correlation analysis was used for correlation analysis. Statistically significant results were defined as p<0.05.

RESULTS

Age was similar in the acromegaly group and the healthy control group in this study (48.1±12.5 and 47.9±13.5 years, respectively). About 56% (n=41) of the patients with acromegaly had active disease (14 patients with newly diagnosed acromegaly and 27 patients with active disease under treatment), and 44% (n=32) of the patients were in remission (13 patients had been surgically cured and 19 patients had acromegaly that was well controlled medically). Clinical characteristics and laboratory data of the acromegaly and the control groups are presented in Tables 1 and 2. Age was more advanced in patients with hypogonadism than in patients without hypogonadism (54±11.8 and 43±11.2 years, respectively). Hypogonadism was present in 42% (n=31) of the patients diagnosed with acromegaly. Of those with hypogonadism (n=31), 26 were male and 5 were female. Seventeen female patients with hypogonadism were also postmenopausal. The group without hypogonadism (n=42) consisted of 20 female and 22 male patients. Clinical characteristics and laboratory data of the patients with and without hypogonadism are presented in Table 3. Table 1 shows the clinical characteristics and laboratory data of the acromegaly and the control groups. As expected, IGF-1-ULN and GH were increased in the group of patients with acromegaly (p<0.001 and p<0.001, respectively). Calcium, phosphorus, and CTX were increased in the acromegaly group compared to the control group (p=0.04, p=0.006, and p<0.001, respectively). On the other hand, magnesium and ALT levels were higher in the control group than in the acromegaly group (p<0.001 and p<0.001, respectively). Age, gender, BMI, creatinine, eGFR, PTH, and 25(OH)D levels were similar in the acromegaly group and the control group. Clinical characteristics and laboratory data of the patients with active acromegaly, those in remission, and the control group are presented in Table 2.
As expected, IGF-1-ULN and GH were increased in patients with active acromegaly compared with those in remission (p<0.001 and p<0.001, respectively). ALP, calcium, phosphorus, and CTX were increased in patients with active acromegaly compared to those in remission (p=0.03, p<0.001, p=0.03, and p=0.017, respectively). Age, gender, BMI, creatinine, eGFR, ALT, magnesium, PTH, and 25(OH)D levels were similar in patients with active acromegaly and those in remission. Furthermore, in subgroup analyses, ALP, calcium, phosphorus, PTH, 25(OH)D, and CTX levels were similar between the healthy control group and patients in remission. Clinical characteristics and laboratory data of acromegaly patients with and without hypogonadism are shown in Table 3; age, eGFR, ALP, calcium, and CTX were increased in patients with hypogonadism compared to those without (p<0.001, p=0.004, p=0.003, p=0.001, and p=0.009, respectively). The two groups were similar in terms of BMI, active disease/controlled disease (active disease %), creatinine, ALT, phosphorus, magnesium, PTH, and 25(OH)D levels. The correlations between serum IGF-1-ULN, GH, and laboratory parameters in all of the participants are given in Table 4. There were statistically significant positive correlations of GH with calcium, phosphorus, and CTX levels (r=0.257, p=0.005; r=0.461, p<0.001; and r=0.389, p<0.001, respectively), but a negative correlation with 25(OH)D levels (r=-0.211, p=0.026). There were no statistically significant correlations of GH with magnesium or PTH. There were statistically significant positive correlations of IGF-1-ULN with calcium, phosphorus, and CTX (r=0.321, p=0.002; r=0.444, p<0.001; and r=0.454, p<0.001, respectively), but a negative correlation with magnesium (r=-0.264, p=0.009). There were no statistically significant correlations of IGF-1-ULN with PTH or 25(OH)D levels.

DISCUSSION

This study investigated the association of bone biochemical markers with disease activity and concomitant hypogonadism in patients with acromegaly. As expected, serum calcium, phosphorus, and CTX levels were statistically significantly higher in patients with acromegaly compared to healthy controls. On the other hand, magnesium levels were lower in acromegaly patients than in healthy controls. Calcium, phosphorus, ALP, and CTX were higher in patients with active acromegaly compared to those in remission with surgical or medical treatment. In acromegalic patients with hypogonadism, a statistically significant increase was found in calcium, ALP, and CTX levels, while the other parameters were similar compared to acromegalic patients without hypogonadism. However, in patients with acromegaly, although the phosphorus level was found to increase with disease activity, it was similar in those with and without hypogonadism. PTH and 25(OH)D levels were similar between patients with active and controlled disease, and between patients with hypogonadism and normal gonadal status. The association of acromegaly with PTH and 25(OH)D levels remains unclear. In patients with acromegaly, no difference in 25(OH)D levels was reported in comparison to healthy subjects [15,16]. In the literature, increased PTH levels have been observed with long-term somatostatin analog treatment [17]. This has been explained by secondary hyperparathyroidism due to decreased calcium absorption in patients with acromegaly [17,18]. In the present study, 25(OH)D and PTH levels were similar between acromegaly patients and healthy controls, as well as between patients with active disease and patients with controlled disease.
Taken together, these results indicate that the acromegaly disease itself and its activity have no effect on PTH and 25(OH)D levels. While calcium and phosphorus levels were higher in patients with acromegaly than in healthy controls, as in the literature, we found similar PTH and 25(OH)D levels in these groups [15,16]. Hence, other factors must contribute to the changes in bone mineral markers in patients with acromegaly. This finding suggests that the increase in calcium and phosphorus levels may be associated with increased IGF-1 and GH, in addition to PTH. It is known that increased GH and IGF-1 levels affect bone turnover in patients with acromegaly [19]. Therefore, changes in bone structure and bone turnover markers are expected in acromegalic patients. Mild increases in calcium and phosphorus levels are common in patients with acromegaly. Increased extracellular volume is observed in patients with acromegaly, and the tubular defect caused by this increase is thought to result in altered excretion of calcium and phosphorus [20,21]. In patients with acromegaly, GH is thought to increase hypercalciuria and hyperphosphatemia, also in the presence of increased bone turnover due to the effect of IGF-1 [17]. Increased serum phosphate levels have also been achieved with GH treatment in patients with GH deficiency, through altered renal phosphate handling [22]. This suggests an effect of GH on phosphorus levels. Although increases in calcium levels are expected in patients with acromegaly, severe hypercalcemia requiring treatment may also develop due to increased 1,25(OH)2D levels or concomitant primary hyperparathyroidism. Treatment of acromegaly has been shown to decrease hypercalcemia [7,23]. Furthermore, exogenous GH therapy has been shown to increase 1,25(OH)2D levels in patients with GH deficiency [24]. However, some studies have reported decreased calcium and phosphorus with acromegaly treatment without any change in 1,25(OH)2D levels [25]. Changes in calcium and phosphorus levels have been associated with increased GH and IGF-1 in acromegalic patients, and higher levels have been reported in active patients compared to those with controlled disease [26]. In our study, calcium, phosphorus, IGF-1-ULN, and GH levels were higher in patients with active acromegaly compared to those with disease activity controlled by medical or surgical treatment. We also found that IGF-1-ULN was higher in patients with controlled acromegaly compared to the healthy controls, while GH levels were similar between these groups. ALP, calcium, phosphorus, PTH, 25(OH)D, and CTX levels were also similar between these groups. These findings suggest that increased IGF-1, as well as increased GH, had a primary effect on the increased bone biochemical markers. In the correlation analyses performed in the whole study group, we identified positive correlations of calcium and phosphorus levels with IGF-1 and GH. CTX is a bone turnover marker released during bone resorption [27]. This parameter is used to monitor early response to osteoporosis treatment, before bone mineral density evaluation. Studies have shown increased CTX levels in patients with acromegaly [28]. Consistent with the literature, CTX levels were higher in acromegaly patients than in healthy controls in the present study. Studies in patients with active acromegaly have reported a significant decrease in CTX levels in patients who achieve remission with surgical or medical treatment [16,29].
In our study, CTX levels were higher in patients with active acromegaly than in patients in remission. Considering the active role GH and IGF-1 play in bone structure changes, the correlation analysis in the present study revealed that CTX levels were correlated with IGF-1 and GH levels. Similar to our study, other studies in the literature report a correlation between CTX and GH and IGF-1 levels [30]. Hypogonadism is observed in approximately 50% of patients with acromegaly. While often reversible, it may develop due to concomitant hyperprolactinemia, the effect of a pituitary mass, or pituitary surgery [10]. Hypogonadism was detected in 42% of the patients included in the present study. Calcium, ALP, and CTX levels were higher in patients with hypogonadism compared to those without hypogonadism. On the other hand, 25(OH)D, parathyroid hormone, and phosphorus levels were similar. The vast majority of patients in the acromegaly group with hypogonadism were those in remission. Hence, we believe that hypogonadism alone contributes to the increase in calcium, CTX, and ALP levels. Consistent with our study, there are studies in the literature that report increased calcium and ALP without any difference in phosphorus levels in acromegalic women with hypogonadism [31]. As mentioned before, increased phosphorus has been associated with increased GH and IGF-1 in patients with acromegaly. In the present study, phosphorus levels were similar in patients with and without hypogonadism, and GH and IGF-1 levels were also similar between these two groups. All these findings suggest that changes in phosphorus levels may be more associated with increased GH and IGF-1 than with hypogonadism. Our study had certain limitations. The study design was retrospective, and we were unable to evaluate the duration of hypogonadism. The patients with hypogonadism were a heterogeneous group consisting of males, postmenopausal women, and premenopausal women. Patients with hypogonadism were older than those without hypogonadism. It is known that bone mineral metabolism changes in the elderly due to dietary changes, restricted physical activity, and increased vitamin D deficiency. Higher PTH levels are observed in the elderly compared to younger individuals [32,33]. These factors might have affected our results in patients with hypogonadism.

Conclusion

This study revealed that GH and IGF-1 levels, as well as concomitant hypogonadism, play an active role in the increase in calcium, ALP, and CTX levels in patients with acromegaly. However, IGF-1 and GH have a greater effect on phosphorus levels in patients with acromegaly than hypogonadism does.
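As a concrete illustration of the analysis pipeline described in the Methods above (IGF-1 normalized by its age-adjusted upper limit of normal, a normality check guiding the choice of test, and Spearman correlation), here is a minimal Python sketch. All values, group sizes and distributions are hypothetical placeholders, not the study's data; only the IGF-1-ULN cut-off of 1 comes from the text.

```python
# Minimal sketch of the analysis pipeline described in the Methods:
# IGF-1 values are normalized by the age-adjusted upper limit of normal
# (IGF-1-ULN = IGF-1 / ULN; normal when <= 1), then associations with a
# bone marker are tested with Spearman correlation. All numbers below are
# hypothetical illustrations, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 73
igf1 = rng.normal(400, 150, n)        # ng/mL, hypothetical
uln = rng.normal(250, 30, n)          # age-adjusted ULN, hypothetical
ctx = 0.3 + 0.001 * igf1 + rng.normal(0, 0.1, n)  # ng/mL, hypothetical

igf1_uln = igf1 / uln                 # the "IGF-1/ULN" correction
active = igf1_uln > 1.0               # IGF-1-ULN <= 1 counts as normal

# A normality check guides the choice of parametric vs nonparametric test.
if stats.kstest(stats.zscore(ctx), "norm").pvalue > 0.05:
    test = stats.f_oneway(ctx[active], ctx[~active])   # one-way ANOVA
else:
    test = stats.kruskal(ctx[active], ctx[~active])    # Kruskal-Wallis

rho, p = stats.spearmanr(igf1_uln, ctx)
print(f"group test p={test.pvalue:.3f}; Spearman rho={rho:.2f}, p={p:.3g}")
```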
In vitro antimycobacterial studies of flavonols from Bauhinia vahlii Wight and Arn.

Mycobacterial infections and fast-growing strains are increasing globally, with 8 million new cases and 1.8 million fatalities per annum worldwide. The acid-fast bacterium Mycobacterium tuberculosis (M.t) can spread diseases like tuberculosis (Tb) and weaken the immune system. In Ayurveda, the Bauhinia genus is most valued for the treatment of tuberculosis lymphadenitis. The objective of the present study is to identify anti-tubercular compounds from the under-investigated medicinal plant B. vahlii Wight and Arn. using bioassay-guided isolation. The antimycobacterial activity was evaluated against non-virulent strains: Mycobacterium tuberculosis H37Ra (ATCC 25177) and Mycobacterium bovis BCG (ATCC 35743). Antibacterial and cytotoxicity activities were also tested to identify the specificity of the isolated metabolites. Bioassay-guided isolation yielded three known flavonols, namely quercetin (1), ombuin (2), and kaempferol (3), from the methanolic extract of the bark of B. vahlii. The results of the antimycobacterial activity tests revealed that 2 showed much better mycobactericidal activity than 1 and 3 under ex vivo conditions, with minimum inhibitory concentration (MIC) values ranging from 0.05 ± 0.01 to 0.26 ± 0.01 nM and half-maximal inhibitory concentration (IC50) values ranging from 2.85 ± 0.14 to 7.21 ± 1.09 nM against dormant and active forms, respectively. Compound 2 also showed higher resistance, with MIC values >100 μg/mL against both Gram-positive and Gram-negative bacteria, and the least cytotoxicity up to 100 μg/mL concentration against the tested series of cancer cell lines. The results support the Ayurvedic use of extracts of the Bauhinia genus for treating tuberculosis, and the key bioactive compounds were found to be flavonols (1-3). The present work provides the first evidence for the presence of antimycobacterial compounds in B. vahlii. Supplementary Information: The online version contains supplementary material available at 10.1007/s13205-021-02672-4.

Introduction

The acid-fast bacillus Mycobacterium tuberculosis (M.t) is a major threat to human health. The acid-fast bacterium can spread diseases like tuberculosis (Tb) and weakens the immune system (Behr et al. 1999). Depending on the illness, Tb requires drug therapy with antibiotics (commonly rifampicin and isoniazid) for at least 1-6 months (Global Tuberculosis Report 2020), and the side effects of many anti-Tb drugs are severe. The treatment also often leads to the emergence of extensively drug-resistant and multidrug-resistant strains of M.t (Gagneux 2006; Paidi et al. 2017). According to the World Health Organization (WHO) Global Tb Programme Report 2020, the COVID-19 pandemic has had a negative impact on care services and the detection of Tb cases globally, and the mortality rate of Tb patients has also increased (Glaziou 2020). Therefore, Tb has been declared a global emergency by the WHO (Glaziou 2020; Rakotonirina et al. 2009). Hence, in search of alternative antimycobacterial agents, we concentrated our efforts on screening natural sources like B. vahlii Wight and Arn. (family: Fabaceae) against M.t strains and isolating the marker compounds. In Ayurveda, the bark, flowers, and roots of the genus Bauhinia are most valued for the treatment of scrofula, tuberculosis lymphadenitis, worm infestation, and wounds.
The broad spectrum of biological activities of this genus is mainly due to the presence of flavonols, flavanones, bibenzyls, triterpenes, flavonol glycosides, saponins, and phenanthraquinones. The antimycobacterial activity of some flavanones and bibenzyls from Bauhinia purpurea has already been reported (Boonphong et al. 2007). With this background, we present the results of bioassay-guided isolation and characterization of flavonols from B. vahlii and their antimycobacterial activity under both in vitro and ex vivo conditions.

Plant material

Bark of B. vahlii Wight and Arn. was collected at Seshachalam hills (Tirupati), India, in March 2019, and a voucher specimen (DB-SVU-2019-3478) was deposited at the Department of Botany, Sri Venkateswara University, Tirupati, India.

Extraction and bioassay-guided isolation

Dried bark (about 250 g) was powdered and extracted by the maceration method (Tatipamula et al. 2020) using 90% methanol (3 × 500 mL × 7 days) at 25 °C. The extracts were combined and evaporated under reduced pressure using a rotary evaporator (Shimadzu QR 2005-S, Japan) to obtain the crude methanol extract of the bark of B. vahlii (ME, 4.5 g) as a dark black solid. ME was subsequently screened and found to be effective against M. tuberculosis (M.t) H37Ra (ATCC 25177) (Table 1).

Results and discussion

To identify the bioactive antimycobacterial compounds, ME was fractionated by column chromatography into seven fractions, F1-F7. These were then screened for inhibitory action against M.t H37Ra. In the preliminary screening, ~90% inhibition was observed only for ME, F4, and F6; the others showed <28% inhibition against the M.t H37Ra strain (Table 1). Further purification of F4 and F6 yielded compounds 1-3. By elemental and spectral analysis, these compounds were identified as quercetin (1), ombuin (2), and kaempferol (3) (Fig. 1). Compounds 1 and 2, obtained from F4 and F6, respectively, showed profound mycobactericidal strength, inhibiting ~90% of mycobacterial growth. Based on the primary screening, the active compounds 1 and 2 were further evaluated at a concentration range of 0.03-30 μg/mL to determine their IC50 and MIC values against dormant (12 days' incubation) and active (8 days' incubation) forms of M.t under both in vitro (M.t and M. bovis BCG) and ex vivo (M.t) conditions using the XRMA and NR assays. The results revealed strong antimycobacterial activity of compounds 1 and 2 (Fig. 2). Compound 1 was found to be extremely effective in inhibiting both active and dormant forms of M. bovis BCG (in vitro) and M.t (ex vivo), with MIC values ranging from 3.21 ± 0.09 to 10.26 ± 0.73 nM (Fig. 2e, f) and IC50 values ranging from 0.13 ± 0.01 to 4.90 ± 0.60 nM (Fig. 2b, c). A higher concentration of compound 1 was required for complete inhibition of both active and dormant forms of M.t (in vitro) (Fig. 2a, d). Alternatively, compound 2 showed much better mycobactericidal activity under ex vivo conditions, with MIC values of 0.05 ± 0.01 and 0.26 ± 0.01 nM (Fig. 2f) and IC50 values of 2.85 ± 0.14 and 7.21 ± 1.09 nM against dormant and active forms, respectively (Fig. 2c). Compound 2 exhibited the least activity against the dormant stage of M.t (in vitro), with a MIC value of ~300 nM (Fig. 2d) and an IC50 value of 15.86 ± 1.10 nM (Fig. 2a). However, the IC50 value of compound 2 in the dormant stage of M.t (in vitro) was lower than in its active stage.
Taken together, the overall antimycobacterial activity exhibited by compounds 1 and 2 was significant, although their potencies were inferior to that of rifampicin. To check the antimicrobial specificity of compounds 1-3, we tested them against two Gram-negative bacteria (S. typhi and E. coli) and two Gram-positive bacteria (S. aureus and B. subtilis). Compounds 2 and 3 showed higher resistance, with MIC values >100 μg/mL against both Gram-positive and Gram-negative bacteria (Table 2). Moreover, the MIC values of compound 1 against the tested bacterial strains ranged from 69.15 ± 1.30 to 151.20 ± 5.10 nM (Table 2). This indicates that compound 2, isolated from ME, has greater specificity toward mycobacteria than 1. Furthermore, compounds 1-3 were tested against the MDA-MB-231, HT-3, and SKOV3 human cancer cell lines for their cytotoxicity. Fig. S1 presents the percentage cytotoxicity and IC50 values obtained against the MDA-MB-231, HT-3, and SKOV3 cancer cell lines in the MTT cell proliferation assay. At 100 µg/mL concentration, compounds 1 and 3 exhibited ~44 and ~53% inhibition of MDA-MB-231 and HT-3, respectively, suggesting the biocompatible nature of these compounds. Compound 2 showed the least cytotoxicity up to 100 μg/mL concentration against the tested series of cancer cell lines. The tested concentration was nearly ten times greater than the detected MIC values for M.t under ex vivo conditions. Overall, the cytotoxicity results suggest the biocompatible nature of compounds 1-3.

[Table 1 footnote: values against M. tuberculosis H37Ra are mean ± SD (n = 3); statistical significance determined by t test, where ***p < 0.0001 is statistically significant compared to rifampicin. IC50 and MIC values are the lowest concentrations of samples exhibiting percentage growth inhibition of 50% and ≥90%, respectively, relative to the growth control.]

[Table 2: In vitro antibacterial activity of compounds 1-3 isolated from the methanolic extract of bark of B. vahlii. Values are expressed as nM (mean ± SD, n = 3); "-" indicates not active up to 100 μg/mL concentration; half-maximal inhibitory concentration (IC50) and minimum inhibitory concentration (MIC) values are the lowest concentrations of samples exhibiting percentage growth inhibition of 50% and ≥90%, respectively, relative to the growth control.]

To conclude, the present work provides the first evidence for the presence of antimycobacterial compounds in B. vahlii. Hence, we report the bioassay-guided isolation of the marker compounds 1 and 2 from the bark of B. vahlii, possessing significant inhibitory actions against non-virulent M.t strains (both in vitro and ex vivo). This study supports the prediction of the predominant binding mode(s) of compounds 1 and 2 within M.t proteins (such as DprE1), which helps to characterize the ligand-protein interactions and establish a structural basis for the inhibition of M.t strains.
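The IC50 and MIC definitions quoted in the table notes above (the lowest tested concentrations giving ≥50% and ≥90% growth inhibition relative to the growth control) translate directly into a few lines of code. The following Python sketch reads both values off a dose-response series; the concentrations and optical-density readings are hypothetical placeholders, not the reported measurements.

```python
# Minimal sketch of reading IC50 and MIC off a dose-response series, using
# the definitions quoted above: IC50 and MIC are the lowest tested
# concentrations giving >=50% and >=90% growth inhibition relative to the
# growth control. Concentrations and OD readings are hypothetical.
import numpy as np

conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # ug/mL, ascending
od_sample = np.array([0.92, 0.80, 0.55, 0.31, 0.12, 0.05, 0.03])
od_growth_ctrl, od_blank = 1.00, 0.02

inhibition = 100 * (1 - (od_sample - od_blank) / (od_growth_ctrl - od_blank))

def lowest_conc_reaching(threshold):
    """Lowest tested concentration whose inhibition meets the threshold."""
    hits = np.nonzero(inhibition >= threshold)[0]
    return conc[hits[0]] if hits.size else None

print("IC50 =", lowest_conc_reaching(50), "ug/mL")
print("MIC  =", lowest_conc_reaching(90), "ug/mL")
```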
Spectroscopic properties of CrOx/Al2O3 nanopowders synthesized by cw CO2 laser vaporization

Nanosized 5.0 wt% Cr/nano-Al2O3 powders with a particle size of ca. 15 nm were synthesized via laser vaporization using irradiation by a cw CO2 laser in different gas atmospheres (Ar, Ar+O2, Ar+H2). All the investigated nanopowders were studied by XRF, XRD, TEM, UV-vis DRS and PL spectroscopy methods. The nanopowders were found to contain Cr6+ ions located on the surface of the Al2O3 nanoparticles and two types of Cr3+ sites. One type is Cr3+ ions located in the bulk (Cr3+ b-sites) of the Al2O3 matrix in a strong crystal field. The second type is represented by Cr3+ sites residing near the surface (Cr3+ s-sites) of the CrOx/Al2O3 nanoparticles in a weak crystal field. It was shown that varying the composition of the buffer gas (Ar, Ar+O2, Ar+H2) during laser vaporization makes it possible to control the properties of the obtained 5.0 wt% Cr/nano-Al2O3 nanopowders, with a change in the Cr6+/Cr3+ ratio in the bulk and on the surface of the alumina support.

Introduction

Today, the development of methods for the synthesis of nanoscale materials has greatly increased interest in them, since the transition from coarse-crystalline materials to nanoscale analogs is often accompanied by a significant change in the physicochemical properties of substances. It was found that the developed surface of individual nanocrystallites affects their bulk structure and thereby contributes to their properties [1,2]. One of the leading approaches in the synthesis of nanomaterials is the "bottom-up" approach. It consists of the stepwise formation of nanomaterials according to the sequence "atoms → clusters → nanoparticles" [2]. Striking examples of this approach are laser synthesis methods, which are based on physical vapor deposition (PVD). Laser methods for the synthesis of oxide nanoparticles are represented mainly by laser vaporization in the gas phase (LAVA and co-LAVA) [3,4] and pulsed laser ablation (PLA) in a liquid [5,6]. Laser synthesis methods make it possible to produce nanoparticles by evaporation of starting materials under laser irradiation and subsequent condensation of the vapor of the evaporated substance in a gas or liquid, the so-called buffer gas or liquid [7-10]. By varying several parameters of the laser evaporation process, such as the laser radiation power, the composition of the buffer gas in the evaporation chamber, and the gas pressure, it becomes possible to control the physicochemical properties of the resulting nanoparticles. Thus, the main advantages of laser synthesis methods are high purity, in contrast to, for example, chemical methods, monodispersity of the obtained samples, and the ability to control the size of nanoparticles in the range from 1 to tens of nanometers [11-13]. Laser synthesis methods are actively used to obtain simple oxides, for example Al2O3, ZrO2, TiO2, Y2O3 and SiO2, as well as multicomponent compounds such as YSZ, YAG, Eu:SrAl2O4, etc. [12,14-16]. In the present work, we investigated the physicochemical properties of nanosized CrOx/Al2O3 systems synthesized by cw CO2 laser vaporization in different gas atmospheres (Ar, O2, H2). The samples under consideration are of interest as promising heterogeneous catalysts for the isobutane dehydrogenation reaction. A characteristic feature of CrOx/Al2O3 catalysts is a wide variety of species of supported chromium oxide particles [17,18].
This is largely due to the peculiarities of the catalyst preparation, as well as the significant influence of the chemical nature of the initial support on the properties of the chromium oxides finally deposited on the Al2O3 surface. This influence manifests itself in a strong interaction of the supported chromium oxide with the alumina support surface, as a result of which chromium ions are stabilized in different oxidation states and different coordination. There are no data in the literature on the formation of various chromium sites in nanosized CrOx/Al2O3 systems prepared by laser evaporation, except for the work of the authors of this paper [19]. The use of various gas atmospheres (oxidative and reductive) during laser synthesis of CrOx/Al2O3 catalysts should lead to the emergence of different charge states of chromium and a change in their ratio, which, as a result, will make it possible to synthesize chromia-alumina catalysts with a higher concentration of the active component. The main aim of the work is the study of the various charge states of chromium in 5.0 wt% Cr/nano-Al2O3 powders synthesized by cw CO2 laser vaporization, and the establishment of the effect of the atmosphere (Ar, O2, H2) during vaporization on the ratio of the main charge states of chromium, Cr3+/Cr6+, in the investigated nanopowders. This paper is a consistent continuation of our work on the study of different Crn+ species in chromia-alumina systems prepared by different methods: sol-gel [20] and laser synthesis [19].

Preparation of samples

The Cr:α-Al2O3 ceramic targets for further laser vaporization, in the form of pellets (diameter 18 mm, thickness 10 mm, target density 1.8 g/cm³), were prepared using a highly dispersed γ-Al2O3 powder (99%) as a starting material. γ-Al2O3 was obtained by calcination of pseudoboehmite (γ-AlOOH·0.37H2O) for 4 h at a temperature of 550 °C. To increase the Cr concentration in the targets, the γ-Al2O3 powder was modified before pelletization using chromium nitrate Cr(NO3)3·9H2O (99.7%) via incipient wetness impregnation with an aqueous solution of the nitrate salt at a chromium concentration of 5.0 wt%. The Cr:γ-Al2O3 powder was then loaded in a vacuum press mold and pelletized at a force of ca. 13-15 t. After that, the Cr:γ-Al2O3 pellets were calcined in a crucible at 1250 °C for 4 h until Cr:α-Al2O3 was formed. According to XRF data, the Cr concentration in the Cr:α-Al2O3 targets was 4.98±0.04 wt%. The 5.0 wt% Cr/nano-Al2O3 nanopowders were obtained by laser vaporization of the Cr:α-Al2O3 ceramic targets irradiated by a cw CO2 laser (radiation wavelength 10.6 μm, generation power up to 110 W on a single TEM00 transverse mode, output beam diameter 8 mm, far-field divergence 3×10⁻³ rad), with subsequent condensation of the vapor in a buffer gas flow in a vaporization chamber. Samples of nanopowders were synthesized in argon (99.998%) and in argon supplemented with oxygen (99.7%) or hydrogen (99.999%) at concentrations of 20 and 30 vol%, respectively. The laser power on the target surface was 103 W (power density 5.5×10⁴ W/cm²). On exposure to the laser radiation the target surface was heated and the target material evaporated, with subsequent vapor condensation in the low-temperature zone of the chamber. A gas-dust flow carrying the 5.0 wt% Cr/nano-Al2O3 particles was then passed through a filter, where the nanoparticles settled and were collected for further investigation.
Throughout the laser synthesis procedure, the Ar pressure in the vaporization chamber was 0.1 atm. A detailed description of the experimental setup used for the laser vaporization of nanomaterials of different chemical composition can be found in Refs. [12,21].

Characterization of samples

The chemical composition of the investigated samples was controlled by X-ray fluorescence (XRF) analysis on an ARL Advant'x analyzer. X-ray diffraction (XRD) and transmission electron microscopy (TEM) data were obtained for the 5.0 wt% Cr/nano-Al2O3 nanopowder synthesized in the Ar atmosphere. The XRD pattern was obtained on a Bruker D8 Advance diffractometer using Cu Kα radiation (λ = 0.15418 nm). Measurements were performed in the 2θ range of 10-70° with a step of 0.05° and an acquisition time of 3 s. Phases were identified by comparing the experimental diffraction patterns with the data of the ICDD PDF-2 database. TEM images were collected on a JEM-2010 electron microscope at an accelerating voltage of 200 kV and a resolution of 1.4 Å. For the TEM measurements, all the samples were deposited on a copper grid by dispersing a solid-phase suspension in alcohol using an ultrasonic disperser. Photoluminescence (PL) and photoluminescence excitation (PLE) spectra were measured on a Cary Eclipse (Varian) fluorescence spectrophotometer with a Xe lamp as the excitation source. UV-vis DRS measurements were carried out on a UV 2501 PC (Shimadzu) spectrophotometer with an ISR 240A diffuse reflectance attachment. For the PL, PLE, and UV-vis DRS experiments, the nanopowders under consideration were placed in quartz cuvettes.

Results and Discussion

According to XRF data, the Cr concentration in the studied nanopowders was 4.8±0.05 wt%. Figure 1 shows photographs of the initial Cr:α-Al2O3 ceramic target and the final 5.0 wt% Cr/nano-Al2O3 nanopowders synthesized in the different gas atmospheres (Ar, Ar+O2, Ar+H2). As seen in Fig. 1, depending on the composition of the atmosphere during vaporization, the resulting powders have different colors. TEM images demonstrate that the studied sample consists of faceted, spherically symmetric nanoparticles with an average size of dm = 15 nm. It should be noted that the XRD and TEM data for the other nanopowders studied in this work are similar to the data for the 5.0 wt% Cr/nano-Al2O3 synthesized in Ar. Figure 3 shows the UV-vis DRS, PL, and PLE spectra of the investigated 5.0 wt% Cr/nano-Al2O3 nanopowders. The obtained UV-vis DRS spectra demonstrate the presence of four bands with maxima at 16500, 22500, 27100, and 36600 cm⁻¹ in all studied samples. The PL spectra in each case demonstrate one broad band with a maximum at ~14300 cm⁻¹. The PLE spectra contain two broad bands with maxima at ~17700 and 23400 cm⁻¹, respectively. Based on the analysis of the spectroscopic data, it was concluded that in the studied 5.0 wt% Cr/nano-Al2O3 nanopowders chromium is stabilized mainly in the charge states Cr6+ and Cr3+ in different coordination. Thus, according to the UV-vis DRS data (see Fig. 3a), the absorption bands at 16500 and 22500 cm⁻¹ correspond to d-d transitions in Cr3+ ions located in octahedral oxygen coordination in the matrix of nanosized Al2O3 (the 4A2g → 4T2g and 4A2g → 4T1g electron transitions, respectively) [22,23]. The bands at 27100 and 36600 cm⁻¹ in the UV-vis DRS spectra correspond to ligand-to-metal charge-transfer (CT) bands of Cr6+ ions in tetrahedral coordination of oxygen ions [23].
An increase in the intensity and a change in the ratio of the corresponding bands at 27100 and 36600 cm⁻¹ in the UV-vis DRS spectra of the sample obtained by laser vaporization in the presence of O2 indicate an increase in the concentration of water-soluble forms of Cr6+ in tetrahedral coordination. The variety of chromium compounds and the structural defectiveness of the obtained nanopowders are most likely responsible for the differences in color. Chromium ions in the trivalent state that are located in the bulk of the alumina matrix exhibit intense luminescence in the red spectral region [24-27]. Thus, the band at ~14300 cm⁻¹ in the PL spectra (Fig. 3b) is identified with the radiative d-d transition 2Eg → 4A2g in octahedrally coordinated Cr3+ ions in the Al2O3 lattice. The broad bands in the PLE spectra correspond to the 4A2g → 4T2g, 4T1g electron transitions of Cr3+ ions in octahedral coordination [19]. Earlier, in xCr/Al2O3 (sol-gel synthesis; [x] = 0.25, 0.5, 1.0 wt%) and xCr/nano-Al2O3 (laser synthesis; [x] = 0.0, 0.05, 0.5, 1.0, 2.5, 5.0 wt%) samples, along with the emission of Cr3+ ions (2Eg → 4A2g electron transition) located in the bulk of the Al2O3 lattice, we revealed the PL of Cr3+ sites located in more disordered structural positions, situated in the subsurface layers of the Al2O3 nanocrystallites [19,20]. This PL corresponds to the 4T2 → 4A2 electron transition in Cr3+ ions that are in a weak crystal field, where Dq/B ≤ 2. The 4T2 → 4A2 electron transition becomes possible when the octahedron in which the Cr3+ ion is located undergoes significant distortion, leading to the 4T2 level lying below the 2Eg level [28,29]. Figure 4 shows the PL spectra (λex = 532 nm, 18800 cm⁻¹) of the 5.0 wt% Cr/nano-Al2O3 nanopowders deconvolved into Gaussian components. Deconvolution of the PL spectra into Gaussian components revealed two types of Cr3+ sites in all the studied samples. The first type is Cr3+ ions located in the bulk (Cr3+b) of the Al2O3 matrix, in a strong crystal field. The Cr3+b sites correspond to the PL band with a maximum at ~14400 cm⁻¹ (the 2Eg → 4A2g electron transition). Cr3+ ions in nonequivalent positions corresponding to a weak crystal field (the 4T2 → 4A2 electron transition) are responsible for the shoulders at ~12500, 13600, and 14100 cm⁻¹ for 5.0 wt% Cr/nano-Al2O3 (Ar, Ar+O2) and at ~12700, 13300, and 14200 cm⁻¹ for 5.0 wt% Cr/nano-Al2O3 (Ar+H2). These are Cr3+ sites located in the subsurface (Cr3+s) layers of the Al2O3 nanocrystallites.

Conclusion

Nanostructured Cr/nano-Al2O3 powders with a Cr concentration of 5.0 wt% were synthesized via laser vaporization using radiation from a cw CO2 laser in flowing argon and in Ar with the addition of O2 or H2. Laser-synthesized Cr/nano-Al2O3 nanopowders can be used as promising catalysts for isobutane dehydrogenation. As follows from the TEM images, the investigated nanopowders consist of faceted, spherically symmetric nanoparticles with an average size of dm = 15 nm. XRD data demonstrate that, in terms of phase composition, the samples correspond predominantly to low-temperature γ-Al2O3 with the beginning of the transition to high-temperature δ-Al2O3. The electronic states of the various chromium ions were investigated by means of UV-vis DRS and PL spectroscopy.
It was shown that the Cr ions in the 5.0 wt% Cr/nano-Al2O3 samples are stabilized predominantly in the Cr3+ and Cr6+ states, in octahedral and tetrahedral coordination, respectively. The PL properties of all the studied nanopowders are caused by the luminescence of octahedrally coordinated Cr3+ ions in the Al2O3 matrix. The analysis of the acquired data allowed us to separate the observed PL into the luminescence of Cr3+ sites located in the bulk of the Al2O3 lattice (Cr3+b sites; the case of a strong crystal field) and Cr3+ sites located near the surface of the Al2O3 nanocrystallites (Cr3+s sites; the case of a weak crystal field). Varying the composition of the buffer gas (Ar, Ar+O2, Ar+H2) during laser vaporization makes it possible to control the properties of the obtained 5.0 wt% Cr/nano-Al2O3 nanopowders, with a change in the Cr6+/Cr3+ ratio in the bulk and on the surface. This is particularly important for catalytic studies on the dehydrogenation of alkanes using laser-vaporized CrOx/Al2O3 nanosized powders as novel catalytic systems.
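To illustrate the kind of Gaussian deconvolution applied to the PL spectra above, the following minimal Python sketch fits a sum of Gaussians to a synthetic band with scipy.optimize.curve_fit. The component positions loosely echo the reported maxima (a main band near 14400 cm⁻¹ plus lower-energy shoulders), but the spectrum, noise level and starting guesses are hypothetical placeholders, not the measured data.

```python
# Minimal sketch of deconvolving a PL band into Gaussian components with
# scipy.optimize.curve_fit. The synthetic spectrum below mimics a main
# band near 14400 cm^-1 plus weaker low-energy shoulders; all values are
# hypothetical stand-ins for the measured spectra.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of Gaussians; p = (amp1, center1, sigma1, amp2, ...)."""
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

x = np.linspace(11500, 16000, 600)              # wavenumber, cm^-1
true = [1.0, 14400, 180, 0.35, 13600, 300, 0.2, 12500, 350]
y = gaussians(x, *true) + np.random.default_rng(1).normal(0, 0.01, x.size)

p0 = [0.8, 14350, 200, 0.3, 13700, 250, 0.15, 12600, 300]  # initial guess
popt, _ = curve_fit(gaussians, x, y, p0=p0)     # p0 fixes the parameter count
for a, c, w in zip(popt[0::3], popt[1::3], popt[2::3]):
    print(f"component: center {c:.0f} cm^-1, amplitude {a:.2f}, sigma {w:.0f}")
```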
Preventive Effect of Cichorium intybus L. Two Extracts on Cerulein-induced Acute Pancreatitis in Mice

Objectives: Acute pancreatitis is an inflammatory condition of the pancreas characterized by sudden onset, a high mortality rate, and multiple organ failure. It has been shown that oxygen free radicals have an important role in the development of pancreatitis and its complications. The antioxidant, anti-inflammatory, anti-hepatotoxicity and gastroprotective properties of Cichorium intybus L. suggest that this plant may have beneficial effects in the management of acute pancreatitis. Methods: Five intraperitoneal (i.p.) injections of cerulein (50 μg/kg at 1-h intervals) in mice resulted in acute pancreatitis, which was characterized by edema and neutrophil infiltration, as well as increases in the serum levels of amylase and lipase in comparison to normal mice. Different doses of C. intybus root (CRE) and aerial parts hydroalcoholic extract (CAPE) were administered orally (50, 100, 200 mg/kg) and intraperitoneally (50, 100, 200 mg/kg) 1.0 and 0.5 h, respectively, before pancreatitis induction in separate groups of male mice (n=6). Control groups were treated similarly with normal saline (5 ml/kg). Results: Both extracts at the higher test doses (100 mg/kg and 200 mg/kg, i.p.) effectively decreased amylase (23-36%) and lipase (27-35%) levels. By the oral route, the dose of 200 mg/kg produced a significant decrease in amylase (16%) and lipase (24%) activity, while only the greatest dose (200 mg/kg, i.p.) was effective in diminishing inflammatory features like edema and leukocyte infiltration in pancreatitis tissue (p<0.01). Vacuolization was not significantly reduced in the extract-treated groups. Conclusions: These data suggest that C. intybus hydroalcoholic extracts were effective in protecting against experimental acute pancreatitis; the efficacy was partly dose-dependent and was more significant after parenteral administration.

INTRODUCTION

Acute pancreatitis is a sudden inflammation of the pancreas with high mortality and limited specific therapy [1]. Circulatory shock, cardiac insufficiency, and renal, respiratory and hepatic failure are the most important causes of death [2]. Alcoholic beverage drinking and biliary tract disorders are the most common etiologies of pancreatitis. Viral infections such as mumps and hepatitis types A and B, drugs such as tetracyclines, furosemide and estrogens, as well as hypertriglyceridemia and hypercalcemia, are the other etiologic factors for acute pancreatitis [3]. It has been shown that activation of intracellular digestive enzymes and auto-digestion of the pancreas induce local and systemic injuries as well as organ failure [4]. Oxidant species such as hydrogen peroxide (H2O2), superoxide and hydroxyl radicals have been shown to be involved in the pathophysiology of acute pancreatitis, where oxygen free radicals and lipid peroxidation play important roles in the development of pancreatic inflammation [5]. It is believed that many factors are involved in the progression of this illness from acinar cell injuries to a fatal systemic reaction, such as activated leukocytes and the release of cytokine and chemokine mediators like interleukin-1, interleukin-6 and tumor necrosis factor (TNF-α) [6,7]. So the use of drugs with antioxidant and anti-inflammatory properties could be proposed as a potential therapeutic intervention in acute pancreatitis [8]. Chicory (Cichorium intybus L.)
(Asteraceae), an important traditional remedy, is widely used in Iran as a liver and digestive tract protective. It has been employed in Iranian folk medicine for several centuries. The roots have recently been recognized as an important source of dietary fibers (inulin and oligosaccharides). It is believed that these fibers possess anticarcinogenic, diuretic and laxative properties. [9] Moreover, the roots are the major source of sesquiterpene lactones (SQLs), with strong anti-inflammatory effects. [10] SQLs have been found to inhibit prostaglandin E2 synthesis, an effect caused by inhibition of cyclooxygenase-2 expression induced by the pro-inflammatory agent TNF-α. Moreover, they have been shown to possess a wide variety of pharmacological properties, such as antimicrobial, anti-tumoral, and anti-inflammatory activities. [11-14] The present study was designed to examine the protective effects of C. intybus root (CRE) and aerial parts hydroalcoholic extracts (CAPE) in a murine model of acute pancreatitis caused by cerulein administration. In order to gain a better insight into the mechanism(s) of action underlying the observed anti-inflammatory effects of chicory extract on pancreatitis, we investigated the effects of chicory on serum amylase and lipase levels, tissue edema, leukocyte infiltration and vacuolization. [15] Plant material and extraction The roots and aerial parts of C. intybus were collected from plants grown wild. For preparation of the hydroalcoholic extract, dried and finely powdered herb (500 g) was soaked in an adequate volume of ethanol:water (70:30) and extraction was carried out for 48 h using a percolator apparatus to obtain the full extract. The product was then shaken, filtered and evaporated in a rotary evaporator under reduced pressure to obtain a semisolid extract. [16] Total phenol assay of the extract The total phenols in the CRE and CAPE were determined by the Folin-Ciocalteu method with some modifications. Results are given as gallic acid equivalents (GAE)/g of extract. [17] Induction of pancreatitis For biological testing, the total hydroalcoholic extracts (CRE and CAPE) were dispersed in normal saline solution as the vehicle. Acute pancreatitis was induced by five intraperitoneal (i.p.) injections of 50 µg/kg body weight of cerulein (Sigma, St. Louis, MO, USA) at 1 h intervals, according to the method previously demonstrated by Mazzon et al., [18] in which edematous pancreatitis with leukocyte infiltration, as well as increased serum levels of amylase and lipase activity, were prominent. Animals Male mice weighing 25-30 g and bred in the animal house of Isfahan School of Pharmacy, Isfahan, Iran were used in this study. Animals were kept in uniform environmental conditions of temperature, humidity and light/dark cycles (12/12 h) and allowed free access to rodent chow and tap water. The study was approved by the local Ethics Committee of Isfahan University of Medical Sciences, Isfahan, Iran. Groupings Animals were randomly assigned to the following 16 groups (n=6). Negative control groups: mice with acute pancreatitis were pretreated with normal saline (5 ml/kg, p.o. and i.p.). CAPE groups: mice with acute pancreatitis were pretreated with CAPE (50, 100, 200 mg/kg) as a single dose (p.o. and i.p.). Test doses of the chicory extracts were chosen because they were suggested as hepatoprotective by Zafar et al. [19] Intraperitoneal (i.p.) and oral (p.o.)
treatments were carried out 0.5 and 1 h before pancreatitis induction, respectively. Mice were sacrificed 4 h after the last injection of cerulein. Blood samples were obtained by direct intracardiac puncture under general anesthesia induced by diethyl ether inhalation and stored at −60 °C for biochemical analysis. [18] The pancreas was removed immediately and fixed in formaldehyde (10%) for histological examination. Biochemical analysis Serum lipase and amylase activity were determined using commercially available lipase and amylase kits (Pars-Azmoon Company, Tehran, Iran). Histological examination Paraffin-embedded pancreas samples were sectioned (5 µm), stained with hematoxylin and eosin (H&E) and examined by an experienced pathologist blinded to the experimental protocol. The histological grading of edema used a scale ranging from 0 to 3 (0=no edema, 1=interlobular edema, 2=interlobular and moderate intralobular edema, and 3=interlobular edema and severe intralobular edema). Leukocyte infiltration was also graded from 0 to 3 (0=absent, 1=scarce perivascular infiltration, 2=moderate perivascular and scarce diffuse infiltration, 3=abundant diffuse infiltration). Grading of vacuolization was based on the percentage of acinar cells involved: 0=absent, 1=less than 25%, 2=25-50% and 3=more than 50% of acinar cells. [15] Statistical analysis Biochemical results are expressed as mean±SEM. Statistical analysis was carried out by one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. Nonparametric data were analyzed by the Mann-Whitney U test. The minimal level of significance was set at P<0.05. Total phenolic content The phenolic contents of CRE and CAPE were 2.5% and 6.5% (as gallic acid equivalents), respectively. Effects of CRE on the serum levels of amylase and lipase Cerulein-induced pancreatitis in vehicle-treated mice was associated with significant rises in the serum levels of amylase and lipase. The increase in amylase and lipase was markedly reduced in cerulein-treated mice that had been pretreated with CRE at doses of 100 and 200 mg/kg by i.p. injection and at a dose of 200 mg/kg by the oral route [Figures 1a and b]. Effects of CAPE on the serum levels of amylase and lipase The effects of CAPE on the serum levels of amylase and lipase were similar to those of the root extract. The groups that received the extract at doses of 100 and 200 mg/kg by i.p. injection showed a significant decrease in the levels of amylase and lipase. By the oral route, the group that received the extract at the dose of 200 mg/kg likewise showed a significant decrease in amylase and lipase activity [Figures 2a and b]. Effects of CRE and CAPE on the histological parameters In normal saline treated mice, the pancreas did not show any tissue injury at the light microscopic level (×10 magnification). Administration of cerulein induced acute edematous pancreatitis with severe leukocytic infiltration in all mice tested. The pancreas was grossly swollen and enlarged, with a visible collection of edematous fluid. Prominent interlobular and severe intralobular edema was accompanied by moderate perivascular and abundant diffuse inflammatory infiltration. Vacuolization was also observed in 25 to more than 50% of acinar cells, but no necrosis or hemorrhage was observed. In the groups that received the extracts at the dose of 200 mg/kg by i.p.
injection, the severity of edema and leukocytic infiltration was significantly reduced compared to the normal saline treated group (interlobular edema, scarce perivascular infiltration). Vacuolization was not significantly reduced in the extract-treated groups. The lower test doses (50 and 100 mg/kg) of both extracts (CRE and CAPE) were not effective in reducing pathological tissue injuries compared to controls [Table 1]. DISCUSSION In the present study, the results showed that CRE and CAPE had good potential to attenuate pancreatitis in mice, as indicated by biochemical and histological evaluations. Biochemical assays confirmed that administration of CRE and CAPE reduced amylase and lipase activity, both of which are markers of pancreatitis. [6] Interestingly, CRE and CAPE, especially at doses of 100 mg/kg and 200 mg/kg i.p. and 200 mg/kg p.o., showed significant protection against pancreatitis compared to the control groups. Regarding the histological results, administration of CRE and CAPE provided effective protection in a manner that was partly dependent on the dose and route of administration. The highest doses of CRE and CAPE (200 mg/kg) administered intraperitoneally had significant effects compared to the respective control groups. In the biochemical examination, the results showed that the lowest oral doses of the extracts (50 and 100 mg/kg) were not effective in suppressing pancreatitis, and neither dose had significant effects on serum amylase and lipase activities. This is in accordance with the results obtained by Zhao et al. [20] The authors demonstrated that the higher dose of rhubarb hydroalcoholic extract (150 mg/kg, twice daily, p.o.) was effective in protecting against cerulein-induced acute pancreatitis, while the lower test dose (75 mg/kg) was not. Examination of the total phenolic content of the extracts showed that the amount of total phenols in the CAPE was about twice that in the CRE, but interestingly the biochemical and histological results showed that CAPE and CRE had similar effects at the same doses and routes of administration. It could be suggested that both extracts exerted their protective effects through mechanisms that are not essentially dependent on their phenolic contents. Moreover, compared to the roots, the aerial parts, which are easily harvested and constitute a renewable source, had the same potential to be considered a useful source of anti-inflammatory and pancreatitis-protective compounds. The hepatoprotective activity of C. intybus L. root and root callus extracts has been investigated by Zafar et al. [19] The results showed that the root callus extract had better activity against carbon tetrachloride hepatotoxicity than the natural root extract. They suggested that the metabolites present in cultured cells were more potent anti-hepatotoxic agents than the constituents present in the natural root extract. The results also indicated that the natural root extract, especially at 150 mg/kg as the highest test dose, was effective in markedly preventing necrosis in liver tissue; however, the lower doses of 50 and 100 mg/kg were only effective in reducing milder forms of hepatic injury such as fatty changes and bilirubin content. Administration of medicinal herbs that possess anti-inflammatory and antioxidant properties is a new approach to attenuating inflammation-related disorders. [19] In this regard, the effect of Ginkgo biloba extract on acute pancreatitis has been studied by Zeybek et al. [21] The results demonstrated that Ginkgo biloba extract at 100 mg/kg administered i.p.
was able to significantly decrease serum amylase and lipase levels as well as histopathologic scores in sodium taurocholate-induced pancreatitis. The beneficial effects were attributed to the oxygen radical scavenging potential of the Ginkgo biloba flavonoid content. Flavonoids with anti-inflammatory, antioxidant and gastroprotective effects are widely distributed in the plant kingdom. Stimulation of prostaglandins, suppression of histamine secretion and inhibition of Helicobacter pylori growth are the main causes of the gastroprotective effects of flavonoids. [22] C. intybus is considered a promising source of flavonoids with various beneficial biological effects. Hepatoprotective, gastroprotective, free radical scavenging and anti-inflammatory actions are the most important properties of C. intybus that are assumed to be related to its flavonoids. [23] Aqueous and alcoholic extracts of C. intybus L. have shown anti-inflammatory activity against formalin-induced paw edema in mice. [24] Moreover, C. intybus has immunomodulatory, apoptotic and osteoporosis-preventive properties, in which fructan derivatives and fermented preparations (butyrate derivatives) have been shown to be involved. [9,25] In addition, the roots are a source of sesquiterpene lactones that have been shown to act as powerful inhibitors of the cyclooxygenase-2 enzyme (COX-2), [26] which can dramatically reduce inflammation. [27] Various mechanisms might be involved in the beneficial protective effects of chicory in this study, and the total extracts contain many different components for which a wide variety of pharmacological effects has been reported. Thus, further experimental studies are necessary to isolate and identify the active principles present in the CRE and CAPE fractions that are responsible for the protective effects on pancreatitis. CONCLUSION We demonstrated that C. intybus hydroalcoholic extracts provide protection in cerulein-induced acute pancreatitis in mice, suggesting a therapeutic potential for pretreatment in this inflammatory disease condition in a clinical setting.
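As a minimal, hedged illustration of the statistical workflow described in the Methods (one-way ANOVA followed by Tukey's multiple comparison test for biochemical data, and the Mann-Whitney U test for histological scores), the Python sketch below runs the same tests on synthetic amylase values and invented histology scores; the group labels, sizes and numbers are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Synthetic serum amylase values (U/L) for three illustrative groups (n=6 each);
# these numbers are placeholders, not the published measurements.
saline = rng.normal(3000, 250, 6)      # cerulein + normal saline
cre_200 = rng.normal(2100, 250, 6)     # cerulein + CRE 200 mg/kg i.p.
cape_200 = rng.normal(2050, 250, 6)    # cerulein + CAPE 200 mg/kg i.p.

# One-way ANOVA across groups, as in the paper's parametric analysis.
f_stat, p_anova = stats.f_oneway(saline, cre_200, cape_200)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Tukey's multiple comparison test (post hoc), pooling all observations.
values = np.concatenate([saline, cre_200, cape_200])
groups = ["saline"] * 6 + ["CRE200"] * 6 + ["CAPE200"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Ordinal histological scores (the 0-3 edema scale) compared nonparametrically.
edema_saline = [3, 3, 2, 3, 3, 2]
edema_cre = [1, 1, 2, 1, 0, 1]
u_stat, p_mw = stats.mannwhitneyu(edema_saline, edema_cre, alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat}, p={p_mw:.4f}")
```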
Safe Ultrasonography-Assisted Knee Posterior Transseptal Portal Creation Technique Knee arthroscopy is a minimally invasive surgical technique that allows surgeons to diagnose and treat various knee conditions using much smaller incisions than open surgery. However, it is difficult to fully visualize the posterior compartment of the knee joint using the usual anterior portal approach because of blind spots. The transseptal portal (TSP) technique enables surgeons to visualize the surgical field and introduce instruments into the posterior compartment of the knee during arthroscopic surgery. However, creation of the posterior transseptal portal increases the risk of neurovascular injury. Particular attention should be paid to avoiding damage to the saphenous nerve, common peroneal nerve, popliteal artery, and tibial nerve. Here, we describe an ultrasonography-guided surgical method for creating the posterior transseptal portal by confirming the surrounding anatomy. Patient Positioning The patient is placed in the supine position with the affected limb in an approximately 90° flexed position using a leg holder. The contralateral leg is positioned wide to prevent interference with the medial operative field and to allow space on the medial side of the affected leg (Fig 1). The surgeon performs the arthroscopy using a 30° arthroscope with standard anterolateral and anteromedial portals. Ultrasonography-Guided Creation of Posteromedial Portal The arthroscope is placed between the anterior cruciate ligament and the medial femoral condyle through the anterolateral portal, while the posteromedial compartment is observed in the trans-notch view. Ultrasonography-Guided Creation of Posterolateral Portal An arthroscope is placed between the posterior cruciate ligament (PCL) and the lateral femoral condyle through the anteromedial portal to observe the posterolateral compartment in the trans-notch view. The posterolateral portal is created between the lateral femoral condyle, lateral gastrocnemius tendon, and biceps femoris. As with the posteromedial portal creation, the posterolateral portal is created under ultrasonography assistance using a parallel technique to ensure that it enters the posterolateral compartment (Fig 5). Care should be taken to protect the peroneal nerve and the posterior neurovascular structures. Ultrasonography-Guided Creation of TSP An arthroscope is inserted into the posteromedial portal to confirm the position of the posterior septum posterior to the PCL. A cannulated switching rod is inserted through the posterolateral portal, and the location of the posterior neurovascular structures is confirmed using ultrasonography (Fig 6). The arthroscope, cannulated switching rod, septum, and posterior neurovascular bundle are identified on ultrasonography, and the posterior septum is perforated with the rod under ultrasonographic guidance. If the septum cannot be fully perforated, a guide pin is placed through the rod to perforate it first, and the rod is then inserted over the pin (Fig 7). Next, a slotted cannula is placed over the rod guide and exchanged for the shaver or radiofrequency device to dissect the septum (Fig 8). This can also be performed with ultrasonographic assistance, which enables safe dissection of the septum with confirmation of the distance to the posterior neurovascular bundle (Fig 9).
Discussion The development of arthroscopic techniques and instruments has enabled the evolution of complex surgical procedures. Although the prevalence of nerve injuries during arthroscopic knee surgery is reportedly low (0.06-0.6%), 3,4 avoiding neurovascular injuries, including popliteal artery injuries, during arthroscopic procedures is vital for knee function. 5,6 Therefore, arthroscopic procedures require better intra-articular visualization to ensure the safety of more complex procedures, and maximum effort should be made to avoid complications. This is particularly true for posterior compartment procedures, which are associated with a high risk of neurovascular complications. To understand the neurovascular structures, Ahn et al. 7 measured the distance from each portal site in a cadaveric study. The authors showed that a 90° knee flexion position during arthroscopic surgery is reasonably safe for posterior portal creation. Makridis et al. 8 examined the relationship between the nerve and the posterior portal at highly flexed knee angles and expanded the measurements using a no. 11 knife and a cannula. They showed that the posteromedial portal can be safely created at knee flexion angles of 90°. However, the potential risk of injury to the common peroneal nerve is higher at high flexion angles and near extension. Moreover, variations in anatomical structures and dynamic arthroscopic surgical techniques require knee repositioning, while joint expansion due to reflux further alters the anatomical relationship among the neurovascular structures. Owing to anatomic variations, changes in intra-articular pressure due to reflux, and possible changes in anatomic position due to limb position, our method of creating the posterior portal and TSP assisted by ultrasonography is safer than conventional surgical methods, because it enables identification of the surrounding anatomical structures during portal creation and posterior compartment procedures. Pearls and pitfalls of the technique are summarized in Table 1. The advantages and disadvantages of our technique are shown in Table 2. The instruments must always be identified by ultrasonography during the procedure. In obese patients, it may be difficult to clearly delineate the posterior knee structures and instruments on the ultrasound images. The surgeon must check the ultrasound images and the arthroscopic images before proceeding.
Fig 1. Left knee. The patient is placed in a supine position with the leg holder, and the affected limb is in an approximately 90° flexed position.
Fig 3. An arthroscopy needle is inserted under ultrasonography guidance of the left knee. (A) The ultrasound transducer is placed posteromedial to the knee. (B) Ultrasound image of the posteromedial knee. (C) Arthroscope image of the posteromedial compartment from the anterolateral portal. MFC, medial femoral condyle; MM, medial meniscus.
Fig 4. Left knee. A cannulated rod and cannula are inserted to perforate the capsule and create a portal. (A) Surgical field. (B) Arthroscope image of the posteromedial compartment from the anterolateral portal.
Fig 5. An arthroscopy needle is inserted under ultrasonography guidance of the left knee. (A) The ultrasound transducer is placed posterolateral to the knee. (B) Ultrasound image of the posterolateral knee. (C) Arthroscope image of the posterolateral compartment from the anteromedial portal. LFC, lateral femoral condyle.
Table 1. Pearls and Pitfalls. Table 2. Advantages and Disadvantages.
Simulating lightning effects on carbon fiber composite shielded with carbon nanotube sheets using numerical methods The demand for lightweight aircraft structures has shifted from traditional metals like aluminum to composite materials such as carbon fiber reinforced polymers (CFRP) to achieve weight reduction. However, this transition has led to decreased lightning strike protection efficiency due to the dielectric nature of CFRP. To address this problem, two CFRP samples, one unprotected and the other shielded with carbon nanotube (CNT) sheets, were subjected to artificial lightning strike testing. The research employed a coupled thermal-electrical finite element analysis method to investigate the lightning strike's impact and damage mechanisms on both samples. The numerical results closely aligned with published experimental data, validating the simulation. The unprotected CFRP sustained damage through the thickness direction up to 8 composite plies and in the in-plane direction over a length of 110 mm. In contrast, the sample protected with CNT sheets exhibited damage limited to the surface of the first 4 plies, with in-plane damage reduced to 24 mm. Notably, the damage area in the CFRP protected with CNT sheets showed a substantial 78.1% reduction compared to the unprotected CFRP sample. This suggests that CNTs can enhance the electrical conductivity of CFRP when incorporated between interlayers, in both the in-plane and thickness directions. The study enhances understanding of CFRP damage behavior and failure modes under lightning strike conditions, emphasizing CNT sheets as an improved and viable lightning strike protection system for aerospace applications, warranting further investigation. Introduction The history of the materials used to make aircraft structures drives the history of the aviation industry in general. Ever since the Wright brothers made their first successful human-crewed flight in 1903 in a wooden aircraft, we have seen an evolution from simple wooden truss structures to metals, leading to the design of the sleek aerodynamic flying machines we see today [1,2]. However, the need for lightweight structures has led to a gradual shift from the use of metals to the adoption of composite materials [3,4]. The design and the art of forming parts, products, and structures as light as possible within certain specifications is known as lightweight design. Extrusion profiles and sections for stiff, lightweight designs are commonly available [5,6]. In recent times, polymer matrix composites (PMC) like carbon fiber-reinforced polymers (CFRP) have been considered for the manufacture of various components and structures in aircraft, from the fuselage to the wings and the engines, mainly because of their lightweight properties [7,8]. An example is the use of CFRP for the manufacture of aircraft skin, engine cowlings, and other components in the front section of the engine [9]. However, from a practical point of view, the use of CFRP in these areas of the aircraft presents new challenges, especially because of the elevated conditions these aircraft and engines operate in
[10]. Also, CFRP has dielectric properties, which make it less electrically conductive. This poorly conductive nature of CFRP becomes a critical disadvantage in the case of lightning strikes [11]. It is also less thermally stable and therefore less resistant to extremely elevated temperatures [12]. The material is nevertheless both exceptionally durable and remarkably light in weight [13]. Carbon fibers exhibit greater adhesion with matrices and possess higher strength compared to other fibers. They are five times stronger than steel, twice as stiff, and approximately two-thirds lighter in weight [14,15]. A thermoset, thermoplastic, or elastomer matrix is combined with carbon fibers as the reinforcing element to create carbon fiber reinforced polymer (CFRP), a composite material [16-19]. Therefore, there is a need to conduct research into the behaviour of CFRP used in these areas of application under extreme conditions like lightning strike threats, and further research into how to obtain lightweight materials with specific desired properties (nano-composites) to help curtail the new challenges that the use of CFRP presents to the ever-growing aviation industry. Although researchers have tried to investigate the effects of lightning strike damage on composites experimentally, such tests often prove to be very expensive and time-consuming, providing limited means of performing repeated iterations [20,21]. Therefore, virtual lightning strike tests have been recommended as an inexpensive yet direct method to study the effects and damage mechanisms of lightning strike-induced damage on composites and to propose effective lightning strike protection systems. Khalil et al. [22] looked at the use of finite element analysis to investigate design variables that control lightning strike damage on graphite/epoxy composites. The primary contribution of that study lies in addressing the development and validation of material properties that vary with temperature. The methodology was implemented on a test sample, and the outcomes were cross-verified with established experimental data, emphasizing the necessity of incorporating temperature-dependent properties in lightning strike simulations. The test was then applied to a lightning strike protection system, and the results of the simulation were used to understand and quantify the behaviour of the strike so as to minimize material damage. Feraboli and Miller [23] conducted an experiment to examine the residual strength and damage modes of CFRP specimens incorporating a stainless-steel fastener. The results of their experiment revealed that the presence of the fastener in the CFRP specimen influenced the lightning strike damage on the material in the thickness/depth direction. The residual strength of the specimen was also affected, as the fastener propagated the current through the thickness direction. Ogasawara et al.
[24] considered the use of a coupled thermal-electric finite element analysis method to study the damage on a composite laminate. An established assumption considered the electrical conductivity in the depth direction of the material to vary linearly with temperature. Their results showed that Joule heating significantly affected the lightning strike damage on the sample used. The study ignored the temperature dependency of the thermal-electrical properties of the composite material and used a method that considered only the effect of the electric arc load, leading to a maximum surface temperature rise to values greater than 100,000 °C, which is an unrealistic result. The work of Zhou et al. [25] employed a new kind of thermal protection system that used a prepared metal-ceramic functionally graded bolted joint (FGBJ), whose material system (porous ZrO2/(ZrO2+Ni) FGMs) was fabricated by cold isostatic pressing and pressureless sintering (CIP-PLS). The authors further studied the shearing properties of the bolted joint; a three-dimensional (3D) finite element model of the double-shear bolted joint connecting plates was constructed based on the ABAQUS code. Peyrou et al. [26] conducted an extensive investigation into the mechanical and thermal impacts of lightning strike damage on the skin panels of aircraft. Their study involved the development of both experimental and numerical models to facilitate the simulation of plasma between the panels and the cathode, capturing conduction and temperature profiles. The simulated lightning strike environment was designed to emulate flight conditions, aiming to define and characterize the swept stroke process and assist in evaluating dwell time, an essential parameter for defining the waveforms applied to the swept zones of an aircraft. Their experimental approach also encompassed the modeling of electric arc physics and electric spark generation in the skin panels. While their methods comprehensively modelled the mathematical expressions governing the thermal and mechanical forces applied to the skin panels, they did not account for the interaction between the applied force and the resulting structural (skin panel) damage. Lightning strike-induced damage is regarded as one of the potential threats to the structural integrity of composite materials used in the manufacture of aircraft structures. Fig. 1 (SAE ARP 5414 commercial aircraft lightning strike zones) shows the potential areas on an aircraft structure that are susceptible to the impact of lightning strikes [23], and Fig. 2 shows the current waveforms that are applicable and certified by international bodies for manufacturers to use to certify their structures against lightning strikes.
In adherence to the Federal Aviation Administration (FAA) Advisory Circular AC 25-21, an aircraft is required to maintain operational and flight capabilities following a lightning strike, preventing any catastrophic failures or damage [27]. The SAE ARP contains tests and guidelines that determine how aircraft can pass these regulations. Even though there is no minimum specified value or rating of electrical conductivity for a structural material or component to pass these regulations, frequent tests have shown that materials and components with high conductivity are less susceptible to extensive damage [28]. According to the standard SAE ARP 5414 and the FAA Advisory Circular AC 20-155A, the outer surface of the aircraft can be divided into three (3) regions called the lightning strike zones. Each region or zone represents a different area on the aircraft with the likelihood of experiencing various types of lightning strike currents [29,30]. Fig. 1 shows a detailed description of these zones. Engineers can effectively identify and safeguard the structure by dividing the aircraft into zones to determine potential hazard areas. Table 1 shows a detailed description of the various zones and the associated colour codes.
a. Zone 1: has a high probability of a direct strike; also known as the first return stroke zone.
b. Zone 2: likely to experience lightning strikes swept back from a point of initial attachment.
c. Zone 3: very low probability of strikes, but may experience conducted currents between two attachment points [31].
SAE ARP 5412 divides the direct-effects lightning strike current waveforms into four components, A-D, depending on the SAE ARP 5414 recommended lightning zoning shown in Fig. 1 [23]. Fig. 2 groups the various current components attached to the aircraft zones of Fig. 1 into a single diagram. As shown in Fig. 1, aircraft structures in Lightning Strike Zones 1 and 2 are expected to be designed to withstand current components B and C. Current component A has a peak amplitude of about 200 kA and is earmarked for Zone 1 regions only, while Zone 2 regions need only withstand currents from component D. Components B and C represent regions where electric currents are conducted through the material. Manufacturers utilize these current waveforms during the testing and material selection process for aircraft structures [32]. Governing equations of coupled thermal-electric analysis The coupled thermal-electrical analysis was selected for the simulation in ABAQUS/CAE. The analysis involved a representation of the chosen material exposed to a concentrated current, simulating an authentic lightning strike, with the material constrained by specific pre-established electrical boundary conditions [33]. The energy from the lightning strike (concentrated electrical current) is simulated to flow through the material under a transient heat transfer analysis and is transformed into thermal energy by Joule heating. This process was also governed by thermal boundary conditions, hence the adoption of the temperature-dependent properties of the material for the simulation, which help describe the physical, visual, and chemical changes within the material under varying temperature conditions.
These governing equations also help to simplify the analysis into a non-linear heat transfer problem in ABAQUS, allowing the software to consider the inner heat source [34]. The inner heat source is attributable to the heat generation (thermal energy) caused by the concentrated current (simulated lightning strike). The electric field generated within the material (if regarded as a conductor) is regulated by Maxwell's equation for the conservation of charge [24,30]:

$$\int_S \mathbf{J}\cdot\mathbf{n}\,dS = \int_V r_c\,dV \qquad (1)$$

where V is any control volume whose surface is given as S, n is defined as the outward normal to S, r_c is defined as the internal volumetric current source per unit volume, and J (A/m^2) is defined as the electric current density (current per unit area). The current density obeys Ohm's law:

$$\mathbf{J} = \sigma^{E}(\theta)\cdot\mathbf{E} = -\sigma^{E}(\theta)\cdot\frac{\partial \varphi}{\partial \mathbf{x}} \qquad (2)$$

Here, θ represents the temperature, σ^E(θ) denotes the electrical conductivity matrix, E stands for the electric field intensity, and φ represents the electrical potential; note that $\mathbf{E} = -\partial\varphi/\partial\mathbf{x} = -\nabla\varphi$. Hence, a fundamental equation is derived by incorporating Ohm's law from equation (2) into equation (1), Maxwell's equation for the conservation of charge, as illustrated in equation (3):

$$\int_V \frac{\partial \delta\varphi}{\partial \mathbf{x}}\cdot\sigma^{E}\cdot\frac{\partial \varphi}{\partial \mathbf{x}}\,dV = \int_S \delta\varphi\,\bar{J}\,dS + \int_V \delta\varphi\,r_c\,dV \qquad (3)$$

Here, $\bar{J} = \mathbf{J}\cdot\mathbf{n}$ is specified as the electrical current density entering the control volume V. Taking the Joule heating effect into account, Joule's law characterizes the rate of electrical energy, P_ec, dissipated by the current flowing through the conductor (material), as follows:

$$P_{ec} = \mathbf{J}\cdot\mathbf{E} \qquad (4)$$

Therefore, combining equations (1) and (4) gives equation (5):

$$P_{ec} = \frac{\partial \varphi}{\partial \mathbf{x}}\cdot\sigma^{E}(\theta)\cdot\frac{\partial \varphi}{\partial \mathbf{x}} \qquad (5)$$

However, in a transient thermal-electric analysis, the average value of the dissipated energy is obtained over a short time increment [30], as given in equation (6):

$$\bar{P}_{ec} \approx \frac{1}{2}\Big(P_{ec}\big|_{t+\Delta t} + P_{ec}\big|_{t}\Big) \qquad (6)$$

where E and σ^E(θ) take their values at time (t + Δt). The amount of this electrical energy released as internal heat is given in equation (7):

$$r = \eta_v\,P_{ec} \qquad (7)$$

where r is the heat generated within the material caused by the concentrated current (lightning strike) and η_v is defined as the energy conversion factor. The transient thermal conduction governing equation, considering the heat generated within the material (inner heat source), is expressed in equation (8) [35]:

$$\rho C_p \frac{\partial \theta}{\partial t} = \nabla\cdot\left(k\,\nabla\theta\right) + q + Q \qquad (8)$$

Equation (8) can also be expressed, in expanded Cartesian form, as equation (9):

$$\rho C_p \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial x}\Big(k\frac{\partial \theta}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(k\frac{\partial \theta}{\partial y}\Big) + \frac{\partial}{\partial z}\Big(k\frac{\partial \theta}{\partial z}\Big) + Q \qquad (9)$$

ρ stands for the material density, q is the heat flux per unit area (heat generation density), C_p is the specific heat of the material, θ is the temperature, Q represents the resistive heat produced by the concentrated electric current (lightning strike), t denotes the time, and k signifies the thermal conductivity [35-37]. During the simulation of the transient coupled thermal-electric analysis, the solution was computed in a stepwise manner for each time increment up to 60 μs. The software updated the temperature-dependent properties of the modelled material after each completed increment until the end of the simulation. After the analysis, the results obtained showed the electric potential, the applied electric current, the total temperature of each layer of the material, the heat flux, and the effect of the Joule heat generated. The parameters for the CNT sheet used in the simulation are shown in Table 2.
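To make the coupling above concrete, the sketch below advances a minimal one-dimensional explicit finite-difference version of equations (2), (4), (7) and (9): a prescribed current density Joule-heats a slab whose temperature field then diffuses. This is a toy reduction, not the ABAQUS solver: all material values and the current density are illustrative placeholders rather than the paper's temperature-dependent properties, the conductivity is held constant, and η_v is taken as 1.

```python
import numpy as np

# Minimal 1D explicit sketch of the coupled thermal-electric update
# (eqs. 2, 4, 7, 9). All values are illustrative placeholders.
n, thickness = 50, 2e-3            # nodes, slab thickness (m)
dx = thickness / (n - 1)
rho, cp, k = 1550.0, 1200.0, 0.8   # kg/m^3, J/(kg K), W/(m K), assumed
sigma = 50.0                       # electrical conductivity (S/m), assumed constant
J = 2e7                            # imposed current density (A/m^2), placeholder
eta_v = 1.0                        # energy conversion factor (eq. 7)

steps, t_total = 2000, 60e-6       # a 60 us event, matching the simulation window
dt = t_total / steps               # far below the explicit diffusion stability limit

theta = np.full(n, 298.0)          # initial temperature field (K)
for _ in range(steps):
    # Joule heating per unit volume: P_ec = J.E = J^2/sigma (eqs. 2 and 4),
    # released as internal heat r = eta_v * P_ec (eq. 7).
    r = eta_v * J**2 / sigma
    # Explicit form of eq. (9): rho*cp*dtheta/dt = k*d2theta/dx2 + r.
    lap = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dx**2
    theta[1:-1] += dt * (k * lap + r) / (rho * cp)
    theta[0] = theta[-1] = 298.0   # fixed-temperature (ambient) faces

print(f"peak temperature after {t_total*1e6:.0f} us: {theta.max():.0f} K")
```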
Current waveform generated in this study The lightning current waveforms and peak amplitudes that affect an aircraft and its components over its operational lifecycle differ in nature. Lightning components A and D represent two significant peak amplitudes of 200 kA and 100 kA, respectively. These figures are too high to apply directly during simulations; however, SAE ARP 5412, the IEC (International Electrotechnical Commission) and MIL-STD 464C of the US Department of Defense describe the standards used by the majority of manufacturers to certify their airframe structures and components [38-40]. The standard current waveforms described follow a double exponential curve equation to produce the component D waveform. The double exponential curve equation [41-43] is shown in equation (10):

$$i(t) = I_0\left(e^{-\alpha t} - e^{-\beta t}\right) \qquad (10)$$

where I_0 scales the peak current, and α and β govern the decay of the wave-tail and the rise of the wave-front, respectively. This double exponential current waveform equation was simplified into an impulse current (Fig. 3). An impulse current can be defined as a unidirectional current which rises rapidly to a maximum value and falls more or less rapidly to zero [44]. It comprises two segments: a wave-front and a wave-tail [45]. This impulse current system was implemented in Excel spreadsheets to generate the various peak current amplitudes that were applied to the 3D (three-dimensional) model in this study during the simulation. The peak amplitudes used were within the range of 20 kA-60 kA. CFRP modelling in ABAQUS The lightning strike analysis was done using a transient thermal-electric coupled analysis in which it was possible to apply the temperature-dependent properties of the material, as shown in Table 3. A 3D solid deformable model of carbon/epoxy IM600/133 was created in Abaqus/CAE 6.14-1 with eight (8) plies of dimensions 300 × 300 mm each. The material orientation was defined with a stacking sequence of [45°/0°/−45°/90°]s. After the material sample was created in the software, the lightning strike was simulated to act at the middle of the model sample as a concentrated electric current defined as a function of time. This made it possible to visualize the heat/temperature changes and radiation through the material as a form of Joule heat dissipation, using the physics governing black-body radiation, Planck's law and the Stefan-Boltzmann constant [46-49]. Fig. 4 shows the 3D model in ABAQUS and the stacking sequence of the CFRP composite. In Table 3, T is the temperature (K), C_p is the specific heat capacity (J/kgK), ρ is the density (kg/m^3), k_11, k_22 and k_33 are the thermal conductivities (W/mK), while σ_11, σ_22 and σ_33 are the electrical conductivities (S/m).
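As a hedged sketch of the waveform-generation step, the snippet below evaluates the double exponential of equation (10) in Python rather than Excel. The constants shown (I_0 ≈ 109,405 A, α ≈ 22,708 s⁻¹, β ≈ 1,294,530 s⁻¹) are values commonly quoted in the literature for the SAE ARP 5412 component D waveform, not parameters taken from this study; the paper's 20 kA-60 kA amplitudes would be obtained by rescaling.

```python
import numpy as np

# Double exponential impulse current, eq. (10): i(t) = I0*(exp(-a t) - exp(-b t)).
# Constants below are commonly quoted for SAE ARP 5412 component D (assumed here).
I0, alpha, beta = 109_405.0, 22_708.0, 1_294_530.0   # A, 1/s, 1/s

def impulse_current(t, i0=I0, a=alpha, b=beta):
    """Unidirectional impulse: fast wave-front (beta), slower wave-tail (alpha)."""
    return i0 * (np.exp(-a * t) - np.exp(-b * t))

t = np.linspace(0.0, 60e-6, 601)   # the 60 us window used in the simulation
i = impulse_current(t)

peak, t_peak = i.max(), t[i.argmax()]
print(f"peak current: {peak/1e3:.1f} kA at {t_peak*1e6:.1f} us")

# Rescale so the peak matches a chosen test amplitude, e.g. 40 kA as in the study.
i_40kA = i * (40e3 / peak)
```

Running this reproduces the expected component D shape, with a peak of roughly 100 kA a few microseconds into the event, which is why a simple peak-rescaling suffices to generate the 20 kA-60 kA family of test waveforms.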
Boundary conditions The simulation was defined as a high-speed process over a time of approximately 60 μs to simulate lightning waveform component D, which depicts the subsequent return stroke, with a peak amplitude in the range of 20 kA-60 kA. The top and side surfaces were defined with thermal radiation in order to see how the current propagates through the fibres of the plies. An absolute zero temperature of 0 K was specified, while the ambient was set to room temperature at 298 K. The sides and bottom surfaces of the carbon/epoxy model were grounded and given an electric potential of zero (0) volts. In order to capture the changes within the material properties when the lightning strike is applied, the temperature-dependent properties of the carbon/epoxy IM600/133 were used for the analysis, with a Stefan-Boltzmann constant of 5.67 × 10^−8 W/m^2K^4. An emissivity of 0.9 was specified for the model. The total time set for the simulation was 60 μs. The initial time increment was designated as 0.01 μs and the maximum was set at 0.02 μs to ensure accuracy and convergence of the finite element analysis model. These boundary conditions (Fig. 5) were well defined in the simulation environment to match as closely as possible the real-life experiment conducted by Ogasawara et al. [24], and this was corroborated by the work of Zhou et al. [53] on a numerical investigation of the stress modal analysis of composite laminated thin plates. Abdel and Murphy [54] used similar boundary conditions in their lightning strike test on graphite/epoxy; however, a Stefan-Boltzmann constant, which describes the relationship between the intensity of thermal radiation of a body and its temperature, was not defined. The total time and stepwise increment defined for the simulation depend on an understanding of how the lightning arc flow, considered as a high-speed process, influences the prediction of the material behaviour at the attachment point [55-57]. The geometrical dimensions of the FE model (Fig. 5) are: length 300 mm, breadth 300 mm, thickness per sheet 0.25 mm, and total thickness/height 2 mm. Modelling of the 3D CFRP sample with carbon nanotube sheets The modelling of the CFRP sample with carbon nanotube sheets was done using the same transient coupled thermal-electrical analysis, so that the radiation, conduction, heat flux, electric potential gradient and temperature profile due to the Joule heating effect on the sample could be observed after the simulation. The element type assigned to the mesh was DC3D8E, an 8-node linear coupled thermal-electric brick. The CFRP laminate was modelled with carbon nanotube sheets inserted between the interlayers of the plies. The microstructure of the CFRP laminate was blown up to show the distribution of the carbon nanotubes between the plies, as shown in Fig. 6. This was the concept adopted to help increase the electrical conductivity of the whole composite laminate in both the thickness and in-plane directions. In order to maintain the weight of the sample and not compromise the protection effect, a total of 8 carbon nanotube sheets were added to the CFRP model: one on the top surface and one between each layer. A more suitable method for the use of carbon nanotubes as protective layers would be their incorporation into the matrix resin or the use of the silver-modified carbon nanotube films created by Xia et al. [58]. However, for ease of modeling, carbon nanotube sheets were adopted [59-62]. Results and discussion
Fig. 7 shows the electric current waveform that was generated and applied in this study using SAE ARP 5412 and the double exponential current waveform. This double exponential current waveform, also known as the impulse current, is defined as a unidirectional current that rises rapidly to a maximum value and falls more or less rapidly to zero. It comprises two segments: a wave-front and a wave-tail.
1. Wave-front: the time taken by the current or wave to reach its maximum value starting from zero. The wave-front time is calculated as 1.5 × (t_2 − t_1), where t_2 is the time taken to reach 90% of the peak value and t_1 is the time taken to reach 10% of the peak value.
2. Wave-tail: the time measured from the nominal starting point (t_0) to the point on the wave-tail where the current is 50% of the peak value. The wave-tail time is calculated as (t_3 − t_0).
This system is standardized for studying the effects of the transient currents generated by lightning. It was adopted to generate the various peak amplitudes in the range of 20 kA-60 kA that were applied in the simulations of this study. The results of the generated current waveform are shown in Fig. 7. Results from lightning strike The outcomes derived from the simulation distinctly illustrate that the predominant damage to the CFRP model resulted from the simulated lightning strike inducing a lightning arc flow, leading to a strong electrical discharge through the material. A parallel finding was observed by Peyrou et al. [26] in their examination of the thermal, electrical, and mechanical constraints. This electrical discharge on the top surface of the sample led to the conduction of a high current through the composite material as a result of Joule heating. Figs. 8 and 9 show the spike in temperature and the contour plot of the electric potential on the top layer of the 3D sample after the impact of the lightning strike. Since current always seeks the path of least resistance, it can be observed that the lightning current was conducted along the fibre direction of the outermost and subsequent layers. The heat flux and the subsequent rise in temperature on the surface of the sample composite material expanded the lightning arc channel, induced damage along the length of the material, and gradually penetrated the model in the thickness direction to the other layers of the material. For the temperature profile illustrated in Fig. 8, a path was picked along the cut section of the 3D model. It revealed that the temperature on the top surface of the material rose from ambient to about 4070 °C between 1.5 μs and 2.5 μs when the lightning strike impacted the surface of the sample material. Comparing the temperature rise obtained in this study with the experimental data of Zhang et al. [34], the simulation results predict sublimation of the CFRP and decomposition of the matrix resin. This is because, according to Wen et al. [63] and Lachaud et al. [64], carbon fibre begins to sublime at 3316 °C and epoxy resin undergoes thermal decomposition and begins to pyrolyse at temperatures starting from about 298 °C. Damage behaviour of CFRP The 3D model and boundary conditions in the ABAQUS software were altered to closely resemble the actual experimental setup carried out by Ogasawara et al.
[24] for the application of the 40 kA lightning strike in the simulation. This allowed a comparison between the damage propagation, damage area, and damage direction obtained from the simulation and the experimental results. Since the damage to the CFRP sample is regarded as initial, it is primarily observable and comparable through visual inspection, as it would be challenging to quantify using numerical values. The visual appearance of the simulation and test specimen is shown in Fig. 10, which depicts the electric potential on the top layer (Fig. 10a) and the temperature profile on the top layer (Fig. 10b) in comparison with damage in a real-life experimental study (Fig. 10c). The evaluation of the damaged areas on both the experimental and simulated samples was done visually, as the damaged areas and regions exhibit distinct appearances and reflections compared with the intact surfaces. From the visual inspection of the samples, it can be observed that the simulation predicted thermal ablation due to Joule heating, fibre dissipation, fibre breakage, delamination, and resin decomposition, which can be observed as a blistered surface on the material model. At the lightning attachment point, fibre breakage can be observed several layers deep from the top surface. The main cause of the fibre damage in the simulated model is the shockwaves produced by the concentrated current (lightning strike) and the rapid expansion of the material. Because carbon fibre is very resistive, it limits the flow of the concentrated current affecting the surface of the material. This resistance to the flow of current releases a significant quantity of energy, which is then transformed into heat through Joule heating. As the temperature of the material rises due to the resistive heating, pyrolysis of the resin matrix occurs. This chemical process causes resin deterioration, leading to the release of gases that are trapped within the interlayers of the material plies, which in turn leads to delamination. Wan et al. [65] and Kuang et al. [66] observed similar damage propagation in the form of delamination in their experiments and analyses, as a result of pyrolysis from Joule heating and resin decomposition. Fibre breakage of the ply layers also occurs due to thermal expansion from Joule heating, causing thermal stress build-up and fracture of the material layers, as shown in Fig. 11a and b. From Fig. 12a and b, it is evident that the lightning-imposed current spread from the top surface layer of the sample to the interior layers due to the layer-by-layer dielectric breakdown and the effect of Joule heating. Apart from the thermal ablation and surface recession on the model shown in Fig. 12a, the extended delamination on the surface can be attributed to the expanding gas pressures due to pyrolysis. However, the delamination and in-plane damage were reduced in each layer of ply due to the reduction in the flow of current from the top surface to the bottom. Carbon nanotubes for lightning strike protection For the unprotected CFRP model, the lightning strike current mainly flows along the fibre direction, since the electrical conductivity in the in-plane direction is much higher than in the thickness direction, as shown in Fig.
13a. The enormous electric potential energy from the impulse current builds up close to the point of lightning attachment, where it is converted into thermal energy through Joule heating [67]. The temperature of the top layer increases rapidly and reaches 4070 °C within 2 μs, as illustrated in Fig. 7. This elevated temperature deteriorates the resin, and sublimation of the carbon fibre begins to occur. Damage in the form of delamination and fibre breakage also sets in. The lightning strike adheres to the subsequent layer while the upper layer disintegrates, absorbing a significant portion of the sample's Joule heat [68]. The lightning strike current reaches the eighth (8th) ply and causes extensive damage to the unprotected CFRP [69]. For the CFRP protected with carbon nanotube sheets, when the simulated lightning strike impacts the surface of the sample model, the temperature of the model increases to 3328 °C within 10 μs, and thermal ablation and carbon fibre breakage begin to occur. However, the electric potential energy from the impulse current is not concentrated at the attachment point as in the case of the unprotected CFRP. Because of the electrical conductivity of the carbon nanotubes, the generated electrical current enters the carbon nanotube sheets, owing to their lower resistivity compared with CFRP, and is conducted through the sample model [67]. This reduced the ablation damage to the material due to the Joule heating effect, as shown in Fig. 13b. However, the residual electric current enters the material through the thickness and heats the next layers of the CFRP plies, since the carbon nanotubes form the electrically conductive path for the propagation of the current [70]. The CFRP plies within the top layers are damaged by the heat they absorb while the carbon nanotubes conduct the current through them [68,69]. The results revealed that, even though three plies and the carbon nanotubes within the top region were damaged, the damaged area and depth decreased significantly, by 78.1% compared to the unprotected CFRP model. This finding corroborates the work of Zhang et al. [33], who conducted a similar study employing CNT films and achieved a reduction in the damaged area of the model of 77.6% in area and 68% in depth.
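The severity of Joule heating in both models scales with the specific energy (action integral) of the impulse, ∫i²dt. As a hedged back-of-envelope check, assuming the double exponential waveform sketched earlier and treating the current path as a constant lumped resistance R (a deliberate simplification of the temperature-dependent, anisotropic conduction actually modelled), the deposited energy can be estimated as E = R∫i²dt; R here is an illustrative placeholder, not a value from the study.

```python
import numpy as np

# Action integral of the impulse current: integral of i(t)^2 dt over the event.
I0, alpha, beta = 109_405.0, 22_708.0, 1_294_530.0   # assumed component D constants

t = np.linspace(0.0, 500e-6, 50_001)                 # long window to capture the tail
i = I0 * (np.exp(-alpha * t) - np.exp(-beta * t))

# Trapezoidal integration of i^2, written out to avoid version-specific helpers.
action = float(np.sum((i[:-1]**2 + i[1:]**2) * 0.5 * np.diff(t)))   # A^2 s
# Closed form for comparison: I0^2 * (1/(2a) + 1/(2b) - 2/(a+b)).
closed = I0**2 * (1/(2*alpha) + 1/(2*beta) - 2/(alpha + beta))
print(f"action integral: {action:.3e} A^2 s (closed form {closed:.3e})")

R = 0.05   # illustrative lumped resistance of the conduction path (ohm), assumed
print(f"deposited energy at R = {R} ohm: {R*action/1e3:.1f} kJ")
```

With the assumed constants this gives an action integral of about 0.25 × 10⁶ A²s, consistent with the commonly quoted component D specification; lowering the path resistance, as the CNT sheets effectively do, proportionally lowers the energy dissipated in the laminate.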
Conclusion The outcomes of the transient coupled thermal-electrical analysis in ABAQUS/CAE for the CFRP sample agreed with the experimental results used for comparison. The simulation effectively predicted and characterized the damage behavior under lightning strike conditions. The impulse current, simulating a real lightning strike, induced noticeable damage in the CFRP, categorized as fiber damage, resin deterioration, thermal ablation, and delamination. Furthermore, the generation of Joule heat, resulting from resistive heating, notably influenced the observed lightning strike damage mode of the material. Delamination occurred due to resin pyrolysis, while fiber damage resulted from carbon fiber sublimation. The propagation of damage within each layer was predominantly influenced by the ply orientation and the orthotropic thermal and electrical properties of the material. This conclusion is drawn from observations of thermal ablation and delamination in the unprotected CFRP model, whereas only thermal ablation was noted in the CFRP model protected with carbon nanotubes. The variations in the waveform and peak amplitudes of the applied impulse current significantly influenced both the size and the modes of damage. Carbon nanotubes incorporated within the CFRP layers proved effective in shielding against lightning strikes, enhancing electrical conductivity and reducing the damaged area by 78.1%. These nanotubes facilitated current conduction in both the in-plane and thickness directions, minimizing resistive heating. The study involved unprotected CFRP and CFRP protected by carbon nanotubes with identical ply orientation and stacking sequence. However, it is crucial to explore how ply orientation and stacking sequence influence material damage during lightning strikes. Therefore, further research should investigate these factors and explore ways to enhance the protective efficacy of carbon nanotube sheets based on their arrangement. Although this study successfully highlighted the detrimental effects on CFRP behavior and explored the potential of carbon nanotubes for lightning strike protection, it acknowledges limitations. Hence, the following suggestions are proposed for future research: the research exclusively examined the electro-thermal impact on CFRP composites, owing to its pronounced damage compared to the mechanical effect; future analysis should encompass the mechanical, impact, and acoustic damage inflicted by lightning strikes on CFRP, together with the exploration of an element deletion method and alternative user subroutines to enhance the examination and understanding of outcomes in coupled thermal-electrical analysis.
Fig. 3. The illustration of the double exponential current waveform.
Fig. 4. The illustration of the number of plies and stacking sequence.
Fig. 5. Illustration of the meshed model and some boundary conditions.
Fig. 6. Blown-up view of carbon nanotube sheets in between the layers of the CFRP composite.
Fig. 7. The illustration of the electric current waveform generated in the study.
Fig. 8. The spike in temperature (°C) in the depth direction and propagation of the lightning arc channel along the length (mm) of the top surface.
Fig. 9. The electrical potential obtained after the numerical simulation, showing the propagation of the lightning along the fibre direction.
Fig. 10. The illustration of the damage prediction on the top layer (45° ply) of the simulation conducted in this study: (a) electric potential on the top layer and (b) temperature profile on the top layer, in comparison with (c) damage in a real-life experimental study [24].
Fig. 11. Illustration of (a) gases trapped within interlayers due to pyrolysis of the resin matrix, leading to the dielectric breakdown of the outer layer, and (b) surface lowering as a result of carbon sublimation and thermal stress fracture.
Fig. 12. Illustration of (a) the damaged area on the CFRP model and (b) the depth of damage (in the thickness direction).
Fig. 13. The illustration of (a) damage depth and width on the unprotected CFRP model and (b) damage depth and width on the CFRP protected with carbon nanotube sheets.
Table 1. Description of SAE ARP 5414 aircraft lightning strike zones.
Table 2. Parameters for the CNT sheet used in the simulation.
Evaluation of Guava Products Quality Fresh ripened guavas were procured from an entrepreneur's field and were weighed, sorted, washed and lye peeled before crushing and sieving to obtain guava pulp for the preparation of different products such as RTS, nectar and guava bar. Physico-chemical properties (total soluble solids (TSS), acidity, ascorbic acid content (AAC), thermal properties, particle size analysis) and microbial properties (bacteria, yeast and mould) were studied for the products prepared. Thermal properties, namely thermal conductivity (W/mK), thermal diffusivity (mm^2/s), volumetric specific heat capacity (MJ/m^3K) and thermal resistivity (mK/W), ranged between 0.319-0.640, 0.076-0.086, 4.170-7.459 and 1.562-3.136, respectively. The particle size of the guava products varied from 301 to 1033 μm. The ascorbic acid content decreased with the decrease in TSS during product preparation. Microbial examination revealed that the products are safe to consume. Introduction Guava (Psidium guajava L.) is a member of the large Myrtaceae or myrtle family, believed to have originated in Central America and the southern part of Mexico (Somogyi et al. 1996). It is claimed to be the fourth most important fruit in terms of area and production after mango, banana and citrus. India is the major world producer of guava (Jagtiani et al. 1998). It has been in cultivation in India since the early 17th century and gradually became a crop of commercial importance. Guava is quite hardy, a prolific bearer and highly remunerative even without much care. It is widely grown all over the tropics and sub-tropics, including the Indian states of Uttar Pradesh, Bihar, Madhya Pradesh, Maharashtra, Andhra Pradesh, Tamil Nadu, West Bengal, Assam, Orissa, Karnataka, Kerala, Rajasthan and many more. The main varieties grown in India are Allahabad Safeda, Lucknow 49, Chittidar, Nagpur Seedless, Bangalore, Dharwar, Arka Mridula, Arka Amulya, Harijha, Hafshi, Allahabad Surkha, CISH G1, CISH G2, CISH G3, etc. (NHB, 2010). Guava is often marketed as a "super-fruit", with considerable nutritional importance in terms of vitamins A and C, seeds that are rich in omega-3 and omega-6 polyunsaturated fatty acids and, especially, dietary fiber, riboflavin, as well as proteins and mineral salts. The high content of vitamin C (ascorbic acid) in guava makes it a powerhouse in combating free radicals and oxidation, key culprits in many degenerative diseases. The antioxidant virtue of guavas is believed to help reduce the risk of cancers of the stomach, esophagus, larynx, oral cavity and pancreas. The vitamin C in guava makes the absorption of vitamin E much more effective in reducing the oxidation of LDL cholesterol and increasing the (good) HDL cholesterol. The fibers in guavas promote digestion and ease bowel movements. The high content of vitamin A in guava plays an important role in maintaining the quality and health of eyesight, skin, teeth, bones and the mucous membranes. With changing consumer attitudes and demands and the emergence of new market products, it has become imperative for producers to develop products which have nutritional as well as health benefits. In this context, guava has excellent digestive and nutritive value, pleasant flavor, high palatability and availability in abundance at a moderate price.
The fresh fruit has a limited shelf life; it is therefore necessary to utilize the fruit for making different products to increase its availability over an extended period and to stabilize the price during the glut season. Guava can be consumed fresh or can be processed into juice, nectar, pulp, jam, jelly, slices in syrup, fruit bar or dehydrated products, as well as being used as an additive to other fruit juices or pulps (Leite et al. 2006). These products have good potential for internal as well as external trade. The utilization of guava for the preparation of beverages and intermediate moisture products has not been explored much; guava pulp can be used as the base for the preparation of these products. In the food industry, knowledge of the physical properties of food is fundamental in analyzing unit operations. These properties influence the treatment the food receives during processing and are good indicators of other properties as well as of food quality, benefiting the producer, industry and consumer (Ramos and Ibarz, 1998).

Establishment of a food processing industry in India is one of the most profitable businesses, but non-availability of proper guidance and of capacity-matching machinery makes this business unattractive to investors. Small-scale entrepreneurs and beginners require hands-on experience before they invest in the procurement of machinery and industry set-up. Practical experience in running food processing plants will help develop confidence in new entrepreneurs and clarify the actual facilities required to establish a plant. Keeping the above points in mind, the existing pilot scale fruit processing facilities at CIPHET, Ludhiana were run for preparing different value-added products from guava.

Materials and Methods

Fresh ripened guavas of similar maturation grade were procured from the entrepreneur's farm located at Ludhiana, Punjab (India). Guava fruits were cleaned in tap water to remove surface dust and leaves before weighing, sorting and lye peeling. The existing pilot scale fruit processing facilities (100 kg/hr) at CIPHET, Ludhiana were used.

Pilot Scale Processing Plant

The existing pilot scale fruit processing facility at CIPHET, Ludhiana, with an average capacity of 100 kg/hr, is equipped with the necessary processing machinery.

Sample Preparation

Good quality, sound guava fruits were lye peeled by dipping in 2% sodium hydroxide solution at 80ºC for about 3 minutes. Lye peeled guava fruits were then neutralized with 1% citric acid solution before washing in tap water. The washed fruits were passed through a crusher/slicer to crush the fruit. The crushed fruit pulp mixture was fed into the coarse pulper (1.14 mm dia.) followed by the fine pulper (0.84 mm dia.) to separate the seeds, fibrous pieces and pulp in a homogenized form through a perforated stainless steel screen. Guava pulp was extracted using the cold extraction method. The guava fruit pulp was blanched in a steam blancher at 100°C for 3 minutes and used for the preparation of products such as guava RTS, nectar and squash.

Guava Product Preparation

The pulp was taken for preparation of guava juice, guava nectar and guava leather. A brief explanation for each product is given below along with the process flowchart (Fig 1).

Guava Juice

RTS was prepared using 12% guava fruit pulp and pasteurized at 85ºC for 3 min with the addition of sugar (12%) and citric acid (2.8 g/l); the remaining volume was adjusted with water.
For preparation of nectar, 20% guava pulp, 15% sugar, 2.5 g/l citric acid and 65% water were used. Glass bottles filled with RTS and nectar were sterilized for about 15 minutes at 121°C (15 psi) to control the microbial load.

Determination of Physico-Chemical Properties

The physico-chemical properties, namely total soluble solids (TSS), acidity, ascorbic acid content (AAC), thermal properties and particle size, along with the microbial count, of guava pulp (fresh and blanched), RTS and nectar were determined as follows.

TSS: The TSS value is defined as the amount of sugar and soluble minerals present in fruits. It was determined with a hand refractometer, which works on the principle of total refraction. A drop of the sample was placed on the plate to read the TSS in °Brix.

Percent Titratable Acidity: Titratable acidity of a product is the acidity in terms of the predominant acid present in the juice, i.e. citric acid. Titratable acidity was measured according to the method described by Ranganna (2001). The % titratable acidity was determined by taking 5 ml of sample, adding 4 to 5 drops of 1% phenolphthalein indicator and titrating with 0.1 N NaOH. The following formula was used to calculate the total acid, % (Ranganna, 2001):

Total acid (%) = (Titre × Equivalent weight of acid × 100) / (Volume of sample taken × 1000)

Ascorbic Acid Content (Vit. C): Ascorbic acid content was estimated by the iodine titration method (Suntornsuk et al. 2002). Ascorbic acid present in fresh and blanched guava pulp, guava juice, guava nectar and guava bar was determined. A 10 ml sample was made up to 100 ml with 3% HPO3 and filtered. Standard ascorbic acid solution was prepared by taking 50 mg ascorbic acid and making up the volume to 50 ml with 3% HPO3 solution; an aliquot of 5 ml from this solution was made up to 50 ml with 3% HPO3 solution. To prepare the dye solution, 42 mg of NaHCO3 was dissolved in 150 ml hot distilled water, 50 mg of the dye 2,6-dichlorophenol indophenol was added, and the volume was made up to 200 ml with distilled water. 5 ml of standard ascorbic acid solution was mixed with 5 ml of 3% HPO3 solution; the dye was filled in a pipette and titration was carried out until a pink color appeared that persisted for at least 15 seconds. 5 ml of sample was blended with 50 ml of 3% HPO3 solution and filtered; 2 ml of this solution was titrated against the dye. The dye factor is 0.5/Titre. The calculations were done with the following formula (Ranganna, 2001):

Ascorbic acid (mg/100 g) = (Titre × Dye factor × Volume made up × 100) / (Aliquot of extract taken × Weight of sample taken)

Thermal Properties

Thermal properties such as thermal conductivity, thermal diffusivity, specific heat capacity and thermal resistivity were determined using a KD2 thermal properties analyzer (Decagon Devices Inc., USA). It operates on the line heat source method, and the values were obtained directly from the digital readout. Thermal conductivity was measured in intact guavas, in pulp (blanched, unblanched and homogenized), as well as in all the products made from it.

Particle Size Analysis

Particle size of guava pulp and the various products prepared from it was analyzed with a LA-950V2 particle size distribution analyzer (Horiba, Japan). It works on the principle of laser scattering through a sample of known refractive index. Samples were dispersed in a solvent and passed through a flow-type cell unit for analysis.
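Both titration formulas above, and the plate-count conversion used in the Microbial Analysis subsection that follows, are simple enough to script. The minimal Python sketch below implements them as written; it is an illustration, not the authors' own code, the example values are hypothetical, and the equivalent weight of citric acid (64 g/eq, a standard value for this method) is not stated in the text.

# Calculations used for guava product quality evaluation (sketch).

def total_acid_percent(titre_ml, equiv_weight=64.0, sample_volume_ml=5.0):
    """Percent titratable acidity (as citric acid) from a 0.1 N NaOH titration.
    equiv_weight: equivalent weight of the predominant acid (64 for citric acid)."""
    return (titre_ml * equiv_weight * 100.0) / (sample_volume_ml * 1000.0)

def ascorbic_acid_mg_per_100g(titre_ml, dye_factor, volume_made_up_ml=100.0,
                              aliquot_ml=2.0, sample_weight_g=10.0):
    """Ascorbic acid (mg/100 g) from the 2,6-dichlorophenol indophenol titration.
    Defaults mirror the volumes quoted in the method (10 ml sample to 100 ml, 2 ml aliquot)."""
    return (titre_ml * dye_factor * volume_made_up_ml * 100.0) / (aliquot_ml * sample_weight_g)

def cfu_per_ml(colonies_counted, dilution):
    """cfu/ml = colonies counted x reciprocal of the dilution factor (e.g. 1e-2)."""
    return colonies_counted / dilution

# Hypothetical example: a 5 ml RTS sample needing 1.4 ml of 0.1 N NaOH
# gives 1.4 * 64 * 100 / (5 * 1000) = 1.79% titratable acidity.
print(round(total_acid_percent(1.4), 2), "% titratable acidity")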
Microbial Analysis

The microbial load of the products was determined by checking fungal and bacterial growth in the developed products for the safety of consumers. For the fungal load, Martin's Rose Bengal agar was used [peptone (5.0 g), glucose (10.0 g), KH2PO4 (1.0 g), MgSO4·7H2O (0.5 g), Rose Bengal (0.035 g) and agar (18.0 g) dissolved in 1000 ml of distilled water], and for the bacterial load, standard plate count agar [peptone/tryptone (5.0 g), yeast extract (2.5 g), beef extract (2.0 g), glucose (10.0 g) and agar (18.0 g) dissolved in 1000 ml of distilled water]. Water blanks were prepared by adding 1 g of sample to 10 ml of autoclaved water. For juice samples, dilutions were made up to 10^-1 and 10^-2 for the enumeration of both fungi and bacteria. From the dilutions made of the different products, 1 ml was poured into each petri dish followed by the addition of 20-25 ml of media, and the petri dish was swirled for proper mixing. The plates were allowed to solidify and then kept in an incubator at 37ºC for bacteria and 30ºC for fungi. Colonies were counted after 72 h for fungi and 24 h for bacteria and calculated as (Ranganna, 2001):

Colony forming units (cfu)/ml = colonies counted × reciprocal of the dilution factor

The microbial analysis thus gave a measure of the viable yeast and mould count and the keeping quality of the guava juice.

Statistical Analysis

Physico-chemical properties and microbial load were evaluated to check the effect of treatments and the safety of food quality. Data were analysed by one-way ANOVA, using the LSD of the AgRes statistical software package.

Results and Discussion

Fresh ripened guavas harvested from the entrepreneur's farm located at Ludhiana were used for the preparation of different products, and their properties were determined as follows.

Thermal Properties

As shown in Fig 2 and Table 1, thermal conductivity (W/m·K) and thermal diffusivity (mm²/s) ranged between 0.319-0.640 and 0.076-0.086 respectively, whereas volumetric specific heat capacity (MJ/m³·K) and thermal resistivity (m·K/W) ranged between 4.170-7.459 and 1.562-3.136 respectively. Fig 3 shows that specific heat and thermal conductivity depended linearly on water content and temperature, i.e. they increased with increasing temperature and decreased with decreasing total soluble solids. Similar results were also reported for tamarind juice concentrates (Manohar et al. 1991) and clarified apple juice (Constenla et al. 1989).

Microbial Load

No fungal or bacterial infestation was detected in any of the processed guava products. Similar results were reported for foam-mat dried mango (Kadam et al. 2010). Therefore, the value-added products prepared from guava in this study may be adjudged safe as far as national and international standards of microbial safety are concerned (Kadam et al. 2010).

Conclusions

Value addition and product diversification are of paramount importance in the present market scenario, and diversified products from guava such as RTS, nectar and guava leather/bar have much importance. The developed products were excellent in taste, rich in nutritional quality, retained the original fruit flavor and were safe for consumption.
Development of such nutritional products using pilot scale facilities will not only reduce postharvest losses but also impart value to less appreciated fruits. Processed guava pulp can be converted into the novel "guava leather/bar" product developed by CIPHET, Ludhiana/Abohar, which adds 3-4 times value to the fruit. Manufacturing of such products will therefore provide ample avenues for employment generation among the rural masses by way of setting up small-scale processing units.
Tracing Bai-Yue Ancestry in Aboriginal Li People on Hainan Island

Abstract

As the most prevalent aboriginal group on Hainan Island, located between South China and the mainland of Southeast Asia, the Li people are believed to preserve some unique genetic information due to their isolated circumstances, although this has been largely uninvestigated. We performed the first whole-genome sequencing of 55 Hainan Li (HNL) individuals at high coverage (∼30-50×) to gain insight into their genetic history and potential adaptations. We identified that the ancestry enriched in HNL (∼85%) is well preserved in present-day Tai-Kadai speakers residing in South China and North Vietnam, that is, Bai-Yue populations. A lack of admixture signature due to geographical restriction exacerbated the bottleneck in the present-day HNL. The genetic divergence among Bai-Yue populations began ∼4,000-3,000 years ago, when the proto-HNL underwent migration and the settling of Hainan Island. Finally, we identified signatures of positive selection in the HNL; outstanding examples include FADS1 and FADS2, related to a diet rich in polyunsaturated fatty acids. In addition, we observed that malaria-driven selection had occurred in the HNL, with population-specific variants of malaria-related genes (e.g., CR1) present. Interestingly, HNL harbor a high prevalence of malaria-leveraged gene variants related to hematopoietic function (e.g., CD3G) that may explain the high incidence of blood disorders such as B-cell lymphomas in the present-day HNL. The results have advanced our understanding of the genetic history of the Bai-Yue populations and provided new insights into the adaptive scenarios of the Li people.

Introduction

Hainan Island is located in southern China and is considered a critical site connecting the human populations of East Asia and Southeast Asia (Li, Li, Ou, et al. 2008; Li et al. 2013). Several archaeological relic sites discovered in Changjiang County of Hainan Province indicate that the earliest modern human settlement on Hainan Island could date back to ∼20,000 years ago (ya), in the Paleolithic Age, and the unearthed stone implements show a high similarity with cultures from mainland South China (Li, Li, Wang, et al. 2008). The frequent movements of East Asian and Southeast Asian populations on the mainland have facilitated their genetic admixture, and further increased the genetic diversity and phenotypic affinities of the populations involved (Lipson et al. 2018; McColl et al. 2018). In turn, human genetic diversity in insular regions is always distinguished from that on the continent due to the effect of geographical isolation (Matsunami et al. 2021), resulting in the unique and uniform genetic backgrounds of island aboriginal people. As a result, Hainan Island may harbor ancient footprints of East Asian and Southeast Asian populations regarding genetic origins and evolution. Overall, these specific circumstances make the present-day aboriginal people of Hainan Island valuable both for the preservation of distinctive genetic patterns and for the study of adaptive evolution, and such information would help in identifying genetic variants with large effects on adaptation-related traits. As the dominant aboriginal people living on Hainan Island, the Li (also known as the Hlai) population in Hainan (HNL) is considered an ethnic group in China whose history is not well known.
The Li nationality is officially recognized as one of the 55 Chinese ethnic minorities, and "Li" in ancient Chinese refers to the ethnic minority living dispersedly in the mountainous areas of Hainan Island. The present-day HNL speak the Hlai language, which belongs to the Tai-Kadai (also known as Kra-Dai) language family, and the group inhabits mountainous areas in Central and South Hainan Island. The earliest historical record of HNL can be traced back to Shangshu ∼2,500 ya (Li 2006), and the records of Shiji in the Han dynasty ∼2,200 ya formally described the HNL as one lineage of the ancient Bai-Yue (Wu 1997; Li 2006). "Bai-Yue" in ancient Chinese refers to the "hundreds of tribes," collectively known as the ancient indigenous Tai-Kadai-speaking populations living from the present-day south of the Yangtze River to North Vietnam (Jin et al. 2001). Under the influences and constraints of surrounding populations throughout history, the populations derived from the ancient Bai-Yue lineage have undergone different migrations, admixtures, and isolations, which shaped the various present-day southern East Asians (EAS.South) and mainland Southeast Asians (MSEA).

Nonetheless, the genetic origin and population history of HNL remain debatable. One hypothesis proposes that the HNL migrated from South China and are descendants of the ancient Bai-Yue lineage. For example, previous studies based on mitochondrial DNA (mtDNA) and SNP-array data showed that HNL present close genetic affinities with mainland Tai-Kadai-speaking populations in South China (He et al. 2020; Mengge et al. 2020). An alternative hypothesis based on Y-chromosomal data proposed that the HNL originated from ancient migrants from Southeast Asia to East Asia ∼20,000 ya (Li, Li, Ou, et al. 2008). Moreover, another study applying Y-chromosomal analysis proposed that the lower genetic diversity of HNL at the paternal level probably resulted from a founder effect (Song et al. 2019). These studies suggest that HNL manifest a close genetic relationship with indigenous populations in South China, where the Bai-Yue ancestors are believed to have been widely distributed, while also retaining a unique genetic background. However, previous studies of the HNL have focused on forensic characteristics or uniparental genetic markers (Li, Li, Ou, et al. 2008; Peng et al. 2011; Li et al. 2013; Fan et al. 2018; Song et al. 2019; Li et al. 2020; Mengge et al. 2020) and have therefore failed to portray the full picture of the genetic history and adaptive evolution of HNL. In addition, due to the limited amount of genetic material, small sample sizes, and analytical approaches, conclusions drawn from previous studies are contradictory and may show bias concerning the fine-scale population history of HNL. Indeed, genetic studies of HNL remain largely unexplored, and fundamental questions remain unsolved, including (1) whether there is a Bai-Yue ancestry enriched in HNL and other indigenous populations living in present-day South China and North Vietnam; (2) when the HNL arrived at Hainan Island; (3) whether there is recent genetic admixture in HNL and when it began; and (4) whether there was adaptive evolution of HNL attributable to the local environment of the isolated island. To obtain explicit information concerning the genetic characterization of HNL, in the present study we sequenced the whole genomes of 55 HNL individuals living in Central and South Hainan Island (supplementary fig. S1A, Supplementary Material online), the main settlement of the Li population.
Analyzing the genetic data together with East Asian and Southeast Asian populations (supplementary table S2, Supplementary Material online), especially the populations of EAS.South and MSEA, we describe the population structure, demographic history, and local adaptations of the HNL. We provide new insights into the genetic history of populations from the Bai-Yue lineage, and the results will advance our understanding of human adaptive evolution in insular circumstances. To dissect the ancestral composition of HNL, we further performed global ancestry inference using ADMIXTURE (Alexander et al. 2009).

Genomic Diversity and Genetic Ancestry of HNL

As revealed by the ADMIXTURE analysis, the Bai-Yue ancestry in HNL.Main is genetically homogeneous with low variation, indicating strong drift due to isolation. To measure population inbreeding in HNL, we calculated runs of homozygosity (ROH) for HNL and compared these with other East Asian populations in the next-generation sequencing (NGS) panel (see Materials and Methods). The HNL showed larger numbers and longer average lengths of medium (0.5-1 Mb) and long ROH (>1 Mb) than other East Asian populations (supplementary fig. S11, Supplementary Material online), supporting the hypothesis that HNL was more isolated from having lived on the island.

We further calculated f3 statistics of the form f3(X, Y; HNL.Main), with X and Y as all possible population combinations of East Asian and Southeast Asian populations, to test for potential admixture in HNL.Main. No evident admixture signal could be detected. These results suggest that admixture evidence was found in HNL.Admixed and another three mainland Bai-Yue populations, but was lacking in HNL.Main. We alternatively employed GLOBETROTTER (Hellenthal et al. 2014) to detect plausible ancestral sources for HNL.Main and mainland Bai-Yue populations from multiple East Asian and Southeast Asian surrogates. The best-guess conclusion for admixture in HNL.Main and Thai_V was "uncertain," whereas potential admixture events were detected in other Bai-Yue populations (supplementary table S7, Supplementary Material online), suggesting that admixture events were less likely to have occurred in HNL.Main.

To further test whether HNL is the best representation of the Bai-Yue ancestry found in present-day Bai-Yue populations, we introduced two ancient individuals, Bianbian, representing an ancient northern East Asian ancestry, and Qihe, representing an ancient southern East Asian ancestry (Yang et al. 2020), and used f4 statistics of the form f4(HNL.Main, mainland Bai-Yue groups; Bianbian/Qihe, Yoruba) to evaluate their genetic connections with ancient ancestries. The f4 values were consistently negative for Bianbian and positive for Qihe, which indicates that HNL.Main show closer genetic connections with ancient southern East Asian ancestry than do mainland Bai-Yue populations (fig. 2A). We further introduced additional ancient ancestries from Guangxi of South China and applied qpAdm-based mixture models (Patterson et al. 2012) to characterize the genetic ancestry components of present-day HNL and other Bai-Yue populations (see Materials and Methods, supplementary table S8, Supplementary Material online). We observed that HNL.Main harbored higher ancient southern ancestry (LadaKH01 + Qihe) but lower ancient northern ancestry (Bianbian) than other Bai-Yue populations.
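The f3 and f4 statistics used here can be illustrated directly from per-SNP allele frequencies. The following minimal Python sketch is not a reimplementation of ADMIXTOOLS (qp3pop and qpDstat additionally apply finite-sample bias corrections and block-jackknife standard errors, omitted here), and the frequency arrays are simulated purely for illustration.

import numpy as np

def f3(c, x, y):
    """f3(X, Y; C): significantly negative values indicate that target C
    is admixed between sources related to X and Y."""
    return float(np.mean((c - x) * (c - y)))

def f4(a, b, c, d):
    """f4(A, B; C, D): the sign shows whether A or B shares more drift
    with D relative to C (sign conventions vary by tool)."""
    return float(np.mean((a - b) * (c - d)))

# Hypothetical allele frequencies at 10,000 SNPs for two source-related
# populations and a 50/50 admixed target.
rng = np.random.default_rng(0)
anc = rng.uniform(0.05, 0.95, 10_000)                 # shared ancestral frequencies
pop_x = np.clip(anc + rng.normal(0, 0.05, anc.size), 0, 1)
pop_y = np.clip(anc + rng.normal(0, 0.05, anc.size), 0, 1)
target = 0.5 * pop_x + 0.5 * pop_y

print(f3(target, pop_x, pop_y))                       # tends negative: admixture signal

With a genuinely admixed target and little target-specific drift, the f3 value tends negative, which is exactly the signal tested for above; strong drift in an isolated target such as HNL.Main can mask it.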
In addition, we also found that HNL.Main showed a higher proportion of Qihe ancestry, an ancestry related to that found in Austronesians (Yang et al. 2020), than other mainland Bai-Yue populations (supplementary fig. S12B, Supplementary Material online). This is consistent with our result for f4(HNL.Main, mainland Bai-Yue groups; Qihe, Yoruba), as well as with a previous study illustrating that the Li population shows the highest ancestry proportion of the Liangdao hunter-gatherer among Tai-Kadai speakers (Wang, Yeh, et al. 2021). We also computed f4 statistics of the form f4(mainland Bai-Yue groups, X; HNL.Main, Yoruba), where X is other present-day East Asians and Southeast Asians, to investigate whether HNL show different affinities with East Asians or Southeast Asians compared with other mainland Bai-Yue populations. We found that HNL show a closer genetic affinity with isolated Austronesian populations harboring more divergent ancestry, such as the Ami, Atayal, and Kankanaey (Morseburg et al. 2016; Skoglund et al. 2016), than do the mainland Bai-Yue populations (supplementary fig. S13, Supplementary Material online), suggesting that HNL could be the present-day Tai-Kadai-speaking population closest to the Austronesian-related ancestry. Overall, these results suggest that lower gene flow occurred in HNL because of its isolated circumstances; this may have helped to retain the genetic characteristics of the HNL genome and its representativeness of Bai-Yue ancestry.

As shown in the ADMIXTURE results, Bai-Yue ancestry was widely distributed in EAS.South and MSEA. We thus compared ancestry sharing between HNL and other EAS.South and MSEA populations based on identity by descent (IBD). We found that HNL.Main, Gelao, and Tay showed elevated levels of within-population IBD sharing compared with other Bai-Yue populations (supplementary fig. S14A, Supplementary Material online), and we also examined between-population IBD sharing. Interestingly, among the archaic introgression signals we detected, the involved gene NPHP3-AS1 and the hypothetical gene BC039487 have both been reported to be associated with age at menarche in previous genome-wide association studies (GWAS; Perry et al. 2009; Pickrell et al. 2016; Tachmazidou et al. 2017). These results suggest that Denisovans had less connection with southern East Asian populations of Bai-Yue ancestry, although relatively unique Denisovan sequences were identified in Bai-Yue populations.

Genetic Origin and Population History

Given the genetically isolated ancestry identified in HNL, we also analyzed ancient DNA (aDNA) data consisting of ancient individuals of EAS.South and MSEA (see Materials and Methods, supplementary table S9, Supplementary Material online) to investigate the homogeneous Bai-Yue ancestry in HNL against ancient individuals spanning a wide time range. We first projected these ancient individuals onto the PCA of present-day East Asian and Southeast Asian populations and found that five ancient individuals from Guangxi in the historical era were placed with Bai-Yue populations (supplementary fig. S19, Supplementary Material online). In particular, three ancient Guangxi individuals ∼1,500 ya were placed with HNL.Main, and the other two Guangxi individuals ∼500 ya were closer to mainland Bai-Yue populations. We then examined the paternal lineages of the Bai-Yue populations and estimated the TMRCA of their characteristic lineage. We found that paternal lineage O1b1a1a (O-M95) was dominated by Bai-Yue populations (55/62), including HNL, CDX (Dai), and KHV (Kinh).
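The coalescent-time estimates that follow come from BEAST (see Materials and Methods), but the back-of-the-envelope logic can be illustrated simply: with a per-site mutation rate of μ per year, a lineage accumulating k derived mutations over L callable sites has an expected age of roughly k/(μL). The Python sketch below applies this with the mutation rate used later in the paper (7.6 × 10^-10 per bp per year); the mutation counts and the callable length are hypothetical, and the study's actual dating uses a calibrated molecular clock rather than this naive estimator.

import statistics

MU = 7.6e-10          # NRY mutation rate per bp per year (as used in the BEAST analysis)

def simple_tmrca(mutation_counts, callable_bp):
    """Rough TMRCA: average derived mutations per sampled lineage divided by mu * L."""
    return statistics.mean(mutation_counts) / (MU * callable_bp)

# Hypothetical lineages carrying ~85-92 derived mutations over ~10.3 Mb of
# callable NRY sequence yield an age on the order of 11,000 years.
print(round(simple_tmrca([85, 88, 92], callable_bp=10_300_000)), "ya")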
We then estimated that this paternal lineage appeared at least 10,998 ya (95% confidence interval [CI]: 10,082-12,651 ya; fig. 3A and supplementary table S10, Supplementary Material online). As for the sublineages of O1b1a1a (O-M95), the HNL individuals under O1b1a1a (O-M95) all belonged to the sublineage O1b1a1a1a1. We also found that individuals belonging to sublineage O1b1a1a1a1a were mainly Dai (12/32), Kinh (11/32), and HNL (6/32), whereas sublineage O1b1a1a1a1b was dominated by HNL (13/23) and Dai (7/23). This may suggest a closer genetic relationship at the paternal level between HNL and Dai, with the divergence of HNL and Dai occurring later than that of HNL and Kinh. We also observed two evident divergences between HNL and Dai under the O1b1a1a1a1b sublineage, occurring 2,700 ya (95% CI: 2,025-3,437 ya) and 2,828 ya (95% CI: 2,151-3,280 ya).

To infer the fine-scale population history of HNL within the Bai-Yue lineage, we applied the multiple sequentially Markovian coalescent (MSMC; Schiffels and Durbin 2014) to estimate the historical effective population size (Ne), using Han and mainland Bai-Yue populations (Dai and Kinh) for comparison (fig. 3B). The results showed that the Ne of HNL.Admixed was consistently higher than that of HNL.Main since ∼20,000 ya (fig. 3B), probably resulting from the higher genetic similarity between HNL.Admixed and Han. We also observed that the Bai-Yue populations, including HNL, Dai, and Kinh, all experienced a bottleneck ∼7,400 ya, when the Han Chinese underwent population expansion in the early Neolithic Age. In addition, HNL.Main continued to experience bottlenecks from ∼4,000 ya, consistent with the timing of the large-scale migration of the Li population to Hainan Island from South China, while all the other mainland populations experienced substantial increases in Ne. We also estimated Ne based on genome-wide genealogies using RELATE (Speidel et al. 2019); although RELATE yielded lower estimates, the overall pattern was consistent with that of MSMC (supplementary fig. S23A, Supplementary Material online). Recent demography inferred from IBD segments using IBDNe (Browning and Browning 2015) also illustrated that HNL.Main showed a sharper decrease in Ne than Han and the mainland Bai-Yue populations (supplementary fig. S24, Supplementary Material online). We then estimated that the divergence of HNL.Main from Han occurred ∼13,000-7,900 ya, much earlier than the divergence between HNL.Admixed and Han ∼3,600 ya (supplementary fig. S23B and C, Supplementary Material online). In addition, we estimated that the divergence between HNL.Main and mainland Bai-Yue populations such as Dai and Kinh began ∼3,600 ya (supplementary fig. S23B, Supplementary Material online). This divergence was followed by the two divergences between HNL and Dai within the O1b1a1a1a1b sublineage ∼2,800-2,700 ya, suggesting the time of population differentiation among the ancient Bai-Yue lineages.

Local Adaptation

To investigate the potential population-specific adaptation of HNL.Main, we applied population branch statistics (PBS; Yi et al. 2010) to perform a genome-wide scan, using the Han and CEU as the ingroup and outgroup reference populations, respectively.
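The PBS for a target branch can be computed from the three pairwise FST values of the trio. The sketch below follows the standard formulation of Yi et al. (2010), transforming each FST into a branch length T = -log(1 - FST) and isolating the target branch; the FST values in the example are hypothetical.

import math

def pbs(fst_target_in, fst_target_out, fst_in_out):
    """Population branch statistic for the target population, given pairwise
    FST between target/ingroup, target/outgroup, and ingroup/outgroup."""
    t = lambda fst: -math.log(1.0 - fst)   # FST -> branch-length scale
    return (t(fst_target_in) + t(fst_target_out) - t(fst_in_out)) / 2.0

# Hypothetical SNP in an HNL-Han-CEU trio: strong differentiation of HNL from
# both references but modest Han-CEU differentiation -> long HNL-specific branch.
print(round(pbs(fst_target_in=0.35, fst_target_out=0.45, fst_in_out=0.10), 3))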
Notably, in the HNL-Han-CEU trio, we pinpointed several evident haplotype blocks with strong PBS signals suggesting recent positive selection. The strongest PBS signal was an ∼110 kb region on chromosome 11 encompassing four genes: FADS1, FADS2, and their upstream genes MYRF and TMEM258 (fig. 4A and supplementary fig. S27A, Supplementary Material online). The genes FADS1 and FADS2 encode the fatty acid desaturase (FADS) enzymes, determinants of long-chain (LC-) polyunsaturated fatty acid (PUFA) levels in lipid metabolism. We found that rs174570 had the top selection signal within the FADS region (PBS = 1.22); this variant has been reported as an evident and potentially functional selection signal identified in Greenlandic Inuit (Fumagalli et al. 2015). In addition, rs174547-C, tagging the ancestral haplotype and reported to be under selection in the Indonesian Flores pygmy population (Tucci et al. 2018), was also fixed in HNL, with 0% derived allele frequency.

Malaria was prevalent in Central and South Hainan, and its range overlapped with the main settlement of HNL (Xiao et al. 2010, 2012). Thus, we also focused on selection signals related to malaria pathogenesis. We identified variants of three genes with PBS values in the top 0.005%: CR1, FREM3, and IL6 (fig. 4A), genes that have been reported to be associated with malarial susceptibility and/or severity. CR1 encodes a membrane glycoprotein found on different types of blood cells and has been reported as a receptor for the invasion of red blood cells by the parasite (Stoute 2011). FREM3 was identified as a selection or GWAS signal for malaria in African populations in previous studies (Malaria Genomic Epidemiology Network 2015; Ndila et al. 2018; Ravenhall et al. 2018; Choudhury et al. 2020), and polymorphism of FREM3 has been reported to be associated with differential susceptibility to severe malaria (Ndila et al. 2018; Choudhury et al. 2020). This association is probably because FREM3 is close to a cluster of glycophorin genes (supplementary fig. S28A, Supplementary Material online; GYPA, GYPB, and GYPE) that encode blood group antigens for malaria resistance (Malaria Genomic Epidemiology Network 2015; Ndila et al. 2018). The last gene with significant PBS signals was IL6, which encodes interleukin-6, one of the indicators of malaria severity (Kern et al. 1989; Mbengue et al. 2016). Overall, these genes with strong selection signals suggest positive selection induced by malaria resistance in the HNL.

Since a high incidence of blood disorders accompanies a high prevalence of malaria infection, we were also concerned with genes involved in hematopoiesis or blood disorders. We first focused on the haplotype block with a strong selection signal located in an ∼610 kb region of chromosome 11, involving 11 genes with high-PBS variants (fig. 4A). This region also showed strong selection signals by the XP-EHH method (fig. 4B). Among these genes, five, ATP5L, BCL9L, CD3G, CXCR5, and DDX6, have been reported to be associated with the occurrence of B-cell lymphomas, a blood cancer caused by the disorder of immune-functional B cells (also known as B lymphocytes) that attack invading pathogens. Notably, we found a missense variant, rs3753058, within CD3G that showed a strong selection signal (PBS = 0.6;
fig. 4C) and putative loss of function, as it was predicted to be damaging by the Sorting Intolerant from Tolerant (SIFT; Kumar et al. 2009), Polymorphism Phenotyping (PolyPhen; Adzhubei et al. 2010), and Combined Annotation Dependent Depletion (CADD; Rentzsch et al. 2019) methods. The protein encoded by CD3G is part of the T-cell receptor (TCR)-CD3 complex, which plays an essential role in the adaptive immune response. The derived allele (T) of CD3G-rs3753058 changes position 131 of the CD3G protein sequence (Ensembl protein ID ENSP00000431445) from valine to leucine (p.Val131Leu; fig. 4D). This derived allele is enriched in East Asians (∼30-50%) and shows a much higher frequency in HNL (82.29%) than in other global populations (fig. 4E). Moreover, we found another gene with strong PBS signals located on chromosome 13, FLT3 (fig. 4A), which is involved in the regulation of hematopoiesis and the development of lymphocytes. In addition, most of these genes with selection signals showed relatively high expression levels in tissues related to B cells, such as spleen and Epstein-Barr virus (EBV)-transformed lymphocytes, in the GTEx data set (supplementary fig. S29, Supplementary Material online). Collectively, we speculate that these genes under selection could be malaria-driven and have become part of the genetic contribution to immune-related blood traits in present-day HNL.

Finally, to investigate the interactions of genes putatively under selection, we performed functional enrichment using genes with variants whose PBS values were in the top 0.005% (supplementary table S11, Supplementary Material online). The hematopoietic cell lineage pathway (KEGG: hsa04640) was identified as having the strongest signal in the enrichment analysis (supplementary fig. S30 and table S12, Supplementary Material online). In addition, we searched for signals of polygenic selection in HNL in the KEGG database (Kanehisa et al. 2017) by determining whether the PBS distribution of variants in a gene set was significantly shifted toward larger values relative to the rest of the genes across the genome. We detected 13 gene sets showing an overall significantly larger distribution of PBS values as candidates for polygenic selection (Bonferroni P-value <0.05; supplementary table S13, Supplementary Material online), and the hematopoietic cell lineage (KEGG: hsa04640) was again identified as a candidate pathway for polygenic selection. These results again confirmed that a local adaptation of hematopoietic function has occurred in HNL.

Evolutionary Scenario of Bai-Yue Lineage

To characterize differentiated adaptation within Bai-Yue populations, we also used HNL-CDX-Han and HNL-KHV-Han trios to search for candidate selection signals within the Bai-Yue lineage (supplementary fig. S31A and B, Supplementary Material online). A certain number of strong PBS signals, including genes in the FADS region and genes related to malaria and B-cell lymphomas, overlapped with those of the HNL-Han-CEU trio (supplementary fig. S31C and D and table S14, Supplementary Material online). We thus hypothesized that the differentiation between the island and mainland Bai-Yue populations could have been driven by admixture between mainland Bai-Yue populations and surrounding mainland populations such as the Han.
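Both the gene-set shift test above and the trio-level comparison that follows reduce to asking whether one PBS distribution is shifted toward larger values than another. The exact test statistic is not specified in the text, so the Python sketch below is only one plausible realization, using a one-sided Mann-Whitney U test with Bonferroni correction; the PBS arrays and the number of gene sets tested are hypothetical.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
background_pbs = rng.exponential(0.02, 200_000)   # genome-wide PBS values
pathway_pbs = rng.exponential(0.05, 300)          # a shifted candidate gene set
n_gene_sets_tested = 186                          # e.g., number of KEGG pathways (hypothetical)

# One-sided test: is the gene set's PBS distribution shifted toward larger values?
stat, p = mannwhitneyu(pathway_pbs, background_pbs, alternative="greater")
p_bonferroni = min(1.0, p * n_gene_sets_tested)
print(f"raw P = {p:.3g}, Bonferroni-corrected P = {p_bonferroni:.3g}")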
We then swapped the target and ingroup reference populations of these two trios, as CDX-HNL-Han and KHV-HNL-Han, and compared the PBS distributions with those of the original HNL-CDX-Han and HNL-KHV-Han trios. The PBS distribution using HNL as the target population was significantly lower than those using the mainland Bai-Yue populations (fig. 5A), indicating that the mainland Bai-Yue populations share more potential adaptations with the Han. Combined with the previously inferred population history, we propose that the founder effect in HNL preserved the high proportion of Bai-Yue ancestry and the local adaptations of the ancient Bai-Yue, which were subsequently diluted in mainland Bai-Yue populations by gene flow (fig. 5B). For example, the DAF of rs174570-T at the FADS2 locus decreased with decreasing Bai-Yue ancestry proportion (fig. 5C). This observation is also consistent with the East Asian haplotype patterns of the FADS region, that is, populations with more Bai-Yue ancestry tend to harbor more haplogroups closer to the ancestral FADS haplotype (supplementary fig. S32, Supplementary Material online). As another example, FREM3-rs186244045, a typical Bai-Yue-specific variant, showed the highest DAF in HNL (42.71%), followed by the mainland Bai-Yue populations CDX (11.82%) and KHV (8.08%), whereas this derived allele is rare (<5%) or absent in other worldwide populations (fig. 5C and supplementary fig. S28, Supplementary Material online). These results suggest that the continental setting intensified genetic affinity among mainland populations, whereas this effect was much weaker on island populations due to their more isolated circumstances.

Discussion

The present-day populations that once spoke or currently speak Tai-Kadai languages are mainly EAS.South and MSEA. As revealed by the ADMIXTURE analysis, Bai-Yue ancestry is widespread in EAS.South and MSEA and is well preserved in the Bai-Yue populations of South China and North Vietnam (fig. 1D and supplementary fig. S8, Supplementary Material online).
As genetic and linguistic classifications can diverge in a population, we proposed that although present-day Bai-Yue populations mainly speak Tai-Kadai languages, Bai-Yue lineages also included populations speaking different language families. A previous study based on sporadic Y-SNP markers estimated that the settling of HNL on Hainan Island occurred ∼44,500-11,300 ya based on the paternal lineage O-M95 (Li, Li, Ou, et al. 2008), whereas another study based on maternal mtDNA lineages proposed that the peopling of Hainan Island occurred ∼27,000-7,000 ya (Peng et al. 2011). However, due to the low resolution of the data and the lack of comparisons, the timing estimated from previous studies may be ambiguous. Moreover, due to strong genetic drift, uniparental markers are inclined to estimate the formation time of specific paternal or maternal lineages of HNL ancestors, rather than the timing of the settlement on Hainan Island. Taking advantage of the high-resolution NGS data, we estimated the formation time of the specific NRY lineage O-M95 of the Bai-Yue population as ∼11,000 ya ( fig. 3A), an estimate that refines the possible origin time of the ancient Bai-Yue lineage. In addition, we observed the Bai-Yue populations experienced a bottleneck from ∼7,400 to ∼4,000 ya based on our MSMC estimation ( fig. 3B), probably induced by the Han Chinese expansion in the Neolithic Age Zhang et al. 2017). This hypothesis is also supported by our observation that multiple EAS.South and MSEA were modeled as an admixture of ancestry sources from HNL.Main and ancient northern East Asian ancestry (Bianbian) in qpAdm analyses ( fig. 2B). Intriguingly, we also found that Han Chinese and Tujia with a strong Han Chinese genetic assimilation showed relatively high f 3 values in outgroup f 3 analyses of HNL (supplementary figs. S2C and S6, Supplementary Material online), which may be induced by the consistently increasing N e and large genetic variation of Han Chinese population. Traditional historical records indicate that the HNL migrated from mainland South China to Hainan Island ∼4,000-3,000 ya (Du et al. 1993;Attané and Gu 2014). Our observations based on aDNA analyses indicated that HNL.Main show closer genetic affinity with ancient individuals from Vietnam and Fujian, China ∼4,000 ya rather than ancient Guangxi individuals ∼1,500 ya compared with other mainland Bai-Yue populations, which suggests the migration of HNL was much earlier than 1,500 ya (supplementary fig. S21, Supplementary Material online). In our MSMC estimates, we found that since ∼4,000 ya, the HNL experienced a continual population bottleneck ( fig. 3B), whereas other mainland Bai-Yue populations (Dai and Kinh) displayed population growth after the previous bottleneck induced by the Han Chinese expansion since ∼7,400 ya ( fig. 3B). Our observation suggests that the further HNL bottleneck was probably caused by the large-scale migration of the ancient proto-HNL from mainland South China to Hainan Island. In addition, previous linguistic research proposed that the Hlai language used by HNL diverged as a separate branch from other languages within the Tai-Kadai language family ∼4,000-3,000 ya (Bauer 2002;Diller et al. 2004;Blench et al. 2005). Our MSMC analyses estimated that the divergence between HNL and mainland Bai-Yue populations started from ∼3,600 ya (supplementary fig. S23, Supplementary Material online), in agreement with the time of linguistic divergence. Moreover, both previous studies (He et al. 2020;Li et al. 
2020) and our observations based on the f3 tests indicated that HNL is an isolated population with low gene flow compared with other mainland Bai-Yue populations. The f4 tests in our study also illustrated that, compared with the mainland Bai-Yue populations, HNL show closer genetic connections with ancient southern East Asian ancestry and Austronesian-related ancestry, which may also have been preserved by the early migration to Hainan Island. Collectively, we propose that the ancient Bai-Yue population lived in mainland South China before ∼4,000 ya, and that a part of the ancient Bai-Yue population, the proto-HNL, started migrating from the mainland to Hainan Island and became the main settlers ∼4,000-3,000 ya. The isolated circumstances of Hainan Island preserved the ancient Bai-Yue ancestry in the HNL and prevented admixture with other populations, thus restricting the growth of Ne in the HNL. In turn, the mainland Bai-Yue populations admixed to various degrees with ancestries from other surrounding groups on the mainland, contributing to their increase in Ne since ∼4,000 ya. This effect further resulted in the differentiation of the gene pools of the island HNL and the mainland Bai-Yue populations. For example, rs174570 within FADS2 and rs186244045 within FREM3 are population-specific adaptive variants in HNL, whereas they have lower DAFs in mainland Bai-Yue populations (fig. 5C). These diluted adaptations could have resulted from admixture between mainland Bai-Yue populations and other surrounding mainland populations with much lower DAFs at these adaptive variants.

The enzymes encoded by the FADS genes are involved in the biosynthesis of omega-3 and omega-6 LC-PUFAs, which are enriched in individuals subsisting on animal-based diets but absent in those subsisting on plant-based diets (Ameur et al. 2012; Ye et al. 2017). Decreased and increased FADS1 expression are likely to represent adaptations to, respectively, low and high conversion efficiency from SC- to LC-PUFA, corresponding to animal- and plant-based diets (Ye et al. 2017; Mathieson and Mathieson 2018). In our study, we identified strong positive selection signals on FADS1 and FADS2 in the Tai-Kadai-speaking HNL. We observed that rs174570 and rs174547 showed inverse patterns in DAF, but the alleles at high frequency in HNL are both associated with down-regulation of FADS1, that is, with reduced efficiency of conversion from SC- to LC-PUFA (supplementary fig. S27, Supplementary Material online). The other mainland Bai-Yue populations, though lower than HNL, also showed relatively high frequencies of these adaptive variants (fig. 5C). Even though Tai-Kadai speakers are regarded as corresponding to the origin of rice farmers from the Yangtze River Basin in ancient South China (Li, Huang, et al. 2007; Molina et al. 2011; Gutaker et al. 2020; Wang, Yeh, et al. 2021), our observations suggest that their adaptation was driven by traditional animal-based rather than plant-based diets. We propose that such adaptation in East Asia could be traced to ancestors in more ancient periods, such as pre-Neolithic hunter-gatherers (Matsumura et al. 2019; Yang et al. 2020), rather than to farmers with a prosperous rice culture. This hypothesis is supported by a previous aDNA study illustrating that present-day Tai-Kadai speakers in South China comprise a higher proportion of ancestry from a Liangdao hunter-gatherer source than other Chinese populations (Wang, Yeh, et al. 2021).
Additionally, we observed that the haplotype frequencies of the FADS region are differentiated between southern and northern East Asians (supplementary fig. S32, Supplementary Material online). The rs174570 variant within FADS2, which carried the highest PBS value, was also identified as a highly differentiated variant between northern and southern Han Chinese in our previous study (Xu et al. 2009). Such differentiation could have resulted from differences in local historical diets between northern and southern populations in East Asia, and also from the more frequent admixture between the Bai-Yue population and southern Han Chinese.

The HNL settlement area was once a region with a high incidence of malaria. In the genome-wide PBS scan, we identified several signals of local adaptation related to malaria infection, including CR1, FREM3, and IL6 (fig. 4A). These genes are highly correlated with hematopoietic functions, implying strong interaction with parasite invasion. For example, CR1 plays a key role in the Knops blood group on erythrocytes; CR1 polymorphisms can result in CR1 deficiency and help confer protection against severe malaria (Cockburn et al. 2004; Kwiatkowski 2005). Such variants of CR1 have been reported to be under selection in populations living in Sardinia (Kosoy et al. 2011) or to be prevalent in other malaria-endemic regions such as Papua New Guinea, India, and Kenya (Cockburn et al. 2004; Thathy et al. 2005; Rout et al. 2011). In addition, both the functional enrichment and the tests of polygenic selection detected the hematopoietic cell lineage pathway (KEGG: hsa04640), evidence that genes related to human hematopoietic function in HNL have been differentiated, probably due to malaria pathogenesis. We then focused on genes under selection in HNL associated with the occurrence of B-cell lymphomas (fig. 4A), a blood disorder that occurs at a higher incidence in equatorial areas endemic for malaria (Molyneux et al. 2012; Robbiani et al. 2015; Nelson 2016). These genes, including ATP5L, BCL9L, CD3G, CXCR5, DDX6, and FLT3, all showed relatively high expression levels in tissues highly correlated with B cells, such as spleen and EBV-transformed lymphocytes (supplementary fig. S29, Supplementary Material online). Moreover, a previous epidemiological study described a higher incidence of B-cell lymphomas on Hainan Island compared with other types of malignant lymphomas (The Nationwide Lymphoma Pathology Cooperative Group 1985). The main functions of B cells are producing antibodies to attack invading pathogens and participating in the immune response against pathogenic infections. Parasites that affect human health, such as malaria pathogens, can interact directly with and manipulate B-cell functions (Nothelfer et al. 2015). Therefore, we propose that malaria-driven selection influenced hematopoietic function and B-cell immunoreaction in the HNL and further increased the incidence of hematological diseases such as B-cell lymphomas.

In this study, our analyses of the genetic structure, population history, and natural selection of HNL improve the understanding of Bai-Yue groups and Tai-Kadai-speaking populations. However, our study also has some limitations. First, the sampling of HNL from different locations on Hainan Island is unbalanced. In addition, the Li population is also distributed outside Hainan Island in small numbers.
These sampling biases for the Li individuals make it difficult to investigate the detailed substructure within HNL.Main in this study. Second, "Bai-Yue" is a historical and ethnological definition rather than a linguistic classification. We caution that our proposed evolutionary model of the Bai-Yue lineage may not be suitable for Tai-Kadai speakers from other mainland Southeast Asian countries, such as Thailand, since these were deemed to be admixed with South Asian populations (Kutanan et al. 2021). Third, even though the mainland Bai-Yue populations share a relatively high proportion of Bai-Yue ancestry and are less isolated than HNL.Main, differences in genomic diversity were also observed among them. For example, mainland Bai-Yue populations show differences in admixture (supplementary fig. S12, Supplementary Material online) and in within-population IBD sharing (supplementary fig. S14, Supplementary Material online). These observations probably reflect the different genetic histories of these populations, and further studies are needed to investigate the genetic connections and differences among mainland Bai-Yue populations. With the extension of genetic studies of populations in South China and Southeast Asia, it is anticipated that the complex history of the Bai-Yue lineage, as well as the divergent evolution within Tai-Kadai speakers, will be further refined.

Ethical Statement

All procedures performed in studies involving human participants were approved by the Ethics Committee of Hainan Medical University (HYLL-2011-001) and were in accordance with the 1964 Helsinki Declaration, its later amendments, or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. The personal identifiers of all samples, if any existed, were stripped off before sequencing and analysis.

Sample Collection, Whole-Genome Sequencing, and Single-Nucleotide Variant Calling

To extend the representativeness of the Li population, we randomly selected Li individuals aged over 40 years in the middle-aged and aged generations, with an average age of 69 years. Based on questionnaires and the statements of participants, each individual was officially recognized as of Li nationality and was the offspring of a nonconsanguineous marriage of members of the same nationality within three generations. The name and language affiliation of the Li population in this study follow the National Ethnic Affairs Commission of the People's Republic of China (https://www.neac.gov.cn). The map of China used in this study was obtained from http://bzdt.ch.mnr.gov.cn under the approval number GS(2020)4618.

Whole-genome sequencing (WGS) with high coverage (30-50×) and 150 bp paired-end reads was carried out on an Illumina HiSeq X10 platform (WuXi NextCODE, Shanghai, China). Reads of each sample were mapped to the human reference genome (GRCh37) using BWA-MEM v0.7.10 (Li and Durbin 2010). We performed duplicate marking and base quality recalibration using GATK v3.8 (McKenna et al. 2010). WGS data of 33 Tibetan (TBN) samples from Lu et al. (2016) and 131 Han Chinese samples from the PGG.Han database (https://www.hanchinesegenomes.org; Gao et al. 2020) were also collected for comparison. We performed joint variant calling of HNL with the Tibetan and Han Chinese samples, as well as samples from the Simons Genome Diversity Project data set (Mallick et al.
2016), through the HaplotypeCaller module of GATK in GVCF mode, and implemented strict quality control through variant quality score recalibration. As a result, 38,605,313 high-quality bi-allelic single-nucleotide variants (SNVs) were retained for downstream analyses. Among these SNVs, we observed 13,605,313 SNVs in the HNL samples, including 362,034 (2.66%) novel variants relative to the dbSNP database v154 (https://www.ncbi.nlm.nih.gov/snp; Sherry et al. 2001). Most of these novel variants were rare: 83.46% were singletons, 12.63% doubletons, 2.61% tripletons, and 0.74% other rare variants, while only 0.55% of the novel variants were common, with MAF ≥ 0.05. We further annotated these novel variants using the Ensembl Variant Effect Predictor v94 (McLaren et al. 2016) and observed that most were intron variants (52.71%) or intergenic variants (35.03%), and 0.2% of the novel variants were annotated as loss-of-function categories (supplementary table S1, Supplementary Material online).

Public Data Collection and Data Compilation

To investigate the population structure of HNL in a broader context, we used the Human Origins (HO) Affymetrix data set (Lazaridis et al. 2014), representing diverse global populations, as a comparison. In addition, five Tai-Kadai-speaking populations (Dong, Gelao, Maonan, Mulam, and Zhuang) living in South China from Wang, Yeh, et al. (2021), and five Tai-Kadai-speaking populations (Colao, Lachi, Nung, Tay, and Thai) and the Kinh living in North Vietnam from Liu et al. (2020), were also collected to extend our analyses. We distinguished the Thai population in Vietnam (from Liu et al.) and in Thailand (from the HO data set) as Thai_V and Thai_T, respectively. Since Southeast Asia is close to Hainan Island and there are fewer Southeast Asian populations in the HO data set, we also collected genotype data for 178 Southeast Asians (Vietnamese individuals were excluded to avoid ambiguity with populations from Vietnam in other data sets) as references (Morseburg et al. 2016). We combined our joint-calling data set and the multiple genotype data sets into a Global Panel data set (supplementary table S2, Supplementary Material online), which comprised 118,942 SNVs. This data set was mainly used for analyses of population structure and genetic affinity.

The Global Panel data set is limited in the accuracy and density of its genome-wide markers. For more comprehensive analytical purposes, we combined our joint-calling data set with the 1000 Genomes Project phase 3 (KGP) data set (1000 Genomes Project Consortium 2015) as the NGS panel (supplementary table S2, Supplementary Material online) for genome-wide analyses, including local ancestry inference, estimation of effective population size, inference of population separation, scans of natural selection, and other analyses where needed.

To investigate the ancestry of HNL on a larger time scale, we collected aDNA samples of EAS.South and MSEA from the Allen Ancient DNA Resource (AADR) v44.3, the curated data set of public aDNA samples (https://reich.hms.harvard.edu/allen-ancient-dna-resource-aadr-downloadable-genotypes-present-day-and-ancient-dna-data). In addition, aDNA samples from Guangxi and Fujian of South China (Yang et al. 2020) were also collected. We selected samples that share more variants with the Global Panel data set and have a lower missing rate, resulting in 21 ancient samples from public research (Lipson et al. 2018; Yang et al. 2020; Wang, Yeh, et al.
2021) being retained in the final ancient data set (supplementary table S9, Supplementary Material online). We then merged the ancient data set with the Global Panel data set and filtered out SNVs with a missing rate >0.05. As a result, a total of 31,654 SNVs were retained as an Ancient Panel for ADMIXTURE analysis.

Population Structure and Genetic Affinity

All of the HNL samples were self-reported to be unrelated, although we identified a total of four individuals (two pairs) within third-degree relatedness using KING v2.1.2 (Manichaikul et al. 2010; supplementary fig. S1B, Supplementary Material online). To avoid bias caused by close genetic relationships, we excluded related samples within third-degree relationships from subsequent population structure analyses. To investigate the population structure of HNL, PLINK v1.9 (Purcell et al. 2007) was used to carry out LD pruning, first filtering out SNVs with a missing rate >0.05 and then selecting SNVs in 200-kb nonoverlapping windows. A series of PCAs at the individual level was performed, further analyzing the populations of concern on the PC plots based on the same data set, using SNPRelate v1.16.0 (Zheng et al. 2012). The unbiased FST estimator (Weir and Cockerham 1984) was used to measure the overall genetic differentiation among populations, also using SNPRelate v1.16.0 (Zheng et al. 2012), which allows correction for the different sample sizes of populations. The matrix of unbiased FST values was used to construct a phylogenetic tree representing the genetic relationships between the HNL and surrounding populations. ADMIXTURE v1.3.0 (Alexander et al. 2009) was applied to perform global ancestry inference, assuming numbers of ancestries (K) from 2 to 12 for the Global Panel and from 2 to 10 for the Ancient Panel. The input data for the ADMIXTURE analysis were prepared using the same process as for the PCA. To lessen bias caused by different sample sizes, we set 40 as the maximum sample size for each population. The admixture proportion of an ancestry in a population is presented as mean ± standard deviation. To examine the relatedness between HNL and populations in East Asia and Southeast Asia, we also computed outgroup f3 statistics (Reich et al. 2009) using the program qp3pop implemented in ADMIXTOOLS v7.0.2 (Patterson et al. 2012). The form f3(HNL, X; Yoruba) was used in the calculation, where X represents different East Asian and Southeast Asian populations, and the output Z score was used to measure the genetic affinity between HNL and the different populations.

Population Admixture Analyses

To detect potential admixture events in HNL and the mainland Bai-Yue populations, we first applied the haplotype-based ChromoPainter v2 (Lawson et al. 2012) to obtain the haplotype painting for all recipients and the copying vectors for all individuals from East Asia and Southeast Asia. We sampled 10 paintings per haplotype for recipients in ChromoPainter. GLOBETROTTER (Hellenthal et al. 2014) was further employed to explore potential population admixture of target populations using other East Asian and Southeast Asian populations as donors. A population with "uncertain" as the best-guess conclusion was deemed difficult to describe in terms of admixture events in the GLOBETROTTER inferences. We applied qpAdm, implemented in ADMIXTOOLS v7.0.2 (Patterson et al. 2012), to perform f4-statistics-based admixture modeling.
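The window-based SNV thinning used above for the PCA/ADMIXTURE input is straightforward to express in code. The Python sketch below keeps the first SNV encountered in each 200-kb nonoverlapping window per chromosome; it is a simplified stand-in for PLINK's pruning options, and the variant list is hypothetical.

def thin_by_window(variants, window_bp=200_000):
    """Keep one SNV per nonoverlapping window.
    variants: iterable of (chromosome, position) tuples, sorted by position."""
    kept, seen_windows = [], set()
    for chrom, pos in variants:
        window = (chrom, pos // window_bp)
        if window not in seen_windows:
            seen_windows.add(window)
            kept.append((chrom, pos))
    return kept

# Hypothetical variants on chromosome 11: only one SNV survives per 200-kb bin.
snvs = [("11", 61_500_000), ("11", 61_560_000), ("11", 61_730_000)]
print(thin_by_window(snvs))   # [('11', 61500000), ('11', 61730000)]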
Population Admixture Analyses
To detect potential admixture events in HNL and mainland Bai-Yue populations, we first applied the haplotype-based ChromoPainter v2 (Lawson et al. 2012) to obtain the haplotype painting for all recipients and the copying vectors for all individuals from East Asia and Southeast Asia. We sampled 10 paintings per haplotype for recipients in ChromoPainter. GLOBETROTTER (Hellenthal et al. 2014) was further employed to explore potential population admixture in target populations, using other East Asian and Southeast Asian populations as donors. Populations with "uncertain" as the best-guess conclusion in the GLOBETROTTER inference were deemed to have admixture events that could not be reliably described. We applied qpAdm implemented in ADMIXTOOLS v7.0.2 (Patterson et al. 2012) to perform f_4 statistics-based admixture modeling. To model the composition of ancient ancestry in present-day HNL.Main, we selected five ancient individuals: (1) Bianbian, representing ancient northern East Asian ancestry; (2) Qihe, representing ancient southern East Asian ancestry (or proto-Austronesian ancestry); (3) Longlin in Guangxi, related to Hòabìnhian ancestry; and (4) LadaKH01 (∼1,500 years ago) and (5) HuatuyanNL21 (∼500 years ago) in Guangxi, who were close to present-day Tai-Kadai speakers. We performed three-, two-, and single-source mixture models using different combinations of these ancient ancestries for HNL.Main and other Bai-Yue populations to estimate the ancestry coefficients of each model, and determined the model with the largest P-value as the best-fitting one for each population (supplementary table S8, Supplementary Material online). After we observed that HNL.Main showed the best representativeness of Bai-Yue ancestry among the Bai-Yue populations of our study, we further used (1) HNL.Main, (2) Bianbian, (3) Qihe, and (4) Longlin as ancestral sources to model the ancestral components of present-day EAS.South and MSEA (supplementary table S8, Supplementary Material online). The best-fitting model for each target population was determined by a process similar to that described above. To identify the ancestral sources of HNL.Admixed individuals, we performed local ancestry inference using RFMix v2.0.3 (Maples et al. 2013) with a 0.5-cM random forest window size and assuming an expected admixture generation of 150. The estimated ancestry proportions of HNL.Admixed are presented as mean ± standard deviation. We phased the data of the NGS panel using Beagle v5.2 (Browning et al. 2018) and used the genetic map from HapMap (International HapMap Consortium 2007). The local ancestry inference was carried out using 48 unrelated HNL.Main individuals and 50 randomly selected Han individuals as ancestral populations, based on the phased VCF. The local ancestry of genomic regions was visualized with karyoploteR v1.16.0 (Gel and Serra 2017). We further estimated the admixture time of HNL.Main and Han using MultiWaver v2.0 (Ni et al. 2019), which supports automatically selecting the best-fitting admixture model based on the distribution of ancestral segments. We carried out the MultiWaver analysis with the default parameters, based on the ancestral segment distributions generated by RFMix, and the hybrid isolation model was determined as the best-fitting model by MultiWaver.

Analyses of Uniparental Genomes
To construct paternal and maternal genealogies of HNL, we classified NRY haplogroups using Y-LineageTracker v1.3.0 based on the ISOGG Y-DNA tree v2019-2020 (https://isogg.org/tree), and mtDNA haplogroups using HaploGrep v2.1.16 (Weissensteiner et al. 2016) based on the PhyloTree mtDNA tree v17 (https://www.phylotree.org/tree; Van Oven 2015). To investigate the population structure at the paternal and maternal levels more comprehensively, we also collected NRY and mtDNA haplogroup data of East Asian and Southeast Asian populations from published research (Kong et al. 2003; Wen et al. 2005; Hammer et al. 2006; Hill et al. 2007; Li, Cai, et al. 2007; Li, Zhong, et al. 2007; Jin et al. 2009; Zhao et al. 2010; Delfin et al. 2012, 2014; Ko et al. 2014; Trejaut et al. 2014; 1000 Genomes Project Consortium 2015; Lu et al. 2016; Poznik et al. 2016; Song et al. 2019; Gao et al. 2020; He et al. 2020; Ma et al. 2021) for comparison (supplementary tables S4 and S5, Supplementary Material online) and performed PCA based on haplogroup frequencies. We also calculated the haplogroup diversity for each population following the formula HD = n(1 − Σx²)/(n − 1), where n is the sample size of each population and x is the frequency of each haplogroup in that population.
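The haplogroup diversity formula above is straightforward to compute from a list of haplogroup calls; a minimal sketch with made-up haplogroup labels (illustration only):

```python
from collections import Counter

def haplogroup_diversity(haplogroups):
    """HD = n(1 - sum(x_i^2)) / (n - 1), with x_i the frequency of
    each haplogroup in a sample of size n (the formula given above)."""
    n = len(haplogroups)
    freqs = [count / n for count in Counter(haplogroups).values()]
    return n * (1 - sum(f * f for f in freqs)) / (n - 1)

# Illustration with hypothetical NRY haplogroup calls
print(round(haplogroup_diversity(["O1a", "O1a", "O1a", "O2", "C2", "D1"]), 3))  # 0.8
```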
To investigate the specific paternal lineages of Bai-Yue populations on a fine scale, we used Y-chromosomal sequencing data of HNL, TBN, and East Asians of the KGP data set in the NGS panel, comprising 290 male samples with sufficient coverage and covering the main NRY haplogroups in East Asia. To construct an NRY phylogeny and estimate the coalescent times of haplogroups, we applied BEAST v2.6.0 (Bouckaert et al. 2014) to perform Bayesian evolutionary analyses using the GTR model under a strict clock and a mutation rate of 7.6 × 10⁻¹⁰. The age of NRY haplogroup CT-M168 (71,760 years, 95% CI = 69,777-73,799) was used for calibration in age estimation (Karmin et al. 2015). The final consensus tree was constructed by the TreeAnnotator module implemented in BEAST and visualized with FigTree v1.4.4 (http://tree.bio.ed.ac.uk/software/figtree).

Runs of Homozygosity, f Statistics, and Identity by Descent
We identified ROH in the HNL and other East Asian populations under the NGS panel using BCFtools v1.6 (Narasimhan et al. 2016), based on its Hidden Markov Model approach. We used the -G option with an argument of 30 to account for genotype errors. We classified ROH with lengths of ≤0.5, 0.5-1, and >1 Mb as short, medium, and long ROH, respectively. We calculated the number of ROH per individual in each ROH category, and calculated the average ROH length as the total length of ROH divided by the number of ROH. To test the potential admixture of HNL, we calculated f_3 statistics in the form f_3(X, Y; HNL) using qp3pop implemented in ADMIXTOOLS v7.0.2 (Patterson et al. 2012), where X and Y represent all possible combinations of East Asian and Southeast Asian populations. We also calculated f_3 statistics in the forms f_3(HNL.Main, Han; mainland Bai-Yue) and f_3(mainland Bai-Yue, Han; HNL.Main) to compare the admixture between HNL.Main and other mainland Bai-Yue populations. We used qpDstat in ADMIXTOOLS to calculate f_4 statistics in the form f_4(HNL.Main, mainland Bai-Yue groups; Bianbian/Qihe, Yoruba) to investigate the genetic relationships with ancient northern and southern ancestries (Yang et al. 2020). We also computed f_4(mainland Bai-Yue groups, X; HNL.Main, Yoruba) to further investigate the genetic characterization of the isolated HNL compared with other Bai-Yue populations. To measure and compare the genetic connections between ancient individuals and present-day populations, we first merged the Global Panel data set of present-day populations with every single ancient individual in the Ancient Panel data set to create multiple specific data sets for the f_3 calculations (supplementary table S9, Supplementary Material online). We then used ancient individuals and present-day populations of EAS.South and MSEA as X and Y to calculate outgroup f_3 statistics in the form f_3(X, Y; Yoruba). We applied hap-IBD (Zhou et al. 2020) to estimate IBD sharing segments within and between populations. The genotype data were phased using Beagle v5.2 (Browning et al. 2018) before estimating the IBD blocks among individuals. Both IBD and HBD blocks identified by hap-IBD were used as IBD sharing segments in our analyses. The total length of IBD sharing segments in each pair of individuals was used to evaluate the IBD shared between the two individuals. We also calculated the average total length and number of IBD segments between HNL.Main and populations of EAS.South and MSEA.
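The per-pair IBD totals used above can be tabulated directly from the hap-IBD segment output; a minimal pandas sketch, assuming hap-IBD's tab-separated output columns (sample and haplotype IDs for both members of the pair, chromosome, segment start and end, and cM length) and a hypothetical file name:

```python
import pandas as pd

cols = ["id1", "hap1", "id2", "hap2", "chrom", "start", "end", "cm"]
# Hypothetical output path; hap-IBD writes gzipped tab-separated segments
seg = pd.read_csv("hnl_chr1.ibd.gz", sep="\t", names=cols)

# Total shared IBD (in cM) and segment count per pair of individuals
pair_totals = (
    seg.groupby(["id1", "id2"])["cm"]
    .agg(total_cm="sum", n_segments="count")
    .reset_index()
)
print(pair_totals.sort_values("total_cm", ascending=False).head())
```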
We further inferred recent changes in effective population size (N_e) within the last 60 generations for Han and Bai-Yue populations using IBDNe v23Apr20 (Browning and Browning 2015). We set the minimum length of IBD segments used in each within-population IBDNe estimation to 2 cM.

Detection of Archaic Introgression
We applied ArchaicSeeker v2.0 (Yuan et al. 2021) to detect archaic introgression in the present-day populations, using the Denisovan (Meyer et al. 2012) and Altai Neanderthal (Prüfer et al. 2014) genomes as the archaic genomes in the analysis. To test the correlation between archaic ancestry and the Bai-Yue ancestry enriched in HNL, we first performed global ancestry inference of HNL and five other mainland East Asian populations (CDX, CHB, CHS, KHV, and TBN). We used the result of K = 2, which had the lowest CV error in ADMIXTURE, to profile Bai-Yue ancestry proportions (supplementary fig. S17, Supplementary Material online). We then calculated the archaic ancestry proportion of these East Asian populations to test the correlation between archaic ancestry proportion and Bai-Yue ancestry proportion. Based on the results of ArchaicSeeker, we also searched for HNL-specific archaic introgression segments that were enriched in HNL.Main but showed relatively lower frequencies in other global populations.

Inference of Population Demography
We applied MSMC v2.1.2 (Schiffels and Durbin 2014) to estimate the long-term effective population sizes of HNL, Dai, Kinh, and Han from high-coverage genomes in the NGS panel. The mask files and single-sample VCF files were generated from BAM files and phased data of the NGS panel, respectively. The estimates of N_e were based on autosomal sequences, analyzing four genomes (eight haplotypes) for each population separately. Population separation between each pair of populations was estimated using four autosomal sequences from two individuals of each population. We assumed the mid-point of 0.5 as the start of separation and the point of 0.2 as the time when the two populations were fully separated. We used 64 segments for each MSMC estimation and scaled the output parameters to real time and population sizes using an autosomal neutral mutation rate of 1.25 × 10⁻⁸ per base pair per generation and 25 years per generation.
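The scaling step just described converts MSMC's coalescent-scaled output into years and diploid N_e. A minimal sketch, assuming the usual final-output layout (time boundaries plus a coalescence-rate column as the last field); the file name and column naming are assumptions, not taken from the study:

```python
import pandas as pd

MU = 1.25e-8   # mutations per bp per generation, as in the text
GEN = 25       # years per generation, as in the text

out = pd.read_csv("hnl.msmc.final.txt", sep="\t")  # hypothetical path
lam = out.iloc[:, -1]                               # scaled coalescence rate
years_ago = out["left_time_boundary"] / MU * GEN    # scaled time -> years
ne = (1.0 / lam) / (2.0 * MU)                       # scaled rate -> diploid Ne
print(pd.DataFrame({"years_ago": years_ago, "Ne": ne}).head())
```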
We also employed RELATE v1.1.7 (Speidel et al. 2019) to estimate historical N_e from the same samples as used for the MSMC. The hap/sample files were converted from the phased VCF and further processed into input files by the PrepareInputFiles module implemented in RELATE. We used the same genomic mask as for MSMC and the human ancestor sequence of GRCh37, downloaded from Ensembl Release 71 (http://ftp.ensembl.org/pub/release-71/fasta/ancestral_alleles), when preparing the input files. The anc/mut files used for N_e estimation were generated from the hap/sample files by the RelateParallel module using a mutation rate of 1.25 × 10⁻⁸ per base pair per generation, the default N_e of haplotypes, and the genetic map from HapMap (International HapMap Consortium 2007). Finally, N_e estimation was performed by the EstimatePopulationSize module with a mutation rate of 1.25 × 10⁻⁸ per base pair per generation and 25 years per generation.

Scanning for Natural Selection
We applied the PBS (Yi et al. 2010) to detect signals of recent positive selection at the genome-wide level. We used variant sites with a depth above 10× and a missing rate of <5% for the PBS calculation. The PBS is defined as PBS_A = (T_AB + T_AC − T_BC)/2, where T = −log(1 − F_ST); A is the target population for the selection scan, and B and C are the ingroup and outgroup populations used as references, respectively. We only considered variant sites that were polymorphic in at least one of the three populations in the PBS calculation. To detect specific selection signals in the 48 unrelated HNL.Main individuals, we used Han and CEU as the ingroup and outgroup populations, respectively. We focused on signals for which the PBS values were above the 99.995th percentile. We also zoomed in on the local PBS distribution of selection signals for genes of interest within 20 kb upstream and downstream and analyzed the LD pattern of the variant with the highest PBS value within each gene. To validate identified variants or genomic regions within the genes of interest, we also estimated integrated haplotype scores (iHS) and cross-population extended haplotype homozygosity (XP-EHH) for these genes within 20 kb upstream and downstream. The iHS and XP-EHH were both estimated using selscan v1.2.0 (Szpiech and Hernandez 2014), with Han used as the reference population in the XP-EHH estimation. We also explored differential selection within the Bai-Yue lineage by comparing island (HNL) and mainland (CDX and KHV) Bai-Yue populations. We used HNL as the target population, assuming each of the two mainland Bai-Yue populations in turn as the ingroup population and Han as the outgroup population.
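The PBS definition above translates directly into code; a minimal sketch with made-up per-site F_ST values (illustration only; the actual scan computes F_ST per variant from genotype counts):

```python
import math

def pbs(fst_ab, fst_ac, fst_bc):
    """Population branch statistic for target population A (Yi et al. 2010):
    T = -log(1 - F_ST) for each pair, and PBS_A = (T_AB + T_AC - T_BC) / 2,
    the length of A's branch since its divergence from B and C."""
    t = lambda fst: -math.log(1.0 - fst)
    return (t(fst_ab) + t(fst_ac) - t(fst_bc)) / 2.0

# A = HNL.Main (target), B = Han (ingroup), C = CEU (outgroup); made-up values
print(round(pbs(0.15, 0.30, 0.12), 4))  # ~0.1957
```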
Functional Annotation of Natural Selection Signatures
To explore detailed information on the variants of interest with strong selection signals, we obtained the functional annotation and global population prevalence from the PGG.SNV database (https://www.pggsnv.org; Zhang et al. 2019) and the association with gene expression from the GTEx Portal (https://gtexportal.org; GTEx Consortium 2013). The protein tertiary structure showing the functional consequence of the variant of interest was obtained from the AlphaFold Protein Structure database (https://alphafold.ebi.ac.uk; Jumper et al. 2021). We also referred to reported cases from the GWAS Catalog database (https://www.ebi.ac.uk/gwas; MacArthur et al. 2017) to search for previously published genome-wide associations of the genes and variants of interest. To investigate the interactions of genes with strong PBS signals, we performed functional enrichment with Metascape (https://metascape.org; Zhou et al. 2019), an online program that incorporates popular ontologies of functional categories. We used genes with variants whose PBS values were in the top 0.005% as the input gene set. The top 20 functional categories with −log10(P-value) ≥ 2 were displayed as enriched terms. Similar functional categories were classified into one group, and the category with the summarized −log10(P-value) is shown in the enrichment figure. To detect enrichment of PBS values in gene sets corresponding to given biological pathways, we downloaded KEGG gene sets (Kanehisa et al. 2017) of Homo sapiens from the NCBI BioSystems database (http://www.ncbi.nlm.nih.gov/biosystems). We excluded nonautosomal genes and genes unmapped to the human reference genome GRCh37 from each gene set and further excluded gene sets with fewer than ten genes. As a result, a total of 365 gene sets remained for the detection of polygenic selection. We compared the distribution of PBS values in each gene set relative to the rest of the genes across the genome using one-sided Mann-Whitney U tests. Each gene set was tested independently, and we accounted for multiple testing using the Bonferroni correction.

Supplementary Material
Supplementary data are available at Molecular Biology and Evolution online.

Author Contributions
S.X. and Y.H. conceived the study. S.X. designed and supervised the project. R.L. contributed to sample collection. Y.G. developed a pipeline for processing NGS data and performed the variant calling analysis. H.C., Y.L., and R.Z. performed the population genetic analyses presented in the manuscript. H.C. drafted the manuscript and prepared additional materials. S.X. revised the manuscript. All authors discussed the results and implications and commented on the manuscript.

Data Availability
The genome data of 55 Hainan Li samples generated during this study are available in the National Omics Data Encyclopedia (NODE) at https://www.biosino.org/node and can be accessed with accession number OEP003168. Data application is conditioned on the following commitments: (1) the data will not be used for commercial purposes; (2) the data will not be shared with anyone else; and (3) no attempt will be made to identify any of the sample donors. Requests for access to data may be directed to xushua@fudan.edu.cn or heyungang@fudan.edu.cn.
Ecdyonurus aurasius sp. nov. (Insecta, Ephemeroptera, Heptageniidae, Ecdyonurinae), a new micro-endemic mayfly species from Aurès Mountains (north-eastern Algeria)

Abstract
Ecdyonurus aurasius sp. nov., a micro-endemic species reported from several streams within the Aurès Mountains (north-eastern Algeria), is described and illustrated at the nymphal, subimaginal and imaginal stages of both sexes. Critical morphological diagnostic characters distinguishing the new species are presented, together with molecular affinities as well as notes on the biology and distribution of the species.

In Africa, only three Ecdyonurinae genera are present: Ecdyonurus is restricted to North Africa, whereas Afronurus and Notonurus Crass, 1947 are found in the Afrotropical region (Webb and McCafferty 2008; Vuataz et al. 2013). Bauernfeind and Soldán (2012) proposed to split the West Palearctic species of the genus Ecdyonurus into two subgenera, Ecdyonurus (25 species) and Helvetoraeticus Bauernfeind & Soldán, 2012 (15 species), according to the arrangement of setae on the superlingua, the number of bristles on the ventral side of the labrum and the number of comb-shaped bristles on the maxilla in nymphs, as well as the shape of the apical sclerite of the male genitalia. Currently, four taxa of this genus are reported from North Africa (Thomas 1998). Two of them are well-known species with a clear status, Ecdyonurus rothschildi Navás, 1929 and Ecdyonurus ifranensis Vitte & Thomas, 1988, whereas one remains doubtful, Ecdyonurus venosus var. constantinicus Lestage, 1925, and the presence of Ecdyonurus venosus (Fabricius, 1793) mentioned by Gauthier (1928) is still unconfirmed. All of them belong to the subgenus Ecdyonurus. Navás (1929) described Ecdyonurus rothschildi from an oasis in Biskra Province, north-eastern Algeria, based on a male imago. The species was redescribed by Thomas and Dakki (1979), who gave a detailed account of the adult morphology and related it to the E. aurantiacus (Burmeister, 1839) species group. Later, Soldán and Gagneur (1985) proposed the first description of the nymph and an identification key to separate E. rothschildi, E. dispar (Curtis, 1834) and E. aurantiacus nymphs. The species is now known from all Maghreb countries and is one of the most widespread species (Boumaiza and Thomas 1995; Zrelli et al. 2016; Bouhala et al. 2020). Vitte and Thomas (1988) described Ecdyonurus ifranensis at the nymphal and adult stages from the Middle Atlas; the species has later been found in other areas of Morocco (El Alami et al. 2022). The present study aims to examine Ecdyonurus populations from the Aurès region (Algeria). We collected and reared fresh material at all stages. After critical observations and comparison with other Ecdyonurus species, we have clearly distinguished a new Algerian endemic species.

Materials and methods
The material was collected by the first author between February 2020 and November 2021 from six localities in the Aurès region; the sampling sites are located in the Belezma National Park (BNP) and the Western Aurès Massif (Fig. 1). The region is characterized by a semi-arid climate with cold winters and very hot and dry summers. Sampling was performed with a standard benthic net using the kick-sampling method. Imagos and subimagos were obtained by rearing mature nymphs from the Charchar, Yabous and Berbaga sites. All specimens were preserved in 96% ethanol in the field and stored in the laboratory at 4 °C.
The physical and chemical parameters of the water were measured in situ at each sampling site using a multi-probe. The following variables were measured: average water depth, bed width, and current velocity with a FLOWATCH flowmeter; conductivity, water temperature and pH using an Adwa AD32 tester and a HANNA HI1271 pH electrode; while dissolved oxygen was recorded using a Lutron PDO-519 Dissolved Oxygen Meter.

Morphological analysis
Morphological characteristics for the description of the new species were used according to Hrivniak et al. (2018). Pictures of the habitus were made using a Canon EOS 6D camera and the Visionary Digital Passport imaging system (formerly available and distributed by Dun Inc., Virginia), and processed with Adobe Photoshop Lightroom ver. 4.4 and Helicon Focus ver. 5.3. Four nymphs were dissected in Cellosolve (2-Ethoxyethanol) with subsequent embedding in Euparal medium and mounting on slides. Microscopic pictures were taken using an Olympus BX51 microscope coupled with an Olympus SC50 camera; pictures were enhanced with the stacking software Olympus Stream Basic ver. 2.3.2 and Adobe Photoshop ver. 21.2.2.

Molecular analysis
Five specimens belonging to the new species as well as five specimens of Ecdyonurus rothschildi were used for DNA extraction to obtain a 658-bp fragment of the mitochondrial cytochrome oxidase I gene (COI) (see Table 1). DNA extraction, PCR amplification, sequencing and alignment construction were performed according to Benhadji et al. (2020) or Martynov et al. (2022). One sequence of E. rothschildi was retrieved from GenBank, as well as two sequences of E. aurantiacus and two of E. dispar. Three Electrogena sequences were chosen as the outgroup. We estimated the evolutionary divergence within and between our new species and the other Ecdyonurus species using COI genetic distances. Both the pairwise distances between all sequences and the mean distances between and within species were calculated in MegaX (Kumar et al. 2018; Stecher et al. 2020) under the Kimura 2-parameter (K80) substitution model (Kimura 1980). We then applied the recently developed species delimitation method ASAP (Assemble Species by Automatic Partitioning; Puillandre et al. 2021) to our COI data set using the graphical web interface available at https://bioinfo.mnhn.fr/abi/public/asap/asapweb.html. This distance-based method is similar to the popular ABGD (Automatic Barcode Gap Discovery; Puillandre et al. 2012) approach but has the advantage of providing a score (the asap-score) that indicates the most likely species delimitation. Pairwise genetic distances were computed under the K80 model, and all other settings were left at default. Because the ASAP output produced two partitions with equal asap-scores, we favored the partition with the smallest p-value. Finally, we conducted a Bayesian inference gene tree reconstruction in MrBayes ver. 3.2.7a (Ronquist et al. 2012), using the best evolutionary model (GTR + Γ + I) selected in JModelTest ver. 2.1.10 (Darriba et al. 2012) following the second-order Akaike information criterion (AICc). We used five substitution schemes and six gamma categories, with all other parameters set to default. To accommodate different substitution rates among COI codon positions, we analyzed our data set in two partitions, one with the first and second codon positions and one with the third positions (1 + 2, 3).
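For reference, the K80 distance used above has a closed form in the observed transition (P) and transversion (Q) proportions; a minimal sketch (MegaX additionally handles missing data and variance estimation):

```python
import math

def k80_distance(p_transitions, q_transversions):
    """Kimura (1980) two-parameter distance:
    d = -(1/2) ln(1 - 2P - Q) - (1/4) ln(1 - 2Q),
    where P and Q are the proportions of compared sites differing
    by a transition and a transversion, respectively."""
    p, q = p_transitions, q_transversions
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Illustration: two COI sequences differing by 6% transitions and
# 1.5% transversions over the compared sites (made-up values)
print(round(k80_distance(0.060, 0.015), 4))  # ~0.0801
```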
Two independent analyses of four MCMC chains were run for one million generations with trees sampled every 1,000 generations, and 100,000 generations were discarded as burn-in after visually verifying run stationarity and convergence in Tracer ver. 1.7.2 (Rambaut et al. 2018). The consensus tree was visualized and edited in iTOL 6 (Letunic and Bork 2021). Material is deposited in the following institution: IB-US Institute of Biology, University of Szczecin, Poland.

Molecular analysis
The COI ingroup data set was 100% complete (no missing data) and included 25% parsimony-informative sites. The COI gene tree grouped the five sequences of Ecdyonurus aurasius sp. nov. into a well-supported monophyletic clade, and the species was supported as distinct in the ASAP analysis (Fig. 2). The K80 mean genetic distance within the five Ecdyonurus aurasius sp. nov. COI sequences was 0.14%. As expected, all other included species were also recovered as distinct species with high node supports. The K80 mean genetic distance between Ecdyonurus aurasius sp. nov. and the other three species of Ecdyonurus ranged from 7.6% (mean distance to E. rothschildi) to 20.1% (mean distance to E. aurantiacus), with a minimum distance of 7.1% between the GBIFCH01119302 / GBIFCH00673192 and EC-CH0 sequences.

Description. Male imago
Size: body length 9.0-9.8 mm; forewing length 9.1-10.9 mm; cerci broken. General body color distinctly brown to reddish-brown (Fig. 3A). Head. Light brown, clypeal plate with blackish maculations; eyes grayish blue, separated by a distance equal to the diameter of the frontal ocellus; a brownish lateral stripe present at one third of the ventral side; ocelli apically whitish-yellow, dark brown basally; antennae with scapus medium brown, flagellum grayish brown. Abdomen. General color brown to rusty tawny. Terga light tawny to rusty tawny. Tergum I dark brown, terga II-VII reddish-brown with two median pairs of light markings, the proximal pair elongated and slightly divergent, the distal pair subparallel to the body axis (Fig. 4A). Segments II-VIII with rusty-brown lateral stripes stretching from the anterior to the posterior margin of the segment (Fig. 3A) and connected dorso-posteriorly (Fig. 4A); terga VII-X slightly darker than the other ones; tergum X reddish-brown, yellowish-brown posteriorly. Abdominal sterna yellowish to light brown, with two pairs of light markings, the proximal pair elongated and divergent, the distal pair rounded (Fig. 4B). Sterna VIII-IX darker. Nervous ganglia well visible and tinted with purple on sterna II-VII. Cerci brown, with joints of segments blackish. Genitalia. Styliger plate medium brown, lighter in the middle, strongly convex, with two small bumps near the gonostyli base; first segment of gonostyli dark brown, second and third lighter (Fig. 4D). Penis lobes yellowish-brown to brown, moderately expanded laterally, outer margin rather quadratic (Fig. 4D, E). Basal and lateral sclerites brown, darker than the apical sclerite (Fig. 4E). Lateral sclerite rather quadratic, slightly larger on the inner side; apical sclerite with a few medium-sized teeth on the inner margin (Fig. 5A); basal sclerite outer margin smooth, without teeth. Titillators straight, yellowish-brown, darker on the outer margin, with two spines on the dorsal face.

Female imago
Size: body length 9.9-13.3 mm; forewing length 10.5-12.9 mm; cerci length 17.9-21.3 mm. General color of body similar to that of the male imago, markedly paler. Head yellowish-brown; eyes grayish. Thorax. Prothorax yellowish gray to brown.
Mesothorax dorsally pale, yellow to yellowish-brown, basisternum and furcasternum medium brown. Abdomen. Terga yellowish laterally and tawny to rusty tawny dorsally. Terga I-VIII with central longitudinal rusty tawny parallel bands and lateral stripes (Fig. 3B). Abdominal sterna yellowish to light brown, especially VIII-IX; segments I-VII generally with two central light short strokes; nervous ganglia strongly tinted with purple on sterna II-VII. Subgenital plate large, whitish and angular, reaching two thirds of the length of sternum VIII; subanal plate acutely rounded (Fig. 4C). Cerci brown, with joints blackish.

Male subimago
Size: body length 9.8-10.5 mm; forewing length 10.5-11.4 mm; cerci length 13.3-26.9 mm. Head brown to reddish-brown. Eyes grayish blue. Ocelli as in the male imago. Antennae yellowish, brown basally, the same as in the male imago. Fore legs darker than the middle and hind ones. Fore femora intensively brown distally. Middle and hind legs uniformly yellowish gray to yellow. Wings dark gray. Abdominal terga similar to the male imago. Sterna slightly lighter than terga. Protuberances of the styliger plate well marked, slightly yellowish; gonostyli intensively brown, yellow to whitish-yellow apically. Typical shape of the penis already well apparent. Cerci brown.

Mature nymph
Size: body length up to 7.12 mm for males and 9.6 mm for females; cerci slightly longer than body length. General body color yellowish-brown with pale yellowish markings. Head. Mean width/length ratio 1.4-1.6, yellowish-brown to brown, with two central light spots near the fore margin and two whitish stripes along the dehiscence line (Fig. 6A). Eyes blackish grey; ocelli whitish grey; antennae with scape and pedicel medium brown, flagellum yellowish-brown. Thorax. Pronotum. Mean width/length ratio 4.2-5.0, yellowish-brown to brown; lateral projections ca. as long as the pronotum, with the lateral margin regularly convex and the tip slightly pointed (Fig. 6A). Mesonotum medium brown with yellowish markings. Abdomen. Terga brownish gray; terga II-VIII with two centrally elongated yellowish spots increasing in size posteriorly and fused on tergum IX; tergum X uniformly medium brown (Fig. 6A). Abdominal sterna yellowish white, nervous ganglia tinted with purple. Posterior margin of terga with large pointed marginal teeth alternating with medium and short ones, and several rows of microdenticles above the margin (Fig. 8C). Posterolateral projections short, weakly sclerotized, reaching from slightly above 1/7 to 1/5 of the length of the following segment (Fig. 6C). Gills grayish brown with distinct brown, well-developed tracheation; gill I tongue-shaped, gills II-VII leaf-shaped, asymmetrical, gills III-IV slightly longer than wide (Fig. 8D-J). Cerci and paracercus yellowish-brown; each segment with a row of pointed stout setae.

Discussion
Ecdyonurus aurasius sp. nov. belongs to the subgenus Ecdyonurus by the shape of the apical sclerite of the male genitalia and the single row of stout setae on the ventral side of the labrum. However, this species presents some characters intermediate between the subgenera Ecdyonurus and Helvetoraeticus: the number of comb-shaped setae on the crown of the galea-lacinia is generally less than 20 in Ecdyonurus s.s., whereas our species exhibits a range of 16 to 22 setae; and the setae on the lateral margin of the superlingua are supposed to be long, including at the tip, whereas in our species those at the tip are shorter.
To these we can add the posterolateral projections on the abdomen, which are very short, and the nervous ganglia tinted with purple, two characters not frequent in Ecdyonurus s.s. but more common in Helvetoraeticus. Nevertheless, we are confident that our new species belongs to the subgenus Ecdyonurus. By the shape of the penis lobes and the posterolateral projections of the abdomen, E. aurasius sp. nov. is closely related to E. aurantiacus, E. dispar, E. rothschildi, and E. ifranensis. The first two are considered Mediterranean faunal elements, expanding to Central Europe or, in the case of E. dispar, even the British Isles (Bauernfeind and Soldán 2012). The nymph of E. aurasius sp. nov. can be separated from those of E. aurantiacus and E. dispar by the nervous ganglia tinted with purple and the tongue-shaped gill I, and from E. dispar also by the shape of the stout setae on the dorsal surface of the femora (acute and pointed in the latter). The new species presents more affinities with the two other North African endemics, but can be distinguished from E. rothschildi by the much longer pronotal projections, the shape of the stout setae on the dorsal surface of the femora (pointed in the latter), the shape of the gills (more symmetrical in E. rothschildi) and the shape of the glossae (inner margin rounded and convex in E. rothschildi). Ecdyonurus aurasius sp. nov. differs from E. ifranensis by the shape of the labrum (less broad in E. ifranensis), the shape of the stout setae on the dorsal surface of the femora (pointed in E. ifranensis), and the shape of the glossae, which is similar to that of E. rothschildi. In males, E. aurasius sp. nov. differs from E. rothschildi, E. dispar and E. aurantiacus by the compound eyes being separated and not touching (a character not stated in the E. ifranensis description), from E. aurantiacus and E. dispar by the smooth posterior margin of the basal sclerite, and from E. ifranensis by the first transversal vein in the costal field being surrounded by a dark brown maculation (the same as in E. rothschildi) and by the rounded posterior margin of the basal sclerite (straight in E. ifranensis). It is also worth noting that E. aurasius sp. nov. differs from the two other North African species by the nervous ganglia tinted with purple in female imagos, whereas they are colorless in E. rothschildi and E. ifranensis.

Distribution and biology
Ecdyonurus aurasius sp. nov., as known so far, is restricted to the Aurès region. The species has been recorded from only six localities in the Western Aurès area; most habitats are located in the highest parts of the streams, at altitudes ranging from 1010 to 1800 m a.s.l. These sites are small mountain watercourses with gravel substrate (Fig. 9). The average annual water temperature ranges from 5 °C to 18 °C, with high concentrations of dissolved oxygen (6.5 to 9.35 mg/L). The nymphs were sampled at current velocities ranging from 0.24 to 0.48 m/sec, average stream widths from 60 cm to 1.50 m, depths from 10 to 35 cm, and pH from 6.8 to 7.2. The highest population density was recorded at the Charchar site (60 individuals/m²) and the lowest at the Bouailef site (2-5 individuals/m²). Mature nymphs and subimagos (together with early-instar nymphs) were observed in May/June, and another generation was observed in September/October, suggesting a bivoltine life cycle.
The other Ephemeroptera species sporadically occurring at the same sites were Caenis luctuosa (Burmeister, 1839), Baetis chelif Soldán, Godunko & Thomas, 2005 and Baetis sinespinosus Soldán & Thomas, 1983.

Acknowledgements
Our sincere thanks to Sonia Zrelli (Bizerte, Tunisia), Mokhtar Benlasri (Marrakech, Morocco), Lina Kechemir (Tizi-Ouzou, Algeria) and Boudjéma Samraoui (Guelma, Algeria) for providing material useful for this study. Céline Stoffel (MZL) is thanked for her dedicated work in the molecular lab. We also express our gratitude to Robert Czerniawski, Director of IB-US, for encouraging this collaboration, and to Tomasz Krepski for providing support and lab material for the morphological observations in the IB-US laboratory. Comments from and discussion with the two reviewers, Lubos Hrivniak and Ernst Bauernfeind, greatly helped to improve the manuscript. This research was supported by the Algerian Ministère de l'Enseignement Supérieur et de la Recherche Scientifique.
Physical distress is associated with cardiovascular events in a high risk population of elderly men

Abstract
Background: Self-reported health perceptions such as physical distress and quality of life are suggested independent predictors of mortality and morbidity in patients with established cardiovascular disease. This study examined the associations between these factors and the three-year incidence of cardiovascular events in a population of elderly men with long-term hyperlipidemia.
Methods: We studied observational data in a cohort of 433 men aged 64-76 years from a prospective, 2 × 2 factorial designed, three-year interventional trial. Information on classical risk factors was obtained and the following questionnaires were administered at baseline: Hospital Anxiety and Depression Scale, Physical Symptom Distress Index and Life Satisfaction Index. The occurrence of cardiovascular death, myocardial infarction, cerebrovascular incidents and peripheral arterial disease was registered throughout the study period. Continuous data with skewed distributions were split into tertiles. Hazard ratios (HR) were calculated from Cox regression analyses to assess the associations between physical distress, quality of life and cardiovascular events.
Results: After three years, 49 cardiovascular events were registered, with similar incidence among subjects with and without established cardiovascular disease. In multivariate analyses adjusted for age, smoking, systolic blood pressure, serum glucose, HADS-anxiety and treatment intervention, physical distress was positively associated (HR 3.1, 95% CI 1.2-7.9 for 3rd versus 1st tertile) and quality of life negatively associated (HR 2.6, 95% CI 1.1-5.8 for 3rd versus 1st tertile) with cardiovascular events. The association remained statistically significant only for physical distress (HR 2.8, 95% CI 1.2-6.8, p < 0.05) when both variables were evaluated in the same model.
Conclusion: Physical distress, but not quality of life, was independently associated with increased risk of cardiovascular events in an observational study of elderly men predominantly without established cardiovascular disease.
Trial registration: NCT00764010

Background
Several psychosocial factors are shown to have independent adverse effects on cardiovascular mortality and morbidity. These include major depression [1], anxiety symptoms [2], type D personality (a stable tendency toward generally negative affect and social inhibition) [3], low socio-economic status [4], lack of social support [5], and stressful life events and job stress [6]. In addition, a positive psychological factor such as optimism seems to have a protective effect [7]. In current European guidelines for cardiovascular prevention, evaluation of such factors is recommended in all patients, but it is not a part of risk-score strategies [8]. The role of perceived distress from physical symptoms has been studied in various clinical materials. Epidemiological studies show an independent association between self-reported symptoms such as dyspnoea, cough and feeling cold and all-cause mortality [9]. In studies of patients with stable coronary heart disease, the symptom score measured by the Seattle Angina Questionnaire is associated with increased disease-specific mortality when adjusted for established risk factors and objective measures of cardiac function [10]. Thus, measuring subjective patient perceptions of physical distress may yield information not revealed by objective examinations.
This could be valuable when assessing risk and considering primary preventive measures. We are not aware of studies examining self-reported physical distress in relation to novel cardiovascular events among patients at high risk of cardiovascular disease (CVD). Quality of life (QOL) is a construct reflecting the patient's perception of several psychosocial factors. Such questionnaires are frequently applied as secondary outcomes in interventional studies, but have also been evaluated in prospective studies on cardiovascular health in recent years. Low QOL has been associated with increased mortality after cardiac surgery [11], percutaneous coronary intervention [12], acute myocardial infarction [13], and in stable coronary artery disease [10]. However, in healthy populations with risk factors for CVD, data is limited. Low well-being predicts stroke in elderly men with hypertension and hypercholesterolemia [14]. To our knowledge, the association between low QOL and other cardiovascular events, such as death, myocardial infarction and revascularization procedures, has not been studied in a high risk population. The aim of this study was to examine whether increased self-reported physical distress or low QOL is independently associated with the three-year incidence of cardiovascular mortality and morbidity among elderly men at high risk of CVD.

Study design and sample
The basis of recruitment for the present study, the Diet and Omega-3 Intervention Trial on atherosclerosis (DOIT), was the 910 survivors from the original population of 1232 otherwise healthy men with hypercholesterolemia (>6.45 mmol/l) in the Oslo Diet and Antismoking Study, carried out from 1972-1977 [15] (Figure 1). Altogether, 655 men aged 64-76 years attended a screening visit in 1997. Exclusion criteria in the DOIT were: total cholesterol > 8 mmol/l, blood pressure levels > 170/100 mmHg, and specific disease states thought to influence longevity or study compliance (cancer, end-stage renal failure, chronic alcoholism). A total of 77 were excluded prior to randomization and 15 were unwilling to participate. The 563 participants were randomized to a 2 × 2 factorial designed three-year prospective study of the effects of n-3 polyunsaturated fatty acids and/or dietary counselling on progression of atherosclerosis, measured by biochemical, functional and structural arterial wall properties. Further details of the inclusion criteria, intervention and follow-up have previously been reported [16]. The present article reports observational data from a subgroup of participants in the DOIT with complete baseline data collection.

Data collection
At randomisation, information about previous morbidity, medications and current smoking was registered. Data from clinical examinations were collected by one of the authors (EMH) and blood tests were drawn after overnight fasting under standard procedures. HADS is a 14-item questionnaire on symptoms of anxiety (HADS-A) and depression (HADS-D), with each item scored from 0 (no problems) to 3 (maximum distress). Its reliability and validity as a screening instrument has been confirmed in cardiovascular patients [20]. PSDI quantifies physical distress using 13 questions on a scale ranging from 1 (not at all) to 5 (very much). It is a revised version of a questionnaire developed by the National Heart, Lung and Blood Institute for follow-up studies in hypertension [18]. Quality of life was measured by the 14-item LSI, using a scale of 1 (very satisfied) to 4 (very unsatisfied). PSDI and LSI have previously been used in Norwegian populations [21]. The individual items of the PSDI and LSI are given in Table 1. Questionnaires from individuals with one missing item on the HADS subscales, up to two missing items on the PSDI and up to four missing items on the LSI were included in the analyses after simple imputation, with the missing items estimated as the average of the other items.
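The simple imputation rule just described (missing items replaced by the respondent's mean over answered items, up to a per-scale limit) can be sketched as follows; the column names and helper function are hypothetical, not taken from the study:

```python
import pandas as pd

def score_scale(items: pd.DataFrame, max_missing: int) -> pd.Series:
    """Sum-score a questionnaire, imputing up to `max_missing` missing
    items per respondent as that respondent's mean over answered items;
    respondents exceeding the limit get a missing total."""
    n_missing = items.isna().sum(axis=1)
    row_mean = items.mean(axis=1)  # mean of the answered items
    filled = items.apply(lambda col: col.fillna(row_mean))
    return filled.sum(axis=1).where(n_missing <= max_missing)

# e.g. 13 PSDI items scored 1-5, allowing up to two missing items:
# psdi_total = score_scale(df[[f"psdi_{i}" for i in range(1, 14)]], 2)
```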
The primary endpoint was a composite of cardiovascular death, myocardial infarction, percutaneous coronary intervention, coronary-artery bypass grafting, cerebral stroke, surgery on abdominal aortic aneurysm, and revascularization procedures in peripheral arterial disease. The events were confirmed by medical records and data supplied from the death-cause register in Norway, and verified by an independent cardiologist. The study was approved by the regional ethics committee, and all subjects gave their written informed consent prior to participation.

[Figure 1: Flow chart of the study population. n-3 PUFA, n-3 polyunsaturated fatty acids; CV, cardiovascular.]

Statistics
Continuous variables with normal distributions were standardized by dividing by their standard deviation (SD). Variables with skewed distributions were categorized into tertiles. The strength of bivariate correlation between continuous variables was assessed using Pearson's correlation analysis. Univariate Cox proportional hazards regression analyses were used to estimate the associations between classical risk factors for CVD at baseline, physical distress, quality of life, and cardiovascular events. The hazard ratios were calculated as the increase in risk of an event when the selected variable was increased by one SD or one tertile. Classical risk factors associated (p < 0.20) with the outcome in univariate analyses were entered into multivariate analyses, and those significantly (p < 0.05) associated with the outcome were kept in the model. Further, physical distress and quality of life were entered separately into the multivariate model. Finally, physical distress and quality of life were evaluated together in the same model.
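As a sketch of the modelling strategy just described (tertiles for skewed variables, per-SD standardization for normal ones, Cox proportional hazards for the event outcome), the following uses the lifelines package on synthetic stand-in data; none of the numbers are from the study, and the tertile is simplified to an ordinal term, whereas the paper contrasts the 3rd versus the 1st tertile:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 433

df = pd.DataFrame({
    "psdi": rng.gamma(2.0, 5.0, n),                  # skewed distress score
    "sbp": rng.normal(148, 20, n),                   # roughly normal variable
    "time": rng.exponential(3.0, n).clip(0.1, 3.0),  # years of follow-up
})
df["event"] = (rng.random(n) < 0.11).astype(int)     # ~11% event rate

# Skewed -> tertiles (1-3); normal -> standardized per SD
df["psdi_tertile"] = pd.qcut(df["psdi"], 3, labels=False) + 1
df["sbp_per_sd"] = (df["sbp"] - df["sbp"].mean()) / df["sbp"].std()

cph = CoxPHFitter()
cph.fit(df[["psdi_tertile", "sbp_per_sd", "time", "event"]],
        duration_col="time", event_col="event")
cph.print_summary()  # exp(coef): HR per tertile step / per SD increase
```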
Results
A total of 433 subjects filled out all questionnaires at baseline and were included in further analyses. Baseline characteristics of these participants are given in Table 2. When applying standard European risk stratification (SCORE) [8], 369 subjects (85%) were considered to be at high risk or were using antihypertensive medication or statins. The 130 subjects with incomplete questionnaires had significantly higher systolic blood pressure (152 ± 18 vs. 148 ± 20, p = 0.036) and lower body mass index (25.9 ± 3.3 vs. 26.7 ± 3.5, p = 0.001), but showed no significant differences in other classical risk factors, anxiety or depression. The proportions of patients reporting distress or dissatisfaction on each item of the PSDI and LSI are given in Table 1 (numbers refer to the proportion reporting "some", "moderate", "much" or "very much" on the PSDI and "unsatisfied" or "very unsatisfied" on the LSI, respectively, among the 433 participants with complete questionnaires). There were 49 cardiovascular events among the 433 participants (11%), not significantly different from the 19 (15%) cardiovascular events among the 130 subjects without complete questionnaires. Additional file 1 presents the univariate associations between classical risk factors, psychosocial parameters and cardiovascular events. High physical distress and low QOL were significantly associated with the incidence of cardiovascular events. In correlation analyses, PSDI and LSI were significantly correlated (p < 0.001) both with each other and with HADS. The strongest correlations were between HADS-A and PSDI and between HADS-D and LSI (r = 0.42, p < 0.001), with the other r's ranging from 0.33 to 0.35. Among the classical cardiovascular risk factors, previous cardiovascular disease, current smoking, level of serum glucose, diabetes, systolic blood pressure and LDL-cholesterol, together with HADS-anxiety, were entered into the multivariate analyses along with the treatment modality. However, only level of serum glucose, systolic blood pressure, current smoking and HADS-anxiety were significantly associated with the outcome, and these constituted the multivariate model into which our main covariates, PSDI and LSI, were entered. When entering only PSDI in the model, subjects in the upper PSDI tertile had a significantly increased hazard compared with subjects in the lower tertile (HR 3.1, 95% CI 1.2-7.9), while HADS-anxiety was no longer significantly associated. When entering only LSI, the association with the main outcome was somewhat stronger than in the univariate analyses (HR 2.6, 95% CI 1.1-5.8). Finally, entering both covariates of interest in the same multivariate model weakened their associations with cardiovascular events, although the association remained statistically significant for PSDI (Additional file 1).

Discussion
This study demonstrates that physical distress is independently associated with the three-year incidence of cardiovascular events in high risk elderly men. We found no significant independent association between low QOL and cardiovascular events. Our results support previous suggestions that an association between physical distress and the incidence of cardiovascular events exists [10]. Such a measure probably reflects two dimensions: the level of symptom burden, and the patient's perception of it. Fatigue/tiredness, dyspnoea and peripheral coldness were the most prevalent symptoms reported. These are general symptoms, but they are all associated with the cardiovascular system and circulation. One hypothesis is that such symptoms may represent atherosclerotic disease not yet manifested as end-organ damage or events, which is in accordance with previous data in healthy subjects with breathlessness [22]. Although our data did not support an independent adverse effect of anxiety symptoms on the three-year incidence of CVD after adjusting for physical distress, the correlation between anxiety and physical distress leaves open the possibility of an underestimation of the association between anxiety and cardiovascular events. Likewise, some of the effect of physical distress on the incidence of CVD may be explained by anxiety. Our subjects with low QOL had a tendency toward more cardiovascular events than subjects with higher QOL, adjusted for classical risk factors and HADS-anxiety. This is in line with a recent report in a similar population, where low QOL was independently associated with the risk of cerebral stroke [14], and may represent an effect common to cardiovascular diseases. However, this association was weakened when adjusted for physical distress. Although our scale mostly measured social dimensions such as social relations, the participants reported the least satisfaction when asked about energy, an item closely related to the PSDI.
Hence, one possibility is that the univariate association between QOL and cardiovascular events mainly reflects consequences of physical health, in this case cardiovascular-related symptoms. There are several limitations to consider in the interpretation and generalisation of our data. The inclusion procedures in both the Oslo Diet and Anti-Smoking Study in 1972-77 and the present DOIT may have been biased by a lower prevalence of psychiatric symptoms or other individual psychosocial problems hindering motivation to participate in a lifestyle intervention study. We believe that this selection bias does not weaken our conclusion, since inclusion of subjects with more anxiety and depression at baseline would have led to a higher incidence of CVD. The significant proportion of patients with missing data could be another source of selection bias. As the main cause of missing data was random administration failure of questionnaires (n = 55), and the incidence of new cardiovascular events was similar to that among those completing the questionnaires, we believe that this bias would not have a major influence on our main findings. Although our population primarily consisted of high-risk individuals without prior manifestations of CVD, a minority had verified previous cardiovascular events. However, the presence of previous events was not significantly associated with the incidence of new events during the study. In addition, our main findings were similar when considering only patients without previous CVD (data not shown). Finally, due to the limited sample size and the observational design, unknown confounders could have weakened the association between physical distress and cardiovascular events. The clinical relevance of our results is that evaluation of subjective health complaints may contribute valuable information in addition to standard risk evaluation in patients at high risk of CVD. Information on physical distress and quality of life can easily be obtained by all health personnel by asking a few questions about the degree of breathlessness, fatigue or peripheral coldness, or with predefined questionnaires. Such factors may be considered as elements in future prospective studies on cardiovascular health, to further evaluate this association.

Conclusion
We have shown that increased self-reported physical distress, but not quality of life, was significantly associated with the three-year incidence of cardiovascular events after adjustment for classical risk factors in an observational study of elderly men at high risk.
An analysis of chronic kidney disease as a prognostic factor in pediatric cases of COVID-19

Abstract
Advanced age is a risk factor for severe infection by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Children, however, often present with milder manifestations of Coronavirus Disease 2019 (COVID-19). Associations have been found between COVID-19 and multisystem inflammatory syndrome in children (MIS-C). Patients with the latter condition present more severe involvement. Adults with comorbidities such as chronic kidney disease (CKD) are more severely affected. This narrative review aimed to look into whether CKD contributed to more severe involvement in pediatric patients with COVID-19. The studies included in this review did not report severe cases or deaths, and indicated that pediatric patients with CKD and previously healthy children recovered quickly from infection. However, some patients with MIS-C required hospitalization in intensive care units and a few died, although it was not possible to correlate MIS-C and CKD. Conversely, adults with CKD reportedly had an increased risk of severe infection by SARS-CoV-2 and higher death rates. The discrepancies seen between age groups may be due to immune system and renin-angiotensin system differences, with more pronounced expression of ACE2 in children. Immunosuppressant therapy has not been related with positive or negative effects in individuals with COVID-19, although current recommendations establish decreases in the dosage of some medications. To sum up, CKD was not associated with more severe involvement in children diagnosed with COVID-19. Studies enrolling larger populations are still required.

Introduction
December of 2019 marked the start of the dissemination of a new infectious disease called Coronavirus Disease 2019 (COVID-19) in the Chinese city of Wuhan, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 1.
In light of its rapid global dissemination, the World Health Organization (WHO) elevated the disease to the category of a pandemic on March 11, 2020 2. SARS-CoV-2 is a severe threat to public health that has produced a significant number of deaths and tens of millions of confirmed cases of infection throughout the world. On August 30, 2020, when this paper was being finalized, 25.03 million individuals had been diagnosed with the disease, with an incidence rate of 3,211 cases per million population and a mortality rate of 3.4%, adding up to 843,158 deaths and 108.17 deaths per million population 3. Six months after the first case of COVID-19 was recorded in Brazil, the 28th Special Epidemiological Newsletter indicated that 316,814 Brazilians infected with SARS-CoV-2 by August 22, 2020 had been hospitalized, of which 7,436 (2.3%) were aged 0-19 years. Children were further subdivided into groups of subjects aged less than a year, with 1,934 cases; aged 1-5 years, with 1,862 cases; and aged 6-19 years, with 3,642 cases. The death toll read 111,258, of which 759 (0.6%) were aged 0-19 years. In the subgroups of children, 248 deaths of individuals aged less than a year, 123 deaths of individuals aged 1-5 years, and 388 deaths of individuals aged 6-19 years were recorded. About a quarter of the deaths (29,522) were recorded among individuals aged 20-59 years, while 80,977 individuals aged 60 years or older (75%) died of the disease 4. Although pediatric patients apparently experience milder forms of COVID-19, multisystem inflammatory syndrome in children (MIS-C) has been reported as a possible complication of infection by SARS-CoV-2 5. Middle-aged and elderly subjects are at higher risk of developing severe acute respiratory syndrome (SARS), experiencing complications, and dying. There is consensus in the literature that advanced age and underlying disease, including chronic kidney disease (CKD) 6, are important risk factors for severe infection by SARS-CoV-2 1. However, we still lack conclusive information on the possible role CKD may have in the development of COVID-19 in pediatric patients.

Objectives
This review looked into the literature on pediatric patients with CKD to verify whether they were more prone to developing severe symptoms when diagnosed with COVID-19, compared to children without CKD and adults with CKD.

Ethics
This study is a literature review and, as such, does not require submission to or approval by an Ethics Committee, as set out in Resolution 466/12 of the National Health Council (CNS, Brazilian acronym). However, established ethical principles concerning the legitimacy, secrecy, and private nature of the information provided herein were complied with when needed.

A study performed in Northern China reviewed the cases of 31 children aged six months to 17 years who were hospitalized after testing positive for SARS-CoV-2 with RT-PCR. None had comorbidities. Four patients (13%) were asymptomatic, 13 (42%) had mild symptoms, and 14 (45%) were categorized as common type. None developed severe disease. Twenty (65%) had fever lasting from one to nine days. One had high fever (>39.1 °C), nine had moderate fever (38.1-39.0 °C), and ten had low fever (37.3-38.0 °C); 14 (45%) had cough; three (10%) had fatigue; three (10%) had diarrhea. Other symptoms, including sore throat, coryza, dizziness, headache, and vomiting, were rarely reported. Kidney dysfunction and blood glucose level alterations were not found.
Fourteen children had alterations in their chest computed tomography scans, of which nine had nodules and uneven ground-glass opacity. Most of the patients (n = 29) were prescribed antiviral therapy: ten were treated with interferon (nebulized or pulverized for oral administration, 6-10 days); one individual was treated with oseltamivir alone (oral, twice a day, two days); the other patients were given combined therapy with interferon (nebulized or oral spray, 2-24 days), oseltamivir phosphate (oral, twice a day, five days), ribavirin (intravenous, 9-13 days), Arbidol (oral, 6-16 days), and lopinavir/ritonavir (oral, twice a day, 7-18 days). On day 18, one patient had elevated transaminase levels without other adverse reactions. Antibacterial drugs were prescribed to six children for 5-11 days. Two children were treated with intravenous infusions of gamma globulin (379 mg/kg/day and 278 mg/kg/day) for four days. Eight children were given treatment for symptoms based on traditional Chinese medicine. Between days 7 and 23, 25 children tested negative; six remained hospitalized in short-term isolation; one subject was not discharged for presenting acute suppurative tonsillitis 8. The studies mentioned in this section are listed in Table 1.

Kawasaki-like disease and hyperinflammatory syndrome in pediatric patients

A Brazilian study followed 79 children admitted to 19 pediatric intensive care units and found that 41% had comorbidities prior to hospitalization, 28% of which were neuromuscular conditions, predominantly non-progressive encephalopathies, while chronic respiratory diseases, cancer and blood diseases, congenital heart defects, and malnutrition combined were seen in 27% of the patients. Diabetes, prematurity, chronic liver disease, and obesity were observed in 18% of the patients. In this study, Prata-Barbosa et al. looked into the predictive factors tied to severe forms of COVID-19. The patients were subdivided into groups to elicit correlations of sex, ethnicity, age of less than one year, and comorbidities (32-41%) with severity of involvement, denoted by the prescription of invasive mechanical ventilation (IMV). Results with a confidence interval (CI) of 95% failed to find direct associations of male sex (p = 0.32), non-Caucasian ethnicity (p = 0.80), and age of less than one year (p = 0.64) with use of IMV; however, the study revealed that the presence of comorbidities alone (p = 0.01) was significantly associated with severe involvement. The patients included in the study were aged between one month and 19 years, resulting in a mean age of four years; 43 (54%) were boys and 36 (46%) were girls. Nineteen (24%) of the hospitalized patients were aged less than one year; five (26%) had comorbidities and three (16%) required IMV. Fourteen patients (18%) required IMV; ten of them (71%) had comorbidities, and in none was CKD described as a chronic condition. The study found that ten children (12.7%) had MIS-C, and that two of them (20%) had comorbidities and one (10%) required IMV. Two deaths (3%) of individuals without MIS-C and with comorbidities were recorded. The children with comorbidities were generally older, with a mean age of 7.5 years; they also required more IMV (31% vs. 9%, p = 0.01) and were more frequently diagnosed with acute respiratory distress syndrome (ARDS) (25% vs. 4%, p = 0.01) 9. Feldstein et al.
analyzed the cases of 186 patients aged 0-21 years diagnosed with MIS-C, whose incidence increased when COVID-19 rates were decreasing in the United States. Seventy percent of the patients (n = 131) had tested positive for infection by SARS-CoV-2 in RT-PCR tests, antibody tests, or both. The remaining 55 (30%) had been in contact with individuals with COVID-19. Seventy-three percent (n = 135) had been healthy individuals, and comorbidities affecting the other patients were not thoroughly described. Thirty-five patients (19%) were non-Hispanic whites, 46 (25%) were non-Hispanic blacks, 9 (5%) were of other non-Hispanic ethnicities, 57 (31%) were Hispanic or Latinos, and 41 (22%) were of unknown ethnicity. A total of 132 patients (71%) had involvement of at least four systems, the most common of which were the gastrointestinal (92%), cardiovascular (80%), hematologic (76%), mucocutaneous (74%), and respiratory (70%) systems. Eighty percent of the patients (n = 148) required hospitalization at an intensive care unit, one fifth (n = 37) needed IMV, and eight individuals (4%) received extracorporeal membrane oxygenation (ECMO). In the subgroup with MIS-C, 74 patients (40%) had fever for at least five days and presented four or five traits similar to Kawasaki disease, or two or three traits similar to Kawasaki disease with additional lab workup and echocardiogram findings. On May 20, 2020, 130 patients (70%) had been discharged, four (2%) had died, and the rest of the group remained in hospital 10. The studies mentioned in this section are listed in Table 2.

Pediatric patients with CKD

A study performed in Spain enrolled 16 children under the age of 18 years previously diagnosed with CKD who tested positive for infection by SARS-CoV-2 in RT-PCR tests. Ten patients had pre-dialysis CKD, three were on hemodialysis, and three had undergone kidney transplantation. By way of symptoms, 62.5% of the children had coughs and/or rhinorrhea, 50% had fever, 25% had gastrointestinal symptoms, and three (19%) were asymptomatic. Data on lymphopenia were available only for 12 patients, and four (30%) had the condition. Eight of the 16 patients were hospitalized, but none required admission to a pediatric intensive care unit. Nine children had been on immunosuppressants previously (three transplant patients, one on chronic hemodialysis with vasculitis, four with nephrotic syndrome, and one with IgA nephropathy). Immunosuppressant therapy was interrupted or dosages were decreased in four children. Five patients were prescribed hydroxychloroquine, and one patient had first been given lopinavir-ritonavir, which resulted in adverse gastrointestinal effects, and was later converted to therapy with hydroxychloroquine. The patients recovered fully 19 days after diagnosis on average, and no death was recorded 1. The European Rare Kidney Disease Reference Network started a study involving 16 centers from 11 countries. The study included 18 children aged 0-19 years on immunosuppressants diagnosed with COVID-19; 11 patients had previously undergone kidney transplants. None of the patients had dyspnea or required admission at an intensive care unit (ICU) 11. Members of the European Reference Network on Pediatric Transplantation from the Padova University Hospital (Italy) and the La Paz University Hospital (Spain) sent out a questionnaire to other members in order to assess the impact of COVID-19 in pediatric transplants performed in Europe.
Eighteen centers from 11 countries answered the questionnaire, in which two kidney transplant candidates and two transplant patients were cited as having been diagnosed with COVID-19. All had mild clinical signs and none required admission at an intensive care unit or changes to immunosuppressant therapy 12. A case report published by Bush et al. described the case of a 13-year-old who contracted COVID-19 five years after having a kidney transplant. The patient did not develop complications or require changes to immunosuppressant therapy, and recovered rapidly 13. In the United Kingdom, five children with CKD stages IV or V tested positive for infection by SARS-CoV-2. None died 14. The studies mentioned in this section are listed in Table 3.

Adult patients with CKD

A study enrolled 12 kidney transplant patients aged 29-66 years diagnosed with COVID-19 based on positive RT-PCR tests for SARS-CoV-2. The most common symptoms were fever, cough, and dyspnea, seen in 75%, 75%, and 41.7% of the patients, respectively. A third of the patients had leukopenia. All had been on immunosuppressants, and dosages were decreased based on the protocols in effect at the center in which the study was carried out. The patients were prescribed hydroxychloroquine 400 mg, lopinavir-ritonavir 400/100 mg twice a day, and intravenous antibiotics. Ten were admitted to an intensive care unit, and eight died of severe pneumonia caused by COVID-19 and acute respiratory distress syndrome 15. In a meta-analysis, Brandon Michael Henry and Giuseppe Lippi showed that four studies analyzed separately did not report CKD as a significant predictor for severe COVID-19 in adults. However, the combined data revealed a significant association between CKD and severe COVID-19 6. The studies mentioned in this section are listed in Table 4.

Discussion

In the context of the SARS-CoV-2 pandemic, there is mounting concern over the infection of pediatric patients with CKD, since in adults the disease is a risk factor for severe involvement. Specialists suggested that children with CKD might be at higher risk during the pandemic on account of immunosuppressant therapy and greater exposure to hospitals and health care clinics, since they require hemodialysis and other therapies 16. Few papers have been published about children with CKD who become infected by SARS-CoV-2, possibly because children account for only 1-5% of the cases of COVID-19 17 and there is little epidemiological data on pediatric CKD. The prevalence of CKD stage V in the United States in 2016 was 104 per million patients aged 0-21 years 18.

(Summary of the studies in Table 2. Prata-Barbosa et al., multicenter study, 79 patients: gastrointestinal symptoms were more prevalent in patients with MIS-C; sixty-nine children did not develop MIS-C, of whom 30 had comorbidities and 13 required IMV; two patients died; children with comorbidities were older, required more oxygen therapy, and were more frequently diagnosed with ARDS; none of the patients had CKD. Feldstein et al., multicenter study, 186 patients: comorbidities were not described in detail; the incidence of MIS-C increased after peaks in the number of cases of COVID-19; one hundred and thirty-two patients had involvement of at least four systems; one hundred and forty-eight were admitted to an intensive care unit, 37 were on IMV, and eight on ECMO; one hundred and thirty were discharged, four died, and the other patients were still hospitalized at the end of the study.)

In contrast, prevalence among young adults aged 22-44 years was approximately 967 per million population 18,
and in the group aged 45-64 years prevalence was approximately 3,883 per million population 18. The prevalence of CKD stage V clearly increases with age. In the analyzed studies, the most common symptoms among pediatric patients with CKD were cough, fever, gastrointestinal symptoms, and symptoms consistent with acute upper airway infection, as also seen in children without CKD. Although children with CKD stay in hospital for longer, the disease played out similarly as it did in healthy children, with mostly mild symptoms and good progression. Evidence suggests the existence of an association between infection by SARS-CoV-2 and MIS-C based on temporal relations, positive COVID-19 tests in most patients with MIS-C, and hyperinflammatory manifestations similar to what is seen in adult COVID-19 patients 10,[19][20][21]. MIS-C has been described as an unusual complication of COVID-19 10 that manifests weeks after infection, with greater prevalence observed after peaks in COVID-19 cases. The condition has been associated with more severe involvement and greater need of IMV. However, the reasons why only some children and adolescents develop MIS-C are unclear. A potential explanation is based on age-related differences, which might yield different probabilities of exposure to SARS-CoV-2 or differences in the nasal expression of ACE2 10,22. Another possibility is that children genetically susceptible to Kawasaki disease might present decreased expression of membrane-bound ACE2 23. Once they are infected with SARS-CoV-2, the expression of ACE2 would be more significantly deregulated by TNF-α, leading to inflammation and development of Kawasaki-like disease 23. However, an association between CKD and MIS-C has not been described. It should be noted that the epidemiological manifestation of Kawasaki disease associated with COVID-19 is different from the norm. In general terms, the condition is more prevalent during early childhood in individuals of Asian descent, while in subjects with SARS-CoV-2 it occurs in older, previously healthy pediatric patients of African and Hispanic descent 5. Comparisons between pediatric and adult patients with CKD revealed that adult individuals with COVID-19 developed more severe disease. The most common symptoms observed in these patients were fever, cough, dyspnea, severe pneumonia, and acute respiratory distress syndrome. Most of the adult patients had to be admitted to an intensive care unit and many eventually died. An association has been found between CKD and severe COVID-19 in adult patients. The same cannot be said of pediatric patients with CKD. A few ideas have been considered to explain the causal link between the different forms and manifestations of COVID-19 in adults and children, since prognosis is connected not only to preexisting comorbidities, but also to physiological oscillations in the molecules of the renin-angiotensin system (RAS) that occur during the course of life 24. Zhou et al. confirmed that SARS-CoV-2 uses angiotensin converting enzyme 2 (ACE2) to penetrate host cells 25. ACE2 is the first known angiotensin converting enzyme (ACE) homologue, with 40% identity and 60% similarity. It carries an apparent signal peptide, one metalloproteinase active site, and one transmembrane domain 26,27. ACE2 is also present in other sites and cells, including alveolar cells and lymphocytes, which may explain lung involvement and lymphopenia in individuals with COVID-19 28.
In the RAS, ACE promotes the conversion of angiotensin I (Ang I) into angiotensin II (Ang II), which binds to the AT1 receptor. This axis, known as the classical axis, produces vasoconstriction and proinflammatory and pro-oxidative effects. In turn, ACE2 converts Ang II into Angiotensin-(1-7) [Ang-(1-7)] 28. The main effects of Ang-(1-7) are mediated by a G-protein coupled receptor called Mas (MasR) 29, which triggers vasodilation, antioxidant and anti-apoptotic effects, and inhibition of inflammatory response and fibrosis 28,30,31. Therefore, the ACE2/Ang-(1-7)/MasR axis, also known as the counter-regulatory axis, acts in opposition to the ACE/AngII/AT1R axis 28, as both play an important role in the regulation of the immune system [30][31][32]. COVID-19 apparently causes, at least in part, imbalances in the RAS by negatively regulating ACE2, thereby exacerbating the ACE/AngII/AT1R axis 53 and producing predominantly proinflammatory effects. This imbalance is the outcome of an apparent paradox related to the bioavailability of ACE2: either (A) infected individuals have higher levels of ACE2 and are able to support the exacerbated consumption of this enzyme and neutralize the deleterious effects caused by low ACE2 levels; or (B) individuals with naturally lower ACE2 levels cannot activate the anti-inflammatory pathway of the RAS, thereby exacerbating the ACE/AngII/AT1R axis 54. In turn, this exacerbation, via endothelial dysfunction and cytokine storm, induces acute lung injury 30. In general terms, children have higher ACE2 expression levels than adults 31,55 and tend to present responses consistent with case A described above, with cytokine storm and severe COVID-19 becoming less probable events. Furthermore, Bunyavanich et al. showed that children aged less than ten years have lower ACE2 expression levels in the nasal epithelium (one of the main entry points of SARS-CoV-2) than older children and adults 22,56, which explains the lower incidence of infection in this age group. In addition, estrogen and testosterone levels decrease with age, thereby increasing plasma renin activity and changing the balance between the RAS axes 57. Therefore, a natural negative regulation of the ACE2/Ang-(1-7) axis occurs in the elderly 58, making them more susceptible to severe COVID-19. The milder forms of viral diseases seen in children may also be due to the fact that they have relatively immature immune systems 56,59, with low T and B cell levels, lesser Th1 and type 1 IFN inflammatory responses, and greater Th2 and Th17 inflammatory responses 56,60,61. Furthermore, the high level of regulatory T cells in children may protect them against severe manifestations of COVID-19 56,62. In turn, adults usually present greater levels of Th1 inflammatory response, as verified in infections by SARS. A greater-magnitude Th1 response is closely related to severe disease 56,63,64, and, due to the significant homology between the SARS viral sequence and SARS-CoV-2, it is likely that this mechanism also occurs in COVID-19 56. One should notice, however, that we still lack evidence to define the causes of milder disease in children. The hypotheses currently considered require confirmation from future studies. The effects of immunosuppressant therapy in individuals infected with SARS-CoV-2 have been discussed in the literature. Immunosuppression is one of the cornerstones in the treatment of kidney disease.
Discontinuing therapy may significantly compromise the health of individuals with kidney disease and the management of their underlying condition, which may become more harmful than the viral infection itself 65. Immunosuppression may facilitate contamination and dissemination of the virus in one's body, ultimately producing more severe cases of COVID-19 66. Data from the Centers for Disease Control and Prevention (CDC) collected between February and April 2020 showed that among children with comorbidities diagnosed with COVID-19, immunocompromised individuals were the most severely affected 65. Among children with CKD, immunocompromised patients include children with stage IV or V disease, subjects on hemodialysis, and individuals on immunosuppressants, including steroids, calcineurin inhibitors, cyclophosphamide, mycophenolic acid, and rituximab 65. Calcineurin inhibitors are prescribed to transplant patients and cause a significant decrease in the adaptive immune response, which potentially increases the chances of uncontrolled virus dissemination 66. In vitro studies have shown that non-immunosuppressive derivatives of cyclosporine may decrease the levels of viral N protein, an element in viral replication. One might speculate that such derivatives may be used instead of calcineurin inhibitors within the context of the SARS-CoV-2 pandemic 66. Some authors have suggested that immunosuppression may have a role in the treatment of patients with COVID-19 67,68. Since one of the pathophysiological mechanisms of the disease is a state of hyperactivation of the immune system, particularly the exacerbated activation of T cells, the anti-inflammatory effects of immunosuppressants might decrease the immune response and consequently ameliorate lung injuries and reduce the severity of the disease 67. Xu et al. analyzed post mortem biopsy specimens and found high levels of CD4 T cells with CCR6+ and Th17 in lung tissue 68. The severity of lung injury may be connected, among other factors, to increased T cell activity in the organ. Additionally, lymphopenia in patients with severe COVID-19 brings up the possibility of hyperactive defense cells being sequestered to the lungs, thus decreasing their circulating levels. Chronic immunosuppressant therapy and compromised T cell function might produce a protective effect against the complications arising from the disease 67. Prescribed medications include tacrolimus and cyclosporine, which decrease the production of IL-2, a cytokine that modulates T cell proliferation and maturation, and tacrolimus and mycophenolic acid, which inhibit IL-17 and Th17 lymphocytes, thus decreasing stimuli to produce IL-6 and IL-8 67. Therefore, another consequence of immunosuppressant therapy is decreased cytokine release, which in turn may prevent the occurrence of the cytokine storms characteristically seen in patients with COVID-19 and thus yield milder symptoms 13. The continuation of immunosuppressant therapy, and the initiation of such therapy in clinically eligible patients with CKD, are warranted 65,66,67,69. The general recommendation for patients suspected or diagnosed with infection by SARS-CoV-2 is to reduce immunosuppression to safer levels that allow the management of the underlying condition and decrease the risk of infection 66. More studies are needed to define the order of magnitude of such decreases in immunosuppressant therapy.
In this context, and in all other related medical decisions, individual patient characteristics must be considered in the development of a treatment plan 66.

Conclusion

In regard to the differences in pediatric cases of COVID-19, children with and without CKD generally develop mild disease and do not require hospitalization in an intensive care unit. The papers reviewed in this study did not show significant differences in the severity of COVID-19 in these groups of patients. Therefore, current evidence does not allow the establishment of a relationship between CKD and severity of COVID-19 in children. Additional studies enrolling larger populations are needed. Albeit rare, some children developed MIS-C as a possible complication of infection by SARS-CoV-2 and had more severe disease. This complication is apparently tied to decreased membrane-bound ACE2 expression. Therefore, studies looking into polymorphisms of the ACE2 gene and its expression levels may be useful 23 in defining disease severity and providing input to patient management. Finally, adults with CKD had more severe COVID-19 symptoms compared to children with CKD. The underlying mechanisms that explain these differences are still the fruit of speculation, but they are possibly related to negative regulation of the ACE2/Ang-(1-7) axis in the elderly, greater expression of ACE2 in children, and differences in the immune response of pediatric, adult, and elderly patients. However, additional studies are required to confirm the ideas mentioned above.

Authors' contributions

Bárbara Caroline Dias Faria and Luiz Gustavo Guimarães Sacramento are the main authors. The two reviewed the literature and wrote the first version of the manuscript. Carolina Sant' Anna Filipin, Aniel Feitosa da Cruz, and Sarah Naomi Nagata contributed equally to the manuscript by helping to review and select papers and writing parts of the manuscript. Ana Cristina Simões e Silva contributed substantially by critically reviewing the manuscript and approving the final version for publication.
2021-03-12T06:16:08.321Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "ee24f91a222ea81d5d045269313ed05ddfedb86e", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/jbn/a/gWHfcvMZfcdhpKYpVxF3xqc/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a927c303c856d903c8044d104be070c66121653", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235482659
pes2o/s2orc
v3-fos-license
Modeling the dynamic of COVID-19 with different types of transmissions

In this paper, we propose a new epidemiological mathematical model for the spread of the COVID-19 disease with a special focus on the transmissibility of individuals with severe symptoms, mild symptoms, and asymptomatic symptoms. We compute the basic reproduction number and we study the local stability of the disease-free equilibrium in terms of the basic reproduction number. Numerical simulations are employed to illustrate our results. Furthermore, we study the present model in the case where the vaccination of a portion of susceptible individuals is taken into consideration, in order to predict the impact of the vaccination program.

Various mathematical models have been proposed to describe the spread of COVID-19 in the human population: Wu et al. [49] employed a simple susceptible-exposed-infectious-recovered (SEIR) based model to forecast the potential of COVID-19 to spread in China and beyond. They estimated the reproduction number (R_0) and the epidemic doubling time, indicating the exponentially growing nature of the COVID-19 outbreak. Tang et al. [44] developed and analyzed a deterministic model which incorporates quarantine and hospitalization to estimate the transmission risk of COVID-19 and its implications for public health interventions. Their model was extended in Ngonghala et al. [37] by dividing the infectious compartment into two essential compartments, hospitalized and isolated individuals, and those in intensive care units, to assess the impact of non-pharmaceutical interventions on curtailing the spread of the COVID-19 pandemic. Musa et al. [35] proposed a deterministic model to show the importance of timely quarantine and hospitalization in reducing the epidemic. In [34] the authors proposed to study the transmission dynamics of COVID-19 in Nigeria. Their model incorporates different hospitalization measures for mild and severe cases to assess the effect of awareness programs on the dynamics of COVID-19 infection. In [38], the authors presented a qualitative mathematical model using the stability theory of differential equations and the basic reproduction number, an epidemic indicator obtained from the largest eigenvalue of the so-called next-generation matrix, to find the likely outcome of an outbreak, which is beneficial for public health initiatives. Other mathematical models with ordinary time derivatives can be found in the literature [4,14,[29][30][31],38]. In recent years, fractional-order dynamical systems have appeared in several areas of science and engineering as a field of mathematical analysis [22]. It deals with the investigation and application of integrals and derivatives of arbitrary order instead of classical integer-order differentiation and integration, see [5,23,25,32,39,45,46]. Nowadays, many researchers have focused their interest on investigating fractional-order dynamics in connection with the COVID-19 pandemic, since fractional models can explain natural phenomena more accurately than integer-order differential equations, especially phenomena associated with hereditary properties and history-based behavior [40]: In [1], Akgul, Ahmed et al. analyzed a differential equation model related to COVID-19. They used fractal-fractional derivatives, analyzed the equilibria of the model and its stability in detail, and solved the model numerically.
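Several of the fractional-order models discussed in this section rely on the Caputo derivative; for reference, its conventional definition is given below (this standard form, with lower terminal 0 and order n − 1 < α < n, is supplied for clarity and is not quoted from the cited papers):

{}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \qquad n-1 < \alpha < n, \; n \in \mathbb{N}.

For 0 < α < 1, the case most common in epidemic models, the derivative is a weighted integral of f' over the whole history [0, t], which is what endows Caputo-type models with the memory and hereditary properties mentioned above.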
In [17], the authors considered the SIRU model: susceptible S(t), asymptomatic infectious I(t), reported symptomatic R(t), and unreported symptomatic infectious U(t); they applied the Caputo fractional operator to the reported and unreported cases by analyzing a time-fractional model and finding its solution, see also [53]. The authors of [18] studied the SEIARM model: susceptible S(t), exposed E(t), infected I(t), asymptomatically infected A(t), recovered R(t), and reservoir M(t), defined by a system of six equations; they generalized this model to incorporate memory consequences and hereditary properties, using the fractional derivative in the sense of Caputo. Singh et al. [43] proposed a mathematical model (ABCDE): susceptible A(t), exposed B(t), infected C(t), asymptomatic D(t), and recovered E(t). They replaced the time derivative in the model with a fractional-order time derivative and studied the COVID-19 infection through a fractional-order model following the Grünwald-Letnikov fractional derivative. Motivated by the aforementioned works, we intend first to develop a new deterministic model, which extends the models developed in Musa et al. [34] and Ngonghala et al. [37]. We do this by taking into account several essential properties of the COVID-19 pandemic, such as the existence of individuals who test positive for COVID-19 with severe, mild, or asymptomatic symptoms, and by dividing the infectious compartment into two essential compartments: hospitalized individuals and those in intensive care units. We study the stability properties of the solutions of a proposed nonlinear mathematical model with nine compartments, namely, susceptible-exposed-infectious with severe symptoms-infectious with mild symptoms-asymptomatic infectious-hospitalized-intensive care unit-dead infectious corpses-recovered, to investigate the current outbreak of coronavirus disease in Morocco and beyond. We hope that this study will provide better hospital bed management and clear guidance for public health measures to combat the spread of COVID-19. The manuscript is organized as follows: In Section 2, we introduce a new model for COVID-19. In Section 3, we carry out a qualitative analysis of the model and compute the basic reproduction number of the COVID-19 model; we study the local stability of the disease-free equilibrium in terms of the basic reproduction number. In Section 4, we take vaccination into consideration in our model and study its stability under vaccination. Section 5 concerns data fitting: we illustrate our model by numerical simulation and compare it with actual data from Morocco. Lastly, we give a conclusion and future work in Section 6.

The proposed SEI_ssI_msI_aHI_cuRD model

In this section, we present a new model which is a generalization of the models studied in Musa et al. [34] and Ngonghala et al. [37]. The model takes into account the existence of individuals with severe, mild, or asymptomatic symptoms: we propose a new epidemiological compartment model that takes into consideration the difference between individuals with severe symptoms, with mild symptoms, and without symptoms. The model under consideration subdivides the population of humans at time t into nine compartments.
That is, the susceptible class S(t), the exposed class E(t), severe-symptom infectious individuals I_ss(t), mild-symptom infectious individuals I_ms(t), infectious but asymptomatic individuals I_a(t), the hospitalized class H(t), the intensive care unit class I_cu(t), the recovered-with-immunity class R(t), and the dead class D(t). Before presenting the model, we make some assumptions, including the following: recovered individuals have immunity against the disease, so they cannot become infected again and cannot infect susceptibles either; and (A6) infected individuals in the hospital or intensive care unit (ICU) are isolated, so they do not contribute to the transmission of the infection. The main structure of the model is summarized in the flow diagram of Fig. 2, in which β is the human-to-human transmission coefficient per unit time per person; k represents the rate at which an individual leaves the exposed class by becoming infectious, the period 1/k being called the incubation period. The parameter p_1 is the probability that an individual leaves the exposed compartment E and becomes symptomatic infectious with severe symptoms I_ss; p_2 is the probability with which exposed individuals become infectious with mild symptoms I_ms; while 1 − p_1 − p_2 is the probability with which exposed individuals go to the asymptomatic class I_a; h is the rate at which an individual leaves the compartment I_ss; q_1 is the probability with which a person in I_ss goes to the compartment H of hospitalized individuals; γ_3 is the recovery rate of people with mild symptoms and asymptomatic people who are not hospitalized; δ_1 is the death rate of hospitalized patients H without intensive care; γ_1 is the death rate of hospitalized patients with intensive care I_cu. Following the diagram in Fig. 2, the evolution of the compartments mentioned above is modeled by the system of ordinary differential equations (2.1), where S(t), E(t), I_ss(t), I_ms(t), I_a(t), H(t), I_cu(t), R(t), and D(t) denote the number of susceptible individuals, exposed individuals not yet infectious, infectious individuals with severe symptoms, infectious individuals with mild symptoms, asymptomatic individuals, hospitalized individuals, individuals in intensive care units, recovered-by-immunity individuals, and dead individuals, at time t, respectively.

The reproduction number

In epidemiology, the basic reproduction number of an infection, denoted R_0, can be thought of as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. The method for computing the basic reproduction number using the next-generation matrix is given by Diekmann et al. (see [13]) and elaborated by van den Driessche and Watmough, see [15]. In our system there exists a disease-free equilibrium, denoted E_0, which is given by S = N, E = I_ss = I_ms = I_a = H = I_cu = R = D = 0. In order to calculate the basic reproduction number R_0 based on this steady state, we consider the subsystem of infected compartments, with F the matrix associated with the rate of appearance of new infections and V the matrix associated with the net rate of transfer out of the corresponding compartments, both obtained as Jacobian matrices of the corresponding rate vectors evaluated at E_0. The basic reproduction number R_0 is then obtained as the spectral radius of −FV^{-1}.
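To make the next-generation recipe concrete, the following minimal sketch computes a basic reproduction number as the spectral radius of F V^{-1} for a toy two-compartment infection subsystem. The matrices and parameter values are illustrative placeholders, not the paper's actual F and V (which are not reproduced here); only the final spectral-radius step mirrors the method described above.

import numpy as np

# Toy linearization at the disease-free equilibrium for a two-compartment
# infection subsystem (exposed E, infectious I); placeholder values only.
beta, k, gamma = 0.5, 0.2, 0.1  # transmission, progression, removal rates

# F: rate of appearance of new infections (new infections enter E via I).
F = np.array([[0.0, beta],
              [0.0, 0.0]])

# V: net rate of transfer out of the infected compartments.
V = np.array([[k,   0.0],
              [-k,  gamma]])

# The basic reproduction number is the spectral radius of the
# next-generation matrix F V^{-1}.
ngm = F @ np.linalg.inv(V)
R0 = max(abs(np.linalg.eigvals(ngm)))
print(f"R0 = {R0:.3f}")  # for these toy rates, R0 = beta/gamma = 5.0

For the full nine-compartment model, the same final two lines apply unchanged once the 6 x 6 matrices F and V for the infected variables E, I_ss, I_ms, I_a, H, and I_cu have been assembled.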
Furthermore, since the total population size N is constant, relation (3.2) holds. Therefore, the local stability of model (2.1) can be studied through the remaining coupled system of state variables, namely the variables E, I_ss, I_ms, I_a, H, and I_cu in system (2.1). The system associated with these variables, linearized at the disease-free equilibrium, is system (3.3). The Jacobian matrix J(E_0) of system (3.3) at E_0 always has two negative eigenvalues λ_1 = λ_2 = −1; the other eigenvalues of J(E_0) are determined by a characteristic polynomial P(λ) with leading coefficient a_0 = 1. Next, by the Routh-Hurwitz criterion, all the roots of P(λ) are negative or have negative real parts if the following conditions are satisfied: (1) a_0 > 0 and a_1 > 0, together with the sign conditions (2)-(4) on the remaining coefficients. We have the following two cases: (i) If R_0 < 1, then conditions (1)-(4) are satisfied, hence the real parts of all the eigenvalues of the matrix J(E_0) are strictly negative; thus the disease-free equilibrium E_0 of system (2.1) is locally asymptotically stable, and the disease will decay. (ii) If R_0 > 1, condition (4) is not satisfied, so at least one eigenvalue has a positive real part; thus the disease-free equilibrium E_0 of the system is an unstable saddle point, and in this case the disease can persist.

Epidemic model with vaccination

Vaccination has been established as a powerful tool in managing and controlling infectious diseases by providing protection to susceptible individuals [47]. This section aims to study the dynamics of the SEI_ssI_msI_aHI_cuRD model with vaccination. We assume that a certain proportion of individuals in the susceptible class are vaccinated; in this case, vaccinated individuals are moved to a new compartment V. Let p be the proportion of the population vaccinated per unit of time. Since the vaccine does not provide immunity to all vaccine recipients, vaccinated individuals may become infected, but at a lower rate than unvaccinated ones. In this case, let σ ∈ [0, 1] be such that (1 − σ) is the vaccine efficacy. The corresponding flow diagram can be translated into a system of ordinary differential equations whose disease-free equilibrium is E_0 = ((1 − p)N, 0, 0, 0, 0, 0, 0, 0, pN, 0). The infection components in this model are E, I_ss, I_ms, I_a, H, and I_cu. Differentiating the new-infection and transfer rate vectors with respect to E, I_ss, I_ms, I_a, H, and I_cu and evaluating at the disease-free equilibrium E_0 gives the infection matrix F and the transition matrix V, and hence the reproduction number R_v for the vaccinated SEI_ssI_msI_aHI_cuRD model. The disease-free equilibrium E_0 is asymptotically stable if R_v < 1, and unstable if R_v > 1. The critical percentage of the population p_c necessary to achieve herd immunity is the proportion for which the basic reproduction number under vaccination R_v is equal to 1.

Numerical simulations: the case study of Morocco

In Fig. 3 the daily confirmed, dead, and recovered cases of COVID-19 are depicted from July 01, 2020, to March 01, 2021, which corresponds to seven months (or 210 days). The vaccine efficacy was estimated as 0.9 (dimensionless; Table 1). We perform numerical simulations to compare the results of our model with the actual data in Fig. 3. The predicted evolution of the outbreak of COVID-19 without and with vaccination in Morocco can be seen in Fig. 4 (epidemic evolution predicted by the model) and Fig. 5, respectively.
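Before turning to the simulations, note that the vaccinated reproduction number and the herd-immunity threshold introduced above can be written explicitly. Since the explicit expressions for R_v and p_c do not survive in this extraction, the following is a hedged reconstruction under the standard "leaky vaccine" assumption that vaccinated individuals acquire infection at the reduced rate σβ:

R_v = \left(1 - p\,(1-\sigma)\right) R_0, \qquad p_c = \frac{1}{1-\sigma}\left(1 - \frac{1}{R_0}\right).

Here p_c follows from setting R_v = 1 and solving for p. Under these assumptions, with the Table 1 estimate of vaccine efficacy 1 − σ = 0.9, the threshold p_c stays below 1 (i.e., herd immunity is attainable by vaccination alone) whenever R_0 < 10.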
The parameters of the mathematical model were fitted with the data provided in [50] and collected in Table 1. We enlarge the plots by taking a maximum population of 600,000 people. From Figs. 4 and 5, we observe that all the trajectories follow the same pattern and converge to zero. In Fig. 4, we observe that the exposed E(t), asymptomatic I_a(t), infected with severe symptoms I_ss(t), infected with mild symptoms I_ms(t), hospitalized H(t), and ICU I_cu(t) classes increase to a peak about 150 days after July 01, 2020. This is compatible with the data represented in Fig. 3. Thus, we show that our COVID-19 model describes the real data of daily confirmed, recovered, and dead cases well during these seven months (from July 01, 2020, to March 01, 2021). In Fig. 5, we observe that the plot is flatter than the one in Fig. 4; also, the curve of asymptomatic people is almost identical to the x-axis, which shows the importance of the vaccination program in reducing the epidemic.

Conclusion and future directions

Many models have been considered to study the new COVID-19 epidemic. Here we have taken into consideration the different characteristics of COVID-19 and their relation to admission, or not, to the hospital and to intensive care units, and we propose a model that describes the evolution of COVID-19 in Morocco, giving a good approximation of the reality of the Moroccan outbreak (see Fig. 4) and a simulation of this model under vaccination (see Fig. 5). In our future work, we intend to generalize the above model using fractional calculus, which can explain natural phenomena more accurately than classical differential calculus. Furthermore, the model will be generalized to incorporate memory consequences and hereditary properties.
2021-06-20T13:12:04.134Z
2021-06-19T00:00:00.000
{ "year": 2021, "sha1": "2eb280916987f14ffa48cf9d0de5246b99264b31", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.chaos.2021.111188", "oa_status": "BRONZE", "pdf_src": "ScienceParsePlus", "pdf_hash": "2eb280916987f14ffa48cf9d0de5246b99264b31", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Medicine" ] }
17950044
pes2o/s2orc
v3-fos-license
Characterization of the first enzyme in 2,4-dichlorophenoxyacetic acid metabolism.

This paper reviews the properties of the Alcaligenes eutrophus JMP134 tfdA gene product, the enzyme responsible for the first step in 2,4-dichlorophenoxyacetic acid (2,4-D) biodegradation. The gene was overexpressed in Escherichia coli and several of its enzymatic properties were characterized. Although this enzyme catalyzes a hydroxylation reaction, it is not a monooxygenase. Rather, TfdA is an Fe(II)- and α-ketoglutarate-dependent dioxygenase that metabolizes the latter cosubstrate to succinate and carbon dioxide. A variety of other phenoxyacetates and α-keto acids can be used by the enzyme, but the greatest catalytic efficiencies were found using 2,4-D and α-ketoglutarate. The enzyme possesses multiple essential histidine residues, whereas catalytically essential cysteine and lysine groups do not appear to be present.

The biodegradation of 2,4-dichlorophenoxyacetic acid (2,4-D), a broadleaf herbicide, has been shown to involve side chain removal, hydroxylation of the resulting 2,4-dichlorophenol (2,4-DCP), ortho cleavage of 3,5-dichlorocatechol, conversion of 2,4-dichloro-cis,cis-muconate to succinate, and subsequent metabolism of this intermediate by the cells (Figure 1), as reviewed by Haggblom (1). The genes encoding the enzymes involved in these processes in Alcaligenes eutrophus JMP134 have been localized to the pJP4 plasmid, cloned, and sequenced (2). Whereas the sequences of the tfdB gene (encoding 2,4-DCP hydroxylase) and the tfdC, tfdD, and tfdE genes (responsible for dichlorocatechol degradation) exhibit similarities to sequences of genes involved in mineralization of nonchlorinated or monochlorinated analogues (3), no such similarity had been reported for the sequence of the tfdA gene (4), and its product (TfdA) had not been characterized. Here, we describe our efforts to characterize the first enzyme in the 2,4-D pathway. We have shown that the enzyme is not a 2,4-D monooxygenase, as commonly stated in the literature, but is rather a ferrous ion- and α-ketoglutarate (α-KG)-dependent dioxygenase. A 3.1-kilobase pair (kbp) SacI fragment of pJP4 containing the tfdA gene was subcloned into the SacI site of pUC19, followed by elimination of a 1.5-kbp XbaI fragment to yield plasmid pUS311. This plasmid was transformed into Escherichia coli JM109, and the recombinant cells were shown to synthesize high levels of a peptide with relative molecular mass (Mr) 32,000. Despite the abundance of TfdA, the 2,4-D-degrading activity in cell extracts was very low (0.003 µmol of 2,4-D converted to 2,4-DCP/min/mg protein) compared to the published rate of degradation in whole cells of A. eutrophus JMP134(pJP4) (0.105 µmol/min/mg) (5). The trace level of activity was abolished upon addition of chelators and restored upon addition of ferrous ion, consistent with an Fe(II) requirement for the enzyme. The presence or absence of reducing agents had no effect on activity, which is inconsistent with the behavior of a monooxygenase. Rather, we found (6) that the enzyme is a dioxygenase that requires α-KG as a cosubstrate and converts this compound to carbon dioxide and succinate, as illustrated in Figure 2. The thermolabile enzyme (stable only up to 30 °C) was purified to apparent homogeneity (specific activity of 16.9 µmol of substrate converted per minute per milligram of protein) by a simple two-step procedure (Table 1) and extensively characterized (7).
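Combining the products named in the text with the accepted side-chain chemistry of this enzyme family, the overall reaction can be summarized as below; the glyoxylate co-product is an inference here, since Figure 2 is not reproduced in this extraction:

\text{2,4-D} + \alpha\text{-ketoglutarate} + \mathrm{O_2} \xrightarrow{\;\text{TfdA, Fe(II)}\;} \text{2,4-DCP} + \text{glyoxylate} + \text{succinate} + \mathrm{CO_2}

Both atoms of O2 are consumed: one is incorporated during hydroxylation of the ether side chain (the resulting hemiacetal decomposes to 2,4-DCP and glyoxylate) and the other during the oxidative decarboxylation of α-KG to succinate, which is why the enzyme is classed as a dioxygenase rather than a monooxygenase.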
The presence of protease inhibitors during early stages of TfdA purification enhances the stability of the enzyme by preventing conversion of the subunit to an inactive TfdA fragment of apparent Mr 27,000. Whereas N-terminal sequence analysis of the nondegraded subunit revealed the residues (S-V-V-A-N-) expected from DNA sequence analysis, the amino-terminal sequence of the proteolytic fragment of TfdA (F-K-Y-A-E-L-) was consistent with hydrolytic cleavage after arginine (residue #77). By using anti-TfdA IgG in Western blot analysis of various samples, the conversion was found to occur after cell disruption rather than during the cell cultivation period. The same methods were used to demonstrate that the Mr 32,000 form of the protein was present in cell extracts of A. eutrophus JMP134. The native protein has an apparent Mr of 50,000 ± 2,500, which is consistent with a homodimeric structure. The enzyme exhibits maximum activity at pH 6.5 to 7; however, it is stable over a pH range of 6.5 to 11. Ferrous ion is absolutely required for activity and cannot be replaced by Co(II), Cu(II), Li(II), Mg(II), Mn(II), Ni(II), or Zn(II). As shown by a time-dependent decrease in enzyme activity, however, ferrous ion alone is unable to sustain enzyme catalysis over long time periods. The rate of activity loss was greatly reduced (although not completely eliminated) by inclusion of ascorbic acid in the assay. Catalytic turnover of the enzyme is not required for inactivation, as shown by the loss of activity during enzyme incubation with ferrous ion before addition of substrate. Although not completely characterized, the inactivation clearly results from a metal ion-mediated event, as demonstrated by the retention of activity when the enzyme is stored in the absence of metal ions and the presence of EDTA. To minimize enzyme inactivation in kinetic studies, reactions were initiated by addition of enzyme to assay mixtures, and short assay periods were used to calculate initial rates. Although capable of hydroxylating a wide range of phenoxyacetates and related compounds (Table 2; reproduced from Fukumori and Hausinger (7)), the enzyme exhibits the greatest affinity and highest catalytic efficiency for 2,4-D. Nonhalogenated phenoxyacetate possesses a larger Km value than the halogenated substrates. Similarly, the Km value for 2-phenoxypropionate is substantially larger than that of 2-(2,4-dichlorophenoxy)propionate. The additional methyl group in the side chain of these two compounds, however, greatly decreases their kcat values for hydroxylation, perhaps due to the change from a secondary to a tertiary carbon atom. Although 3-phenoxypropionate is a very poor substrate, it is hydroxylated by the enzyme, demonstrating that the substrate binding site can accommodate one extra methylene carbon in the side chain. In contrast, TfdA exhibits no activity toward 4-(2,4-dichlorophenoxy)butyrate, 2-phenoxybenzoate, 2-phenoxyethanol, hydrocinnamic acid, indolylacetic acid, and methyl esters. Although α-KG is the preferred cosubstrate for the enzyme, TfdA can use a range of other α-keto acids with lower efficiency (Table 3). The non-α-keto acid carboxyl group is not required for recognition by the enzyme; the Km values for the two substrates possessing a second acidic group are significantly lower, and the catalytic rates are generally higher, compared to those for the other substrates.
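For readers interpreting the kinetic comparisons above and in Tables 2 and 3, the quantities involved are the standard Michaelis-Menten parameters; the definitions below are textbook enzymology rather than anything specific to this paper:

v = \frac{k_{\mathrm{cat}}\,[\mathrm{E}]_0\,[\mathrm{S}]}{K_m + [\mathrm{S}]}, \qquad \text{catalytic efficiency} = \frac{k_{\mathrm{cat}}}{K_m}.

At substrate concentrations well below K_m, the rate reduces to (k_cat/K_m)[E]_0[S], so k_cat/K_m behaves as an apparent second-order rate constant; a substrate such as 2,4-D, combining a low K_m (high apparent affinity) with a high k_cat, therefore maximizes this efficiency.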
Addition of an extra methylene group between the α-keto acid group and the free carboxyl group, as in α-ketoadipate, leads to small changes in the kinetic constants. In contrast, removal of one of the methylene groups, as in oxalacetate, leads to an ineffective substrate. Furthermore, 3-ketoglutarate, malonate, succinate, and glutarate were unable to support hydroxylation. Finally, in the absence of 2,4-D, no decomposition of α-KG was observed. Chemical modification studies were used to provide evidence consistent with the absence of essential thiol or arginine residues and the presence of multiple essential histidine residues in the enzyme. Whereas iodoacetamide, N-ethylmaleimide, and butanedione failed to affect TfdA activity, the addition of diethylpyrocarbonate (DEP), a histidine-selective reagent, led to rapid pseudo-first-order loss of activity. The ability of 2,4-D, α-KG, and Fe(II) plus ascorbate, and of combinations of these substances, to protect the enzyme against DEP inactivation was examined. Whereas none of the individual compounds was able to significantly protect the enzyme from inactivation by DEP, the combinations of 2,4-D plus Fe(II) or α-KG with Fe(II) decreased the inactivation rate. Furthermore, the combined presence of 2,4-D and α-KG was very effective in protecting the enzyme from inactivation by DEP. We interpret the results from the above studies in terms of the model illustrated in Figure 3 (a model of the TfdA active site, modified from Fukumori and Hausinger (7)). Consistent with the expected requirements for positive charges at the binding sites of 2,4-D and α-KG, we propose that essential histidine residues are present at the binding sites for each of these substrates. In addition, we propose that one or more additional histidine residues may be buried in the protein at the Fe(II) binding site. Binding of 2,4-D and α-KG protects the histidine residues at the substrate binding sites and additionally may protect the Fe(II) ligands by steric constraints. Binding of either 2,4-D plus Fe(II) or both α-KG and Fe(II) might lead to the observed reduced rate of inactivation. In contrast, addition of any one compound alone is unable to protect the three or more distinct sites of inactivation postulated in this model. The specific requirements for α-KG and Fe(II) in 2,4-D degradation and the stimulation of activity by ascorbate are typical characteristics of α-KG-dependent dioxygenases (8); however, the TfdA sequence exhibited no significant similarity to any of the known α-KG-dependent dioxygenase sequences (e.g., prolyl hydroxylase, lysyl hydroxylase, aspartyl hydroxylase, hyoscyamine hydroxylase, deacetoxycephalosporin hydroxylase, or the mechanistically related p-hydroxyphenylpyruvate hydroxylase) available in GenBank. We speculate that TfdA may have evolved from a gene involved in biodegradation of a plant-derived compound containing an aromatic ring in ether linkage to an acidic side chain. One intriguing possibility is that a TfdA-like enzyme is involved in lignin degradation; i.e., lignin peroxidases and manganese peroxidases degrade the complex polyaromatic lignin substrate to smaller pieces, and those fragments that retain ether linkages may subsequently be degraded by an α-KG-dependent dioxygenase.
2014-10-01T00:00:00.000Z
1995-06-01T00:00:00.000
{ "year": 1995, "sha1": "f25771f7f9baf5efbbfb429ca456ead15058ffdd", "oa_license": "pd", "oa_url": "https://doi.org/10.1289/ehp.95103s437", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f25771f7f9baf5efbbfb429ca456ead15058ffdd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
257676365
pes2o/s2orc
v3-fos-license
Acupuncture treatment for post-stroke depression: Intestinal microbiota and its role

Stroke-induced depression is a common complication and an important risk factor for disability. Besides psychiatric symptoms, depressed patients may also exhibit a variety of gastrointestinal symptoms, and may even cite gastrointestinal symptoms as the primary reason for seeking medical treatment. It is well documented that stress may disrupt the balance of the gut microbiome in patients suffering from post-stroke depression (PSD), and that disruption of the gut microbiome is closely related to the severity of the condition in depressed patients. Therefore, maintaining the balance of the intestinal microbiota can be the focus of research on the mechanism of acupuncture in the treatment of PSD. Furthermore, stroke can be effectively treated with acupuncture at all stages, and acupuncture may act as a special microecological regulator by regulating the intestinal microbiota as well. In this article, we review the studies on changes in the intestinal microbiota after acupuncture treatment and examine the existing problems and development prospects of acupuncture, the microbiome, and post-stroke depression, in order to provide new ideas for future acupuncture research.

Introduction

The most common neuropsychological disorder after stroke is depression, which can occur at any point during the recovery process. It is estimated that more than one-third of stroke survivors experience post-stroke depression (PSD) (Frank et al., 2022), which is a global public health issue that requires urgent attention in national health policy. Depression ranks as one of the leading causes of disability worldwide and contributes significantly to the global burden of disease, according to the World Health Organization (WHO). Additionally, treatment options recommended by the WHO report include, but are not limited to, psychotherapy and/or antidepressants, the most classic of which are tricyclic antidepressants (TCAs) and selective serotonin reuptake inhibitors (SSRIs) (Li et al., 2022a). Drugs, however, have inherent drawbacks, such as the development of drug resistance and frequently reported adverse effects, including sexual dysfunction, gastrointestinal symptoms, neuropsychiatric symptoms, and other systemic symptoms (Anagha et al., 2021). Throughout the years, medical treatment patterns have gradually changed, and patients' vital interests are better served by improving the safety of therapeutic measures while pursuing curative effects. When it comes to treating PSD comprehensively, acupuncture has important advantages as an ideal "green treatment." In recent years, more and more standardized clinical trials have shown that acupuncture not only promotes the recovery of nerve function after stroke, but also significantly alleviates patients' depressive symptoms and improves their quality of life after stroke; compared with drugs, acupuncture offers better biosafety and socioeconomic benefits in the treatment of PSD (Hang et al., 2021; Wang Z. et al., 2021). In addition to the combination of neurological and psychiatric symptoms, it is also common for PSD patients to have abnormal digestive tract function (Jiang, 2022). Researchers have found a significant difference between patients with PSD and those without PSD in terms of gut microbial communities and metabolites (Jiang et al., 2021; Zhong et al., 2022).
Therefore, it is possible that a sound gut microbiome composition plays a significant role in maintaining healthy metabolism, and it is increasingly being recognized that depression can be treated with direct changes in intestinal microbiota composition (such as prebiotic intake and fecal microbiota transplantation) (Evrensel and Tarhan, 2021). Studies have shown that mood and behavior are controlled and affected by the intestinal flora through neuroimmune mechanisms and nutritional metabolism, whereas an unbalanced gut flora can cause mental illness (Liang et al., 2018b; Waclawiková and El Aidy, 2018). Psychoneurotic symptoms in rats can be significantly improved by acupuncture, which may be related to rebalancing of the intestinal flora (Xian et al., 2022). Consequently, maintaining the balance of the intestinal microbiota is expected to be a potential target of acupuncture for PSD. Moreover, bibliometric analysis shows that the number of studies focusing on the intestinal flora has been increasing over the past 10 years, indicating that acupuncture regulation of the intestinal microbiota is a promising research area (Zhang et al., 2022a). In this article, an overview of the relationship between acupuncture and the intestinal flora, the relationship between the intestinal flora and PSD, and the effects and mechanisms of acupuncture on the intestinal flora in preventing and treating PSD is provided, in an attempt to offer new ideas and targets for studies of traditional Chinese medicine in the treatment of PSD; we hope to provide some assistance in the decision-making process for acupuncture treatment of PSD in the future.

The relationship between acupuncture and intestinal microbiota

Human-associated microbial communities are mainly found in the large intestine and are mainly composed of prokaryotes (such as bacteria), eukaryotes (such as fungi and parasites), and viruses (Li et al., 2022b). It is estimated that the total number of bacteria colonizing the human intestinal tract is about 10^13-10^14, most of which belong to the Bacteroidetes and Firmicutes; this is about 10 times the total number of cells in the human body, and their gene complement is about 100 times larger than the human genome (Gill et al., 2006). Although it weighs only 1-1.5 kilograms in total, the microbiota plays an important role in maintaining the dynamic balance of the internal environment and promoting human health (Bäckhed et al., 2005; Borrel et al., 2020). The succession of the intestinal microecological community is intricately intertwined with various physiological and pathological processes in the body, especially metabolism and immunity (Lynch and Hsiao, 2019). Once intestinal microecological homeostasis is disrupted (e.g., reduced richness, dysfunctional microflora, interference with metabolism, or microflora translocation), immune response disorders are stimulated through different mechanisms, the host immune system is damaged, and various immune-mediated inflammatory responses occur, endangering human health (Ruff et al., 2020). In different disease states, acupuncture can effectively regulate the intestinal microbiota, making it a useful microecological regulator. According to existing studies, acupuncture can treat diseases by mediating information exchange among immune, neural, endocrine, and microbial-metabolic systems through brain-gut interaction (Xu and Lu, 2020).
For example, acupuncture affects the abundance and structure of intestinal bacteria, balancing the number and proportion of probiotics and pathogens in the host body (Xie et al., 2020; Wang J. M. et al., 2021; Wang T. Q. et al., 2021; Li et al., 2022d). In turn, acupuncture reverses a variety of intestinal flora metabolic disorders caused by various diseases by restoring the function and metabolic pathways of key metabolites in the human body (Xu et al., 2017; Si et al., 2022). Acupuncture thus plays a crucial role in the restoration of human health. However, research on acupuncture's regulation of the intestinal flora is still limited both domestically and overseas, facing the problems of limited scope and insufficient depth.

The relationship between intestinal flora and PSD

As a condition secondary to stroke, PSD is characterized by mental and emotional disorders, as well as insomnia, low mood, loss of interest, and loss of appetite; severely affected patients may even exhibit concerning behavioral and psychological features, such as fantasy, delusion, world-weariness, and suicidal tendencies (Wijeratne et al., 2022). Currently, there is no clear pathogenesis for PSD. The current mainstream view is that depression arises from interactions among neurobiological, social-psychological, and other factors, and that cerebrovascular disease may be a predisposing or precipitating factor for depression (Jeon and Kim, 2018). Depression and digestive problems are strongly comorbid, and many patients with depression first go to the hospital because of difficult-to-treat gastrointestinal conditions (Liang et al., 2018a; Jiang, 2022). Over the years, research has gradually illuminated the close connection between psychological factors and gastrointestinal disease. In a meta-analysis, depression was found in 22-38% of patients with irritable bowel syndrome (Hu et al., 2021), and in a cohort study, depression was a comorbid condition in 40.1% of those with inflammatory bowel disease (Lewis et al., 2019). Not only is depression highly prevalent in digestive diseases, but the converse also holds. In another study, researchers found that the rate of gastrointestinal abnormalities among patients with depression was significantly higher than among those without depression; moreover, compared with patients with depression alone, those with depression combined with gastrointestinal symptoms had more severe depression (Fang and Li, 2022). As a result of these findings, researchers are now paying more attention to the gut-brain connection. According to available research, changes in the composition of the gut microbiome are strongly associated with the severity of PSD, as shown in Figure 1 (Jiang et al., 2015; Ye et al., 2021). A neuro-endocrine-immune network exists between the brain and the gut flora, called the brain-gut axis, which communicates two-way signals; the gastrointestinal tract is closely linked to the brain mainly through neural and humoral pathways, allowing the two to circulate and reinforce each other (Begum et al., 2022; Han et al., 2022).
Studies have shown that long-term stress responses in depressed patients increase intestinal wall permeability, making it easier for aggressive bacteria or antigens to translocate to the lymphatic or circulatory system, which then activates immune cells to trigger serum IgA and IgM production and causes depressive episodes through systemic inflammation (Maes et al., 2008, 2012; O'Malley et al., 2010); alternatively, microbial metabolites more easily enter the blood circulation through the intestinal wall and pass the blood-brain barrier (BBB), directly affecting the cognitive and behavioral functions of the body (Rao et al., 2021). Notably, owing to the two-way communication of the brain-gut axis, changes in gut microbiome composition may in turn influence depressive symptoms. Several studies have shown that the gut microbiota of people with depression differs significantly from that of people without depression in diversity and abundance, with abundance negatively correlated with the severity of depression (Jiang et al., 2015; Hu S. et al., 2019). Moreover, some studies have shown that an increase in potentially harmful bacteria or a decrease in beneficial bacteria can reduce short-chain fatty acid (SCFA) production, leading to intestinal barrier dysfunction and inflammation (Wong et al., 2006; Ramos Meyers et al., 2022). Additionally, transplantation of patients' gut microbiota caused mice to exhibit depression-like behavior when metabolic processes and inflammatory responses were affected by fecal microbiota transplantation. Hence, there may be a bidirectional interaction between stress and the microbiome. Antibiotics target bacteria, inhibiting their growth and proliferation, and are the most direct, widespread, and important factor changing intestinal flora composition. Antibiotics have a double-edged effect on depression: on the one hand, antibiotic treatment has led to the disappearance of depression-like behavioral disorders, and it should be noted that one of the first drugs used to treat depression, iproniazid, was originally developed for tuberculosis (Juli and Juli, 2014; Macedo et al., 2017); on the other hand, antibiotics can damage the homeostasis of the intestinal flora, resulting in depression (Hao et al., 2020). Multiple studies have indicated that antibiotic exposure increases depression risk, that the risk may increase with each additional treatment course and medication, and that the subsequent decline in risk is slow and sustained (Lurie et al., 2015; Köhler et al., 2017; Hu et al., 2022; Pouranayatihosseinabad et al., 2023). The hypothalamic-pituitary-adrenal (HPA) axis is an important part of the neuroendocrine system. When the human body is exposed to stress, cortisol signaling in the HPA axis is activated, which reduces inflammation and protects against extreme immune responses (Mikulska et al., 2021). However, cortisol elevation caused by chronic stress is also an important factor in the development of depression (Qin et al., 2015). Several studies have demonstrated that neuroendocrine regulation plays an important role in the pathophysiology of neuropsychiatric disorders and that there is an interaction between the gut microbiota and HPA axis activity (Ge et al., 2021). Microbial communities can be changed by altering HPA axis activity [e.g., by adrenalectomy or subcutaneous injection of adrenocorticotropic hormone (ACTH) fragments] (Amini-Khoei et al., 2019; Song et al., 2019).
In addition to regulating the HPA axis dysfunction caused by stress, probiotic supplementation can also alleviate some depressive behaviors (Liang et al., 2015; Rea et al., 2016). A considerable literature also suggests a link between the vagus nerve and depression and gastrointestinal disorders (Liu et al., 2020a; Tan et al., 2022). The vagus nerve is one of the most important components of the parasympathetic system and plays a major role in acupuncture's regulation of the gut-brain axis. It is a hybrid nerve with afferent and efferent fibers that senses gut microbiota metabolites and transmits information about them to the central nervous system (CNS). Additionally, activated efferent vagus nerves can exert a systemic anti-inflammatory response by directly stimulating the HPA axis and cholinergic pathways, which alleviates damage to intestinal tight junctions and reduces intestinal permeability, thus regulating changes in microbial composition (Borovikova et al., 2000; Hu et al., 2013; Zhou et al., 2013).

Acupuncture regulates intestinal microbiota in PSD

[Figure 1. The interaction between inflammation, intestinal microbiota and post-stroke depression (PSD).]

Once a patient has suffered a stroke event, a series of digestive tract symptoms occur in the human body as a consequence of the stroke itself, drugs, chronic stress, and abnormal activation of the HPA axis and vagus nerve. These processes damage the intestinal mucosal barrier, resulting in an imbalance of the intestinal microbiota through excessive production of pro-inflammatory substances [lipopolysaccharide (LPS), proinflammatory cytokines (CKs)] and insufficient production of anti-inflammatory substances (SCFAs, anti-inflammatory cytokines), causing abnormal immune responses (local and systemic inflammation) in the body, ultimately damaging neurons and exacerbating depression. Acupuncture can regulate the structure of the intestinal microbiota, inhibit inflammatory storms, and improve the symptoms of patients with PSD, mainly through the six pathways shown in Figure 2. The original research evidence that acupuncture regulates the intestinal microbiota is summarized in Table 1.

Regulation of intestinal microbial structure

Recent studies have gradually shown that acupuncture indirectly alters microbial composition and communities in various ways, and researchers have found that post-stroke depression-like behavior is strongly associated with intestinal microbial changes after acupuncture treatment (Jiang et al., 2021). Based on 16S rRNA sequencing, Lv et al. (2022) found that manual acupuncture treatment significantly increased the abundances of Firmicutes, Bacteroidetes, and Patescibacteria and significantly decreased the abundance of Proteobacteria in mice at the phylum level; at the genus level, the abundances of Candidatus Arthromitus, Lactobacillus, Muribaculaceae_unclassified, and Clostridia_UCG-014_unclassified were significantly increased and the abundances of Escherichia-Shigella, Burkholderia-Caballeronia-Paraburkholderia, and Streptococcus were decreased in response to manual acupuncture. Overall, the Lv et al. (2022) study demonstrated that acupuncture alleviated disease-associated gut microbiome imbalances. Furthermore, a significant correlation was observed between the development of depression and the content of Clostridiaceae, Candidatus Arthromitus, and Lactobacillus.
Additionally, Zhang et al. (2022d) observed that manual acupuncture significantly reduced the abundances of Firmicutes, Proteobacteria, and Escherichia-Shigella in Alzheimer's disease mice while significantly increasing the abundance of Bacteroides, which led to improvements in the intestinal flora. Liu et al. (2020b, 2022) demonstrated that electroacupuncture regulated the overall structure of the intestinal microbiota in diseased mice, making the abundance and diversity of Firmicutes similar to those in the intestinal tracts of healthy mice.

[Figure 2. The mechanism of intestinal microbiota dysbiosis induced by post-stroke depression (PSD) and how acupuncture regulates the intestinal microbiota to treat PSD.]

According to Hao et al. (2022), mice receiving manual acupuncture showed a significant increase in Bacteroides and a decrease in Proteobacteria and Escherichia-Shigella; however, no significant improvement in intestinal microbiota diversity was found, perhaps because the method of calculating diversity or an insufficient sample size limited the diversity result. Wang J. M. et al. (2021) observed how electroacupuncture affected patients' flora structures and found that part of those structures reversed and gradually began to resemble those of healthy individuals: at the phylum level, the relative abundance of Firmicutes and the Firmicutes/Bacteroides ratio decreased significantly; at the genus level, the relative abundance of Blautia increased while the abundance of Escherichia-Shigella decreased. Notably, Firmicutes and Bacteroides are believed to be the predominant bacteria in healthy individuals' intestinal tracts, and the ratio between them is used by researchers as a landmark parameter to assess the degree of intestinal microbial health. Furthermore, Blautia produces a variety of SCFAs that have anti-inflammatory properties (Koh et al., 2016), whereas gram-negative bacteria such as Escherichia-Shigella contain LPS, a proinflammatory compound found in the cell wall of gram-negative bacteria (Yoo et al., 2022). Consequently, the decreased level of Escherichia-Shigella and the increased relative abundance of Blautia in patients treated with electroacupuncture indicate reduced host inflammation. Xu et al. (2021) found that electroacupuncture rebalances the structure of the intestinal microbiota in mice by reducing the Firmicutes/Bacteroides ratio and the relative abundances of Roseburia, Lachnoclostridium, and Ruminiclostridium 9, bringing them closer to the state of healthy mice. Li et al. (2021) conducted manual acupuncture treatment on rats with depression and found that it could reduce the Bacteroides/Firmicutes ratio in the intestinal tract of depressed rats and improve the biodiversity of the intestinal flora. Hence, both electroacupuncture and manual acupuncture are capable of reversing the proportions of gut bacteria, thus alleviating intestinal ecological disorder in patients (Liu et al., 2020c). Wang T. Q. et al. (2021) found that electroacupuncture could reverse the increased abundance of Streptococcus in the disease state while increasing the abundances of the beneficial bacteria Bacteroides and Agathobacter.
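Since several of the studies above report their findings as phylum-level relative abundances and as the Firmicutes/Bacteroides ratio, and others compare alpha diversity, a brief illustration of how such summary statistics are derived from a genus-level 16S rRNA relative-abundance table may be helpful. This is a minimal sketch only: the table, the taxon names, and the three sample groups are entirely hypothetical and are not taken from any of the cited studies.

```python
# Minimal sketch: summary statistics commonly reported in the 16S rRNA studies
# cited above. All abundance values and sample labels are hypothetical.
import numpy as np
import pandas as pd

# Rows = samples, columns = "phylum;genus" taxa; each row sums to 1.
abundance = pd.DataFrame(
    {
        "Firmicutes;Lactobacillus":   [0.30, 0.12, 0.28],
        "Firmicutes;Blautia":         [0.15, 0.05, 0.14],
        "Bacteroidetes;Bacteroides":  [0.35, 0.20, 0.33],
        "Proteobacteria;Escherichia": [0.05, 0.40, 0.08],
        "Other;Other":                [0.15, 0.23, 0.17],
    },
    index=["healthy", "model", "acupuncture"],
)

# Collapse genera to phyla by summing columns that share a phylum prefix.
phylum = abundance.T.groupby(lambda taxon: taxon.split(";")[0]).sum().T

# Firmicutes/Bacteroidetes ratio: the "landmark parameter" discussed above.
fb_ratio = phylum["Firmicutes"] / phylum["Bacteroidetes"]

# Shannon index: the alpha-diversity measure behind the diversity claims above.
shannon = -(abundance * np.log(abundance)).sum(axis=1)

print(pd.DataFrame({"F/B ratio": fb_ratio, "Shannon": shannon}))
```

In this toy table the "model" sample shows the pattern the studies describe (a shifted F/B ratio driven by an Escherichia bloom), while the "acupuncture" sample returns toward the "healthy" profile.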
There was a strong correlation between fecal Streptococcus abundance and Hamilton Depression Scale (HAMD) scores, possibly because tryptophan metabolism affects the intestinal mucosal barrier and immunity (Zhang et al., 2022c). Health-beneficial SCFAs can be produced by Bacteroides and Agathobacter in the gut, inhibiting opportunistic pathogens and protecting the host from inflammation (Koh et al., 2016; Hua et al., 2020). According to Jang et al. (2020), manual acupuncture restored bacterial abundance and approximately 70% of the microbiome composition in the intestinal tract of diseased mice and increased the number of Butyricimonas, which has anti-inflammatory properties through increased production of butyrate, a SCFA. Xie et al. (2020) carried out a 2-week electroacupuncture intervention in mice and found that electroacupuncture could inhibit a proinflammatory shift by promoting recovery of the relative abundances of Akkermansia, Clostridium, Lactococcus, and Butyricimonas in the intestinal tract, and significantly increase the relative abundance of Lactobacillus. All of the above bacteria have the potential to inhibit inflammation, protect the intestinal barrier, and prevent depression (Guo et al., 2022; Lai et al., 2022; Ramalho et al., 2022). It can thus be seen that acupuncture alleviates systemic inflammation in rats overall through an increase in the abundance of beneficial microorganisms. Another study observed that, at the phylum level, electroacupuncture could modulate the intestinal microbiota structure of T2DM mice to a level similar to that of normal control mice; electroacupuncture increased probiotics (Blautia and Lactobacillus) and decreased opportunistic pathogens (Alistipes, Helicobacter, Prevotella), and a significant correlation was also observed between changes in the intestinal flora and changes in LDL-C. After an 8-week electroacupuncture intervention, Nazarova et al. (2022) observed changes in participants' intestinal bacteria: the relative abundances of Bacteroides and Parasutterella increased significantly at the genus level, whereas the abundances of the genera Dialister, Hungatella, Barnesiella, Megasphaera, Allisonella, Intestinimonas, and Moryella were significantly lower. The researchers thus emphasized the role of the gut-brain axis in the treatment of central nervous system diseases.

Regulating the intestinal mucosal barrier to prevent bacterial translocation

Beyond indirectly affecting the composition of the intestinal microbiota, acupuncture also protects the structure and function of the intestinal mucosal barrier system, which indirectly affects bacterial colonization and prevents pathogenic antigens from penetrating (translocating across) the physical barrier, so that human health can be maintained (Macpherson et al., 2002). As soon as the intestinal mucosal barrier is damaged, the permeability of the intestinal epithelium increases ("leaky gut"), allowing inflammation-related factors and other harmful substances to enter the circulatory system and initiate systemic inflammation (Wasinger et al., 2020; Dou et al., 2022).
As part of the intestinal barrier, tight junctions and their proteins protect organisms from pathogens entering from the external environment and play a significant role in maintaining intestinal barrier integrity. By establishing cell polarity, tight junctions determine paracellular permeability and serve as the major barrier of the paracellular pathway (Zihni et al., 2016). In the intestinal epithelium, tight-junction proteins determine the paracellular ion permeability at tight junctions, which are located mainly on the lateral sides of the junctional tops of adjacent cells (Zeisel et al., 2019). Besides maintaining the integrity of the tight junctions between cells and preserving barrier function, they play a role in the repair of intestinal epithelial damage (Krug and Fromm, 2020). An experiment conducted by Lv et al. (2022) revealed that acupuncture promoted tight-junction proteins (ZO-1, Occludin, Claudin-5) and improved the function of the mice's intestinal mucosal immune barriers. Additionally, this study (Lv et al., 2022) showed that intestinal tight-junction protein expression is correlated with changes in intestinal flora abundance after acupuncture intervention. In two electroacupuncture experiments, Liu et al. (2020b, 2022) found that Claudin-1, Occludin, and ZO-1 were repaired. Thus, by improving the tight junctions of intestinal epithelial cells, acupuncture can stabilize permeability and maintain intestinal homeostasis. In diseased animals, Hao et al. (2022) observed under an electron microscope that damage to small intestine structures was significantly reduced after manual acupuncture intervention: electron microscopy revealed only mild separation of the epithelium from the lamina propria, an orderly arrangement of intestinal gaps, and narrower junctional gaps. An immunofluorescence experiment revealed that the fluorescence structure of the tight-junction proteins (Occludin and ZO-1) was restored by manual acupuncture intervention, with the fluorescent proteins showing continuity and enhanced intensity, maintaining the intestinal mucosal barrier.

Regulation of hypothalamic-pituitary-adrenal (HPA) axis disorders

HPA axis disorder is closely related to host circadian rhythm disorder and the body's stress response. The HPA axis is regulated by the circadian rhythm cycle, and its abnormal function can trigger sleep disorders and contribute to the development of depression (Wirz-Justice, 2006; Kim et al., 2015). The results of a cross-sectional study examining the link between insomnia and PSD suggest that insomnia before stroke is an indicator of depression and that stroke is a risk event that can worsen depression (Zheng, 2021). In another clinical cross-sectional study, stroke survivors with poorer subjective sleep were also more likely to suffer from depression (Davis et al., 2019). Patients with PSD often suffer from sleep disorders, so the two frequently require active treatment together (Cai et al., 2021). The composition and function of the gut microbiome also exhibit circadian rhythmicity in relation to the host's activity (Thaiss et al., 2014). This manifests in the fact that interference with the host's sleep pattern can alter the expression of clock genes, ultimately altering the structure and diversity of the gut microbiome (Voigt et al., 2014; Leone et al., 2015), which in turn can drive changes in the host's circadian rhythm (Thaiss et al., 2016).
Moreover, the HPA axis, as one of the key components of stress regulation, can perceive pressure in a timely manner and quickly initiate signals in the paraventricular nucleus (PVN) of the hypothalamus, and HPA axis abnormalities may be one of the biological indicators of depression in its early stages (Spalletta et al., 2006; Du and Pang, 2015). There is also evidence that acute ischemic stroke can act as a stressor activating the HPA axis (Wexler, 1970; Yoo et al., 2011). Several basic studies have shown that acupuncture can down-regulate the expression of CRH mRNA in the hypothalamus and reduce plasma levels of ACTH and CORT (Le et al., 2016; Zheng et al., 2019), exerting an antidepressant effect by inhibiting over-excitation of the HPA axis (Han X. et al., 2021). There is also a bidirectional regulatory relationship between the HPA axis and intestinal microecology. Gut microbiomes regulate corticosteroid production, including cortisol and glucocorticoids; in turn, the HPA axis can regulate intestinal motility and affect the living environment of the intestinal microbiota, and it has been shown that overactivity of the HPA axis can increase intestinal mucosal permeability, activate intestinal immunity, and further alter the composition of the intestinal microbiome, disrupting the gut-microbiome balance (Li et al., 2018; Wu et al., 2018; Młynarska et al., 2022). According to Lv et al. (2022), manual acupuncture could restore ACTH, CRH, and cortisol (CORT) expression levels and improve dysfunction of the HPA axis. This study also found that changes in intestinal flora abundance and hormone expression were correlated after manual acupuncture intervention, suggesting that the regulation of the HPA axis by acupuncture is related to acupuncture's influence on intestinal flora composition.

Effect on metabolites and metabolic pathways

There are 100 times more genes in the gut microbiome than in the human genome, and these can encode at least 10 times as many unique genes as the host's (Ley et al., 2006). The products of these genes likely play an important role in the pathogenesis of depression after entering the circulation and integrating into host metabolic pathways (Li et al., 2023). Moreover, as a consequence of stroke, the structural integrity of the BBB is affected, and under inflammatory conditions matrix metalloproteinases (MMPs) can degrade basal layer proteins, increasing the BBB's permeability (Zlokovic, 2006; Lakhan et al., 2013). LPS, SCFAs, adiponectin, vasoactive intestinal peptide (VIP), and some neurotransmitter precursors (e.g., 5-HTP) are then more readily transported across the BBB into the brain (Birdsall, 1998; Dogrukol-Ak et al., 2003; Nedorubov et al., 2019; Megur et al., 2020; Formolo et al., 2022; Zhao et al., 2022). According to Lv et al. (2022), multiple metabolic pathways and metabolites were altered by manual acupuncture: their serum metabolomics study revealed that acupuncture can regulate differential metabolites, including N-methylnicotinamide, beta-glycerophosphoric acid, geranyl acetoacetate, and serotonin, as well as metabolic pathways including phenylalanine, tyrosine and tryptophan biosynthesis, taurine and hypotaurine metabolism, and beta-alanine metabolism.
It should be noted that the metabolic pathways and metabolites described above are closely associated with multiple depression-related neurotransmitter precursors (Parker and Brotchie, 2011; Strasser et al., 2016; Hüfner et al., 2019). Based on a correlation analysis of differential microflora and differential metabolites, Lv et al. (2022) speculated that the changes in microflora caused by manual acupuncture affect changes in serum metabolites, integrating acupuncture into the regulation of depression. It has been reported that the intestinal microbiota affects neurotransmitter production and tryptophan metabolism (O'Mahony et al., 2015) and that tryptophan can produce a variety of indole metabolites under the influence of the microbiome. In the intestinal environment, tryptophan and its indole metabolites are precursors or signaling molecules of many bioactive substances (such as 5-HT, aryl hydrocarbon receptor ligands, oxindole, and isatin), which play an important role in the gut-brain axis (Hubbard et al., 2015; De Vadder et al., 2018; Jaglin et al., 2018; Roager and Licht, 2018; Qu et al., 2019; Li et al., 2020; Zhang et al., 2022c). Using chemical-labeling-assisted liquid chromatography-tandem mass spectrometry, Zhang et al. (2022b) successfully determined 15 tryptophan indole metabolites in the feces of rats with functional dyspepsia after acupuncture intervention. Li et al. (2021) administered manual acupuncture treatment to depressed rats, and their results showed that the levels of DA and 5-HT in serum and hippocampus increased after treatment. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis found that manual acupuncture affected cell growth, the apoptosis pathway, the cofactor and vitamin metabolism pathway, the amino acid metabolism pathway, and the carbohydrate metabolism pathway in rats, as well as improving their depression-like behavior through the brain-gut axis. Li et al. (2022d) measured the diversity and richness of the microflora in the stomach and duodenum after electroacupuncture, as well as changes in the serum contents of VIP, DA, and trefoil factor (TFF). The researchers found that electroacupuncture increased TFF and DA levels in the serum, as well as the diversity and richness of the stomach microbiota. In addition to being protective of the gastrointestinal mucosa (Mashimo et al., 1996; Gyires, 2004; Huang and Wu, 2021), trefoil factor can also reverse depression-like behavior, as dopamine does (Shi et al., 2012). Accordingly, electroacupuncture's effectiveness may be related to levels of dopamine and trefoil factor and to structural changes in the microbiome. Adiponectin, an adipocyte-derived protein, can inhibit macrophage infiltration and the increase of pro-inflammatory cytokines, maintain intestinal homeostasis, and improve intestinal barrier integrity (Obeid et al., 2017); it is significantly associated with Firmicutes differential OTUs and may be a key node between the intestinal microbiota and depression (Bai et al., 2022). Liu et al. (2020b, 2022) found that electroacupuncture restored melatonin and adiponectin levels in the plasma of diseased rats to near-normal levels, while also restoring the expression of VIP and its type 1 and type 2 receptors (VPAC1 and VPAC2).
In addition, there is evidence that VIP can improve the immunity of the intestinal mucosa (Seillet et al., 2020) and that it is also associated with the biology of depression (Shukla et al., 2022); thus, acupuncture may act as a mediator of gut-brain communication. According to Xia X. et al. (2022), electroacupuncture activated Nod-like receptor signaling pathways and promoted intestinal defensin production to protect the host from intestinal pathogens, thus maintaining intestinal homeostasis. Based on Spearman correlation coefficient analysis, the authors suggested that the induction of intestinal defensins by electroacupuncture appeared to be a key mechanism for restoring intestinal microflora homeostasis. Furthermore, electroacupuncture can also up-regulate energy metabolism and down-regulate lipid metabolism. Si et al. (2022) found that electroacupuncture intervention restored 10 significantly altered bacterial genera and 11 metabolites in obese mice to normal levels, and that intestinal flora and metabolic levels were strongly correlated. The researchers speculated that acupuncture restored gut flora balance primarily by regulating glycerophospholipid metabolism and primary bile acid biosynthesis. Several studies have shown that the intestinal microflora is involved in the pathogenesis of depression through glycerophospholipid metabolism and primary bile acid biosynthesis (MahmoudianDehkordi et al., 2022). Accordingly, it can be speculated that electroacupuncture might be useful in treating depression by regulating the intestinal flora's production of glycerophospholipids and bile acids. Xie et al. (2020) found that electroacupuncture could reduce the serum levels of total cholesterol, TG, and LDL while improving the level of HDL; this result may be related to the increased relative abundance of Lactobacillus and its effect on lipid metabolism. According to a relevant study, aberrant lipid metabolism is one of the predictive biological indicators of PSD. Wang et al. (2019) showed that electroacupuncture could regulate lipid metabolism and improve insulin sensitivity and glucose homeostasis by regulating intestinal flora composition (mainly by reducing the Firmicutes/Bacteroides ratio and increasing Prevotella_9 abundance).

Effect on inflammatory responses

During inflammation or homeostatic disorders, the microbiota can act as a protective force for the body by affecting immune system function. Essentially, the gut microbiota protects the host by controlling the function and number of inflammatory cells, directly or indirectly, in response to systemic or local infection challenges (Gaboriau-Routhiau et al., 2009; Ivanov et al., 2009; Miller et al., 2009). There may, of course, also be an overabundance of bacteria in the intestinal tract with the potential to magnify inflammation, leading to local and systemic pathological consequences (Belkaid and Hand, 2014). Regarding the regulatory effects of acupuncture: on the one hand, acupuncture can restore the balance of the intestinal flora structure, adjusting the proportion, abundance, and number of pathogenic bacteria and beneficial microorganisms and thus affecting the activation or inhibition of pro-inflammatory and anti-inflammatory cells; on the other hand, acupuncture can protect the structure and function of the intestinal mucosal barrier system, preventing the translocation of pathogenic bacteria and inflammation-causing substances and thereby avoiding inflammatory storms in the body.
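Several of the studies in this review, such as the Spearman analysis attributed to Xia X. et al. (2022) above and the Streptococcus-HAMD correlation noted earlier, relate a taxon's abundance to a clinical or immune readout across subjects. A minimal sketch of such an analysis follows; the paired values are hypothetical and serve only to show the form of the computation.

```python
# Minimal sketch: Spearman correlation between a taxon's relative abundance and
# a clinical score across subjects. All values below are hypothetical.
from scipy.stats import spearmanr

streptococcus_abundance = [0.02, 0.08, 0.15, 0.05, 0.22, 0.11]  # per subject
hamd_scores             = [6,    14,   21,   10,   25,   18]    # per subject

rho, p_value = spearmanr(streptococcus_abundance, hamd_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Spearman's rho is typically preferred over Pearson's r here because microbiome abundances are rarely normally distributed; a rank correlation makes no distributional assumption.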
Basic studies have shown that increased production of proinflammatory cytokines after cerebral ischemia can activate indoleamine 2,3-dioxygenase (IDO) in glial cells and reduce the bioavailability of tryptophan (which is metabolized mainly via two pathways, the serotonin and kynurenine pathways); as a result, 5-HT is depleted, serotonergic transmission is blocked, and neuroactive tryptophan metabolites (such as kynurenine) are produced (O'Connor et al., 2009; Souza et al., 2017; Körtési et al., 2022), eventually leading to PSD (Spalletta et al., 2006). Despite the lack of a complete understanding of the pathophysiology of depression, inflammation is a key driver of its development, and inflammatory factors are important biological factors that increase the risk of depressive episodes. Several cohort studies using logistic regression analysis have found that increased serum levels of proinflammatory factors (such as IL-6, IL-17, TNF-α, and IL-1β) in the acute phase after stroke are independent predictors of depression (Kim et al., 2017), and reducing the expression of IL-6, TNF-α, and IL-1β in the cortex and hippocampus alleviated depression-like behavior in rats with PSD. Post-stroke depression is a brain disease. In addition to the stroke itself, which triggers glial cell activation and causes CNS inflammation (Lubart et al., 2021; Rayasam et al., 2022; Tariq et al., 2022), peripheral inflammatory factors may establish a relationship with the CNS after crossing the BBB (Beurel et al., 2020). Furthermore, a significant correlation between CNS inflammation and peripheral inflammation is also supported (Leng et al., 2018; Richards et al., 2018). Notably, CNS inflammation is closely associated with microglia and astrocytes, and there is considerable evidence that inflammatory microglia and astrocytes play an important role in the development of depression (Rajkowska and Stockmeier, 2013; Peng et al., 2015; Leng et al., 2018; Xia W. et al., 2022; Xie et al., 2022). As soon as intestinal barrier function is compromised, bacterial translocation becomes easy, the immune system becomes activated, and inflammatory factors increase, producing almost all of the depression-associated changes in neural activity (such as neuroendocrine function, neuroplasticity, neurotransmitter signaling, cerebrovascular endothelial cell signaling, circumventricular organ signaling, and peripheral immune cell-to-brain signaling); eventually, neuroinflammation causes behavioral changes and depression (Smith, 1991; Miller et al., 2009; Miller and Raison, 2016; Kronsten et al., 2022). Lv et al. (2022) found that manual acupuncture inhibited the levels of pro-inflammatory cytokines (IL-1β, IL-6, and TNF-α) in the gut and hippocampus, and their correlation analysis suggested that acupuncture promoted intestinal microbiota regulation, improved intestinal barrier function, reduced intestinal inflammation, and decreased central inflammation. Pro-inflammatory cytokines (TNF-α and IL-1β) and LPS may induce depressive symptoms and are the most reliable biomarkers of inflammation in depressed patients (Spalletta et al., 2006; Maes et al., 2008; Miller et al., 2009). Researchers have shown that intestinal barrier destruction in depression is related to increases in proinflammatory factors such as LPS, TNF-α, and IL-1β (Guo et al., 2022).
Following acupuncture, significant reductions in LPS, TNF-α, and IL-1β in serum and brain were observed, and Zhang et al. (2022d) hypothesized that this resulted from manual acupuncture restoring the intestinal barrier and reducing inflammation by regulating the intestinal flora. Xie et al. (2020) examined inflammatory cytokines and mediators in the serum and articular synovial fluid of rats and showed that electroacupuncture could reduce the levels of IP-10, IL-1α, and MCP-1 in serum and of LPS in articular synovial fluid, playing an anti-inflammatory role. Hao et al. (2022) observed that after manual acupuncture intervention, the immunofluorescence staining intensity of LPS decreased, as did the number of cells expressing glial fibrillary acidic protein (GFAP) in the lamina propria of the intestine and the contents of LPS and TNF-α in serum and the intestinal tract. The researchers suggested that, by reducing the toxic effect of TNF-α on the intestinal mucosa and the inflammatory effect of LPS, the structure of the tight-junction proteins was protected and fewer inflammatory mediators entered the circulation, thus protecting the CNS. SCFAs, which are metabolized by the gut microbiome, have been found to be associated with changes in the gut microbiome of depressed mice (Hao et al., 2022). Studying the effect of electroacupuncture on serum SCFA content, Ke et al. (2022) found a strong correlation between the prognosis of apoplexy rats and intestinal microbiota production of SCFAs (especially acetic acid and propionic acid): electroacupuncture may improve stroke outcomes by increasing acetic acid and propionic acid levels to restore the energy supply of the intestinal epithelium, reduce intestinal inflammation, and stabilize the intestinal microbiota. Another study showed that electroacupuncture could increase the concentration of SCFAs (acetic acid and butyric acid) in feces, which may be related to increases in Lactobacillus and Blautia. The same work also showed significantly reduced serum levels of inflammation markers such as LPS and IL-6, positively correlated with changes in the populations of Alistipes, Helicobacter, and Prevotella; histopathological analysis revealed significantly less mucosal inflammation, goblet cell loss, and epithelial damage in the colon. Intestinal epithelial cells are believed to contain a variety of pattern recognition receptors, including Toll-like receptors (TLRs), that are important for regulating inflammatory responses to invading pathogens and pathogen-produced toxins (Chassin et al., 2010; Belkaid and Hand, 2014). In an earlier experiment, Liu et al. (2020b) discovered that electroacupuncture could inhibit the proinflammatory factors IFN-γ, TNF-α, and IL-6 through TLR4 signaling via a MyD88-dependent pathway, preventing an excessive whole-body immune response. In subsequent related experiments, Liu et al. (2022) also found that electroacupuncture could reduce the plasma level of the proinflammatory factor IL-6 and significantly increase Th2/ILC2-related cytokines (including IL-4, IL-5, IL-9, IL-13, and IL-10), as well as the ILC3-derived cytokines IL-22 and GM-CSF; among these, IL-10 is a potent anti-inflammatory cytokine through which ILC2s exert their functions.
Regulatory T cells (Treg) and pro-inflammatory T helper 17 (Th17) cells are a pair of functionally opposite CD4+ T lymphocyte subsets, with Th17 promoting tissue inflammation and Treg exhibiting anti-inflammatory properties; PSD is driven in part by an imbalance between the two subsets of this immune axis (Ju and Wang, 2019; Cui et al., 2021; Westfall et al., 2021). Depressive symptoms can be improved by the gut microbiome's regulation of the Treg/Th17 immune axis (Westfall et al., 2021). In their study, Wei et al. (2019) found that electroacupuncture increased the diversity and abundance of the gut microbiota, positively correlated with an improvement in the percentage of Treg cells among CD4+ T lymphocytes and negatively correlated with the percentage of Th17 cells, indicating that a possible mechanism by which electroacupuncture regulates gut microbiota structure is through its effects on the internal immune environment.

Regulation of central neurons

In stroke survivors, depression is associated with the survival status of neurons (Zavvari and Nahavandi, 2020). One study suggested that electroacupuncture can reduce the abundance of Erysipelotrichaceae, which have pro-inflammatory properties, decrease the mRNA levels of the proinflammatory cytokines IL-6 and TNF-α, and reduce the loss of dopaminergic neurons in the substantia nigra (SN). The researchers speculated that electroacupuncture played a neuroprotective role for dopaminergic neurons by inhibiting inflammation in the SN, alleviating behavioral deficits in mice, and that this effect may be related to the regulation of intestinal microbes. Jang et al. (2020) suggested that the immunomodulatory function of the gut microbiome plays a key role in neuroprotection and anti-inflammation: manual acupuncture can inhibit the expression of Bax, NF-κB, and TNF-α, restore the expression of Bcl-2, and reduce the activation and overexpression of microglia and astrocytes. Neuroprotection thus occurs through manual acupuncture by blocking neuroinflammatory responses and apoptosis and by increasing the level of dopaminergic fibers and neurons in the striatum and SN. Li et al. (2021) conducted manual acupuncture treatment on depressed rats and found that acupuncture regulates gut microbes and neurotransmitters to alleviate depression-like manifestations. Brain-derived neurotrophic factor (BDNF) signaling was enhanced by manual acupuncture intervention, increasing the mRNA and protein expression of BDNF and the N-methyl-D-aspartate receptor (NMDAR) and, as a result, the number of astrocytes in the hippocampus; at the same time, the mRNA and protein expression of β-CaMKII, which can block the BDNF receptor, was decreased in the hippocampus.

Conclusion and prospects

Acupuncture can positively promote the prognosis of patients with PSD by maintaining the dynamic balance of the intestinal flora structure, suggesting that acupuncture is a promising non-drug treatment for reducing depressive symptoms.
This paper has examined the relationship between the intestinal flora and PSD and the role of acupuncture in that relationship, summarizing how acupuncture may treat PSD through multiple targets (protecting the intestinal mucosal barrier system; avoiding HPA axis overactivity that activates intestinal immunity; regulating metabolites and metabolic pathways to maintain intestinal homeostasis; controlling the balance of inflammatory cells and inflammatory factors; and avoiding neuroinflammation and protecting central neurons), and it proposes that the common, core link among these mechanisms is the intestinal microbiota's regulation of the local and systemic immune system. However, the studies above involving acupuncture and intestinal flora adjustments can only demonstrate correlation, not causation. From the current state of research on acupuncture in the treatment of PSD, the mechanism by which acupuncture maintains the dynamic balance of the types and numbers of intestinal flora to treat and prevent depression has not been thoroughly studied, and understanding the dynamics of microbial ecological adjustment in vivo remains a major challenge. Future research should explore whether acupuncture can restore the ecological balance of intestinal microbes in immune-deficient models of depression to improve depression, and should determine the causal relationships among the three to fill the gaps in current knowledge. It is expected that, with more studies, the pathogenesis of depression will be further clarified.

Author contributions

HJ: conceptualization and writing-original draft. SD and JZ: writing-review and editing. BL, WZ, JC, and MZ: investigation. ZM: supervision. CZ: project administration. All authors read and agreed to the published version of the manuscript.

Funding

This work was funded by the Basic Theory Research Project of Tianjin Education Commission (2019KJ067).
Can a building read your mind? Results from a small trial in facial action unit detection

In the last few decades, the energy consumption of individual buildings has been steadily improving. As a result, research efforts are shifting towards acquiring a deeper understanding of occupant comfort, health, and well-being in the built environment. However, existing techniques used to measure and predict the comfort of occupants have seen little change since Fanger. New research attempts are hence focusing on methods to gather more data, more frequently, and less intrusively. A little-explored source of data is that gathered from real-time videos of occupants, the so-called facial action units (FAUs), which are the focus of this paper. These are the facial movements and positions that constitute the basic elements of emotions. Using software developed in the realm of affective computing, seven building occupants were monitored for a period of 2 weeks, whilst also completing surveys that gathered information about the office environment and their work and personal life. Results found that participants who were happy with their office space showed significantly higher average values of the Cheek Raiser (AU06) and Lid Tightener (AU07) facial action units. These findings show the potential of using FAUs to assist in the control and design of buildings in a human-centric manner.

Introduction

Climate change is firmly on the agenda, and could well be humanity's 'greatest threat in thousands of years', said David Attenborough [1]. With 40% of the total energy used in the EU attributable to buildings, the built environment accounts for 36% of the EU's total CO2 emissions, making it the largest contributor to climate change [2]. Although building and environmental regulations, together with research, standards, and technologies, are continuously helping enhance buildings' performance, the fact that buildings are increasing in number to accommodate the predicted 2.5 billion people migrating to urban areas by 2050 dampens these improvements [3]. This is causing green areas in cities to be slowly eroded away in favour of new developments [4], to the detriment of citizens' health and well-being [5]. In parallel, the cost of renewable energy and storage is plummeting, with Swiss investment bank UBS analysts predicting effectively free electricity by as early as 2030 [6]. Research efforts are therefore shifting towards acquiring a deeper understanding of occupant comfort, health, and well-being in the built environment, and putting a value on these metrics can have a positive impact on businesses worldwide, where over 90% of operating costs can be attributed to staff (Figure 1) [7]. New standards, such as WELL and Fitwel, are further spearheading this trend, creating ratings on the premise that a building's design has the potential to positively impact the health and well-being of its occupants. Yet the more established building standards, namely LEED and BREEAM, are now also welcoming the idea, incorporating elements of comfort, health, and well-being into their more traditional energy and sustainability components [8]. All this is part of a much wider movement, triggered by an increasing interest of consumers in their own health and well-being and evidenced by the popularity of smart watches and fitness classes, amongst others [9].

Figure 1. WGBC report showing typical business operating costs [7].
Despite the positive role of standards, the techniques used to measure and predict the comfort and satisfaction of occupants with respect to their environment have seen little change since Fanger [10]. The same can be said about post-occupancy evaluations (POE), whereby both indirect and direct feedback is collected from a building and its occupants. Examples include, but are not limited to, environmental measures and models to predict occupant comfort (indirect), and surveys or focus groups (direct). Among their main drawbacks is that they provide infrequent data and can be extremely disruptive. Thus, new research attempts are focusing not only on finding new methods to gather more data, but also on acquiring it more frequently and less intrusively. These new data types could help not only to move towards more effective control of the next generation of smart, tailored, connected buildings, but also to inform future building design. This paper briefly examines these traditional methods and newer physiological techniques, before setting out a framework and evaluating a small trial in facial action unit (FAU) detection, a technique which uses video data in an attempt to capture occupants' comfort, health, and well-being.

Existing Data and Models for Buildings

There are three main stages in the development of comfort models in buildings. The first, devised by Fanger [10], is the predicted mean vote (PMV), which uses data from chamber experiments to predict thermal comfort. The second, the adaptive comfort model, takes the outdoor temperature as a reference to establish an acceptable range of indoor temperatures [11]. And finally, the personal comfort model steps away from the previous two methods, both established on the basis of average values, to create a model that predicts an individual's response using data acquired from personal comfort systems (PCS), such as a heated chair, together with environmental data and building system settings [12]. POE are another popular way to gather data, combining information from environmental sensors and occupant surveys in an attempt to optimise a building. This data can also be gathered in-use, allowing building managers to adjust settings and fine-tune performance, and providing the opportunity to inform future designs and control strategies [13]. Physiological measurements, such as heart rate, galvanic skin response, and electrical brain impulses, can also help to give a more detailed picture of occupant comfort and well-being. Yet a potential drawback is their intrusiveness. A clear example is the electroencephalogram (EEG), which, despite providing interesting data on cognitive performance in a research setting, requires a large headset to be worn by the user [14]. Other, less intrusive physiological measurements are of course possible, with smartwatches being a great example. They are able to measure numerous factors, including heart rate, location, skin temperature, perspiration, and activity [15], making them a widespread tool in research studies [16,17,18,19].

Why Faces?

The face offers an unexplored opportunity for capturing data. In his book The Expression of the Emotions in Man and Animals (1872), Darwin examined emotions as discrete entities [20]. Much later on, Ekman [21] established six basic emotions: anger, disgust, fear, happiness, sadness, and surprise, which were slightly different in concept from the 4 pairs of opposites proposed by Plutchik [22]: joy-sadness, anger-fear, trust-disgust, surprise-anticipation.
More recent studies on monkeys and apes further suggest that facial expressions are less voluntary than manual gestures, which are in turn less voluntary than language [23]. All of this has paved the way for affective computing, which often uses real-time videos of occupants to capture the so-called facial action units (FAUs), the facial movements and positions that constitute the basic elements of emotions. Affective computing was defined by Picard [24] as 'computing that relates to, arises from, or influences emotions'. In the same paper she further ventured to state that computers '[are] beginning to acquire the ability to express and recognise affect, and may soon be given the ability to have emotions', implying that their recognition and interpretation of affect may grant computers the ability to interact with humans in a more intelligent and natural fashion [25]. The prolific installation of sensors and control mechanisms in buildings is converting them into giant computers, but the next step is yet to be taken: using personal comfort models to understand occupants and deliver, through automated processes, optimum conditions promoting their health and well-being. The number of cameras used worldwide is staggering, with CCTV security cameras commonplace in commercial buildings, providing a potential source of occupancy data [26]. But it is their use in autonomous vehicles [27] or in Japanese vending machines [28] which is really opening up this new machine-user paradigm. Despite some questionable claims on the misuse of facial recognition, such as to identify homosexuals [29], the advantages outweigh the disadvantages, for instance in assisting doctors to identify psychological conditions, namely depression [30] and suicidal tendencies [31].

A motion-detecting surveillance kit was set up using a Raspberry Pi Zero W running MotionEyeOS and a Pi camera, both encased within a standalone Octopus case (Figure 2). When an occupant is detected, 5-minute videos are recorded at a resolution of 320 x 240 pixels and a rate of 5 frames/second (fps), offering a good trade-off between data size and quality. These videos are then uploaded to the cloud, where they are automatically processed with OpenFace, a FAU software, and then deleted.

Setup

A total of 7 people participated in the study, each being recorded for a period of 2 weeks. Participants were previously informed about the nature of the experiment and tasks involved. An Octopus was placed under their external monitor screen (a common setup in offices), which was found to be the optimum location. Daily surveys were also emailed to participants at 4pm, where 4 questions provided information on temperature, air quality, lighting, and acoustics. Every Thursday, a much longer survey consisting of 50 questions gathered more detailed feedback around background, health, well-being, job satisfaction, and work-space satisfaction. The system itself would preferably be integrated into an existing BMS system. The Octopus has the ability to record on-the-fly and send data to a central time-series database (such as InfluxDB), where this data can then be analysed and viewed before recommendations or actions are taken. The Octopus also has the ability to recognise and locate individual occupants, thus allowing the use of personal comfort models to tailor the local environment in the vicinity of a particular occupant. This data process is schematically outlined in Figure 3.
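The study's own analysis was performed in MATLAB (see the Results below), but the post-processing it describes, collating the per-video OpenFace CSVs, dropping low-confidence frames, and averaging the action units per occupant, can be sketched compactly. This is an illustrative sketch only: it is written in Python rather than MATLAB, the directory layout is hypothetical, and it assumes the standard per-frame column names of OpenFace 2.x output (frame, confidence, success, AU01_r ... AU45_c).

```python
# Minimal sketch: collate OpenFace per-frame CSVs for one occupant, drop poorly
# tracked frames, and average the action-unit columns (hypothetical paths).
import glob
import pandas as pd

frames = pd.concat(
    (pd.read_csv(f) for f in glob.glob("occupant_01/*.csv")),
    ignore_index=True,
)
frames.columns = frames.columns.str.strip()  # OpenFace pads headers with spaces

# Keep frames the tracker is confident about, mirroring the 0.93 threshold
# used in the analysis described in the Results below.
frames = frames[(frames["success"] == 1) & (frames["confidence"] >= 0.93)]

au_intensity = frames.filter(regex=r"^AU\d+_r$").mean()  # intensity, 0-5 scale
au_presence  = frames.filter(regex=r"^AU\d+_c$").mean()  # fraction of frames
print(au_intensity.sort_values(ascending=False).head())
```

Running this once per occupant yields the per-person average AU profiles on which the two survey-defined groups are compared.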
Results

Videos were analysed on a powerful computer. In total, over the 2 weeks, 400 GB (800 hours) of video were produced and processed, hence the desire to process on-the-fly when implemented in the real world. The survey results were collected via Qualtrics and were used to split participants into two groups for analysis purposes. Results from the two longer surveys can be seen in Table 1; they clearly show that the three highlighted occupants (1, 5, 7) expressed a particularly low overall satisfaction with indoor environmental quality. Each frame of video results in 714 columns of data in .csv format, analysed in MATLAB. These include gaze, pose, facial landmarks, and the AUs. For each occupant, the 5-minute video .csv outputs were collated and the relevant columns retained, leaving tables with circa 2 million rows and 37 columns of frame numbers, confidence scores, and FAUs. Data was then cleaned, removing rows with a confidence rating below 0.93. Figure 4 shows the average AUs for both presence and intensity. Although more data needs to be gathered and analysed before any conclusive assertions can be made, there are already clear differences in average AU values between the two groups, particularly regarding the action units in Table 2.

Conclusion

This paper has set out the framework and methodology necessary for effectively capturing occupants' FAUs. Findings show that there is potential for FAUs to assist in the control and design of buildings in a human-centric manner. Along with wearables and advanced AI-driven data analysis, it is hence possible to create a unique digital twin of each individual that takes into account and learns their preferences, ultimately leading to an optimisation of their comfort, health, and well-being in the built environment.
Highly Cis-1,4 Selective Polymerization of Conjugated Dienes Catalyzed by N-heterocyclic Carbene-ligated Neodymium Complexes

Neodymium complexes containing N-heterocyclic carbene (NHC) ligands, NdCl3[1,3-R2(NCH=)2C:]·THFx (Nd1: R = 2,6-iPr2C6H3, x = 0; Nd2: R = 2,6-Et2C6H3, x = 1; Nd3: R = 2,4,6-Me3C6H2, x = 1), were synthesized and employed as precatalysts for the coordination polymerization of conjugated dienes (butadiene and isoprene). In combination with triisobutylaluminium (TIBA), Nd1 promoted butadiene polymerization to produce extremely high cis-1,4 (up to 99.0%) polybutadienes with high molecular weight (Mw = 250–780 kg·mol⁻¹). The Nd1/TIBA catalytic system also exhibited both high catalytic activity and high cis-1,4 selectivity (up to 97.8%) for isoprene polymerization. The catalytic activity and the molecular weight and molecular weight distribution of the resulting polydienes were directly influenced by the Al/Nd molar ratio, the aging method, and the polymerization temperature. Very interestingly, the high cis-1,4 selectivity of the catalyst towards butadiene and isoprene remained almost unchanged under different reaction conditions. Cis-1,4 polyisoprenes with high molecular weight (Mw = 210–530 kg·mol⁻¹), narrow molecular weight distribution (Mw/Mn = 1.9–2.7), and high cis-1,4 selectivity (~97%) could be synthesized using the aged Nd1/TIBA catalytic system in the presence of isoprene (100 equivalents relative to Nd) at low Al/Nd molar ratios of 6–10. Polyisoprenes with low molecular weights (Mw = 12–76 kg·mol⁻¹) and narrow molecular weight distributions (Mw/Mn = 1.7–2.6) were obtained using Nd2 and Nd3 as precatalysts, indicating that the molecular weight of the resulting polyisoprenes can be adjusted by changing the substituents of the ligand in the Nd complex.

INTRODUCTION

Cis-1,4 selective polymerization of conjugated dienes, e.g., butadiene and isoprene, is of great importance in the synthetic rubber industry for producing high cis-1,4 polydienes with excellent properties such as high elasticity and high fatigue and crack resistance. [1] The catalyst, which largely determines the catalytic activity and the microstructure of the resulting polymers, plays an important role in the industrial production of polydienes. [2−5] Therefore, much effort has been devoted to developing catalysts that produce polydienes with high cis-1,4 regularity and controlled molecular weight. [2−5] Rare-earth-based catalysts stand out as being highly active and selective for butadiene and isoprene polymerizations. [2−5] In general, these catalysts are divided into two types: Ziegler-Natta catalytic systems and cationic catalytic systems. [3−5] The Ziegler-Natta rare earth metal catalysts, mainly binary systems (LnCl3-R3Al) and ternary systems (LnL3-R3Al-X, Ln = lanthanide; L = carboxylate, phosphate, alkyl, or aryl oxide; X = halogen-containing compound), have been used in the synthetic rubber industry because of their easy preparation, thermal stability, and low moisture and air sensitivity. [3−5] The addition of oxygen-containing ligands, e.g., alcohols [6−8] and tetrahydrofuran, [9] enhances the catalytic activity of the binary systems.
Bidentate amidinate, [10−12] β-diketiminate, [13] iminopyrrole, [14] indolide-imine, [15] aminopyridinato, [16] aminoindolyl, [17] and alkoxy N-heterocyclic carbene [18] ligands, as well as tridentate pincer ligands such as N^C^N, [19] C^C^C, [20] P^N^P, [22,23] N^C^O, [24] N^C^S, [24] N^N^O, [25,26] and N^N^N [27−29] ligands, have also been used in the preparation of lanthanide complexes. Activated by an organoborate cocatalyst, the resulting cationic catalytic systems display high catalytic activity and cis-1,4 selectivity toward conjugated diene polymerization. [10−29] The chemical structure of the ancillary ligand can steer the behavior of the coordination polymerization and the characteristics of the resulting polymers. This concept, that the ligand plays a regulatory role in the catalytic behavior of the catalyst, was used to design Ziegler-Natta rare-earth metal catalysts. In the presence of triisobutylaluminium (AliBu3), neodymium complexes containing heterocyclic Schiff base, [30] 8-hydroxyquinoline, [31] quinolinylcarboxylate, [32] or NCN-pincer [33,34] ligands show high catalytic activity for isoprene polymerization with high cis-1,4 stereospecificity (95%−98%). N-heterocyclic carbene (NHC) has become an organocatalyst and ubiquitous ligand in organometallic chemistry because of its extraordinary electron richness and facile access to structurally diverse analogues. [35] Scandium trialkyl complexes containing NHC ligands have been reported as precatalysts for α-olefin polymerization with excellent catalytic activity. [36,37] We have also reported that the copolymerization of ethylene with propylene can be achieved by vanadium complexes containing NHC ligands, and that both the catalytic activity and the microstructure of the resulting copolymers are influenced by the chemical structure of the NHC ligand. [38,39] Therefore, the introduction of an NHC ligand to NdCl3 is of great interest for developing a novel neodymium-based catalytic system with both high activity and high regioselectivity for the coordination polymerization of conjugated dienes. Herein, the synthesis of novel NdCl3·NHC·THFx complexes (NHC: 1,3-R2(NCH=)2C:; Nd1: R = 2,6-iPr2C6H3, x = 0; Nd2: R = 2,6-Et2C6H3, x = 1; Nd3: R = 2,4,6-Me3C6H2, x = 1) and their catalytic behavior in the coordination polymerizations of butadiene and isoprene upon activation with AliBu3 are reported.

EXPERIMENTAL

General Considerations

All manipulations of air- and moisture-sensitive compounds were performed under a nitrogen atmosphere using standard Schlenk techniques or in a nitrogen-filled drybox. Tetrahydrofuran (THF, Beijing Chemical Works) was refluxed over sodium/benzophenone, distilled under nitrogen, and stored in the drybox over molecular sieves (4 Å). NdCl3·xTHF [9] and the NHC ligands [40] were prepared according to the reported methods. Chlorobenzene (C6H5Cl, Tianjin Fuchen Chemical Co.) was freshly distilled from phosphoric anhydride. Hexanes and cyclohexane (Beijing Yanshan Petrochemical Co.) were dried over calcium hydride (CaH2) and distilled before use. Isoprene (purity: 99.9%, Beijing Yanshan Petrochemical Co.)
was freshly distilled from CaH2 before use. Butadiene (Beijing Yanshan Petrochemical Co.) and a triisobutylaluminium solution in hexanes (0.74 mol·L−1, Beijing Yanshan Petrochemical Co.) were used as received.

Procedure of Conjugated Diene Polymerization

All operations were conducted under an atmosphere of dry nitrogen. For polymerizations using the in situ prepared catalyst, the conjugated diene monomer (butadiene or isoprene) and solvent were introduced into a vessel and AliBu3 was added. Then, the solution of the Nd complex was introduced into the vessel to start the coordination polymerization at a defined temperature. For polymerizations using the aged catalyst, the mixture of the Nd complex and AliBu3, in the presence of different amounts of monomer, was aged at a fixed temperature for a designated time in advance. The conjugated diene monomer (butadiene or isoprene) and solvent were introduced into a vessel, and the aged catalyst solution was then added to start the coordination polymerization at a defined temperature. The stirred vessel was kept in a constant-temperature bath during the polymerization. After a definite time, the polymerization was terminated by the addition of ethanol containing 1% 2,6-di-tert-butyl-4-methylphenol. The mixture was then poured into ethanol containing a small amount of hydrochloric acid. The precipitated polymer was washed with ethanol and dried under vacuum at 45 °C to a constant weight.

Characterization of Resulting Polymers

The molecular weights of the resulting polybutadienes and polyisoprenes, i.e., the number-average molecular weight (Mn), weight-average molecular weight (Mw), and polydispersity index (PDI, Mw/Mn), were determined by gel permeation chromatography (GPC) using a Waters 1515-2410 system equipped with Waters RI 2410 and UV 2489 detectors and four Waters Styragel HT3-4-5-6 columns (Milford, MA). The polymer sample was dissolved in THF at a concentration of 2 g·L−1. THF was used as the eluent at a flow rate of 1.0 mL·min−1 at 30 °C. The calibration curve was obtained with polystyrene standards. The contents of cis-1,4, trans-1,4, and 1,2 structures in the resulting polydienes were determined by FTIR analysis according to the reported method. [42] A polymer film was prepared by spreading a small amount of a dichloromethane (CH2Cl2) solution of the polymer on a KBr slice and allowing the CH2Cl2 to evaporate. The polymer was characterized on a Nexus 670 FTIR spectrophotometer (Nicolet, Madison, WI).

Synthesis of Nd Complexes with NHC Ligands

The reaction of equimolar quantities of the NHC ligands and NdCl3·xTHF (x = 1, 2, 3) in THF under nitrogen at 25 °C for 5 h afforded the Nd complexes Nd1−Nd3, as shown in Scheme 1. The paramagnetic complexes Nd1−Nd3 were characterized by elemental analysis, and the Nd contents of these complexes were determined by titration. The results indicated that one THF molecule was incorporated in each of the complexes Nd2 and Nd3. By contrast, no THF molecule was present in the Nd1 complex, owing to the bulky isopropyl substituents on the phenyl rings of the ligand.
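Throughout the Results below, Mn, Mw, and PDI are quoted from GPC traces. As a reminder of what these moments mean, the short sketch below computes them from a discretized molecular-weight distribution; the slice values are hypothetical placeholders that only illustrate the arithmetic, not any measured trace.

# Minimal sketch: number- and weight-average molecular weights from a
# discretized GPC trace. The (M_i, n_i) slices below are hypothetical.
Mi = [50e3, 100e3, 200e3, 400e3, 800e3]   # slice molecular weights (g/mol)
ni = [0.10, 0.25, 0.35, 0.20, 0.10]       # relative number of chains per slice

Mn = sum(n * M for n, M in zip(ni, Mi)) / sum(ni)                       # Σ n_i M_i / Σ n_i
Mw = sum(n * M * M for n, M in zip(ni, Mi)) / sum(n * M for n, M in zip(ni, Mi))
print(f"Mn = {Mn/1e3:.0f} kg/mol, Mw = {Mw/1e3:.0f} kg/mol, PDI = {Mw/Mn:.2f}")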
Coordination Polymerization of Conjugated Dienes Using Nd1 as Precatalyst

Butadiene polymerization with the unaged Nd1/AliBu3 catalytic system

The neodymium complex Nd1, containing an NHC ligand with bulky isopropyl substituents at the ortho positions of the phenyl rings, was employed as the precatalyst, and triisobutylaluminium (AliBu3, Al) was used as the cocatalyst to investigate the coordination polymerization of the conjugated dienes butadiene and isoprene. Butadiene and isoprene polymerizations under various Al/Nd molar ratios, polymerization temperatures (Tp), and polymerization times (tp) were investigated using the Nd1/AliBu3 catalytic system, in which the active centers formed in situ in the polymerization system. The experimental results are summarized in Table 1. The data in Table 1 show that the conversion of butadiene and the catalytic activity increased with increasing Al/Nd molar ratio (entries 1−3 and 5−8). The neodymium complex Nd1 displayed good catalytic activity (2.8 × 10^4 g·mol−1 of Nd) for butadiene polymerization at an Al/Nd molar ratio of 50 (entry 4 in Table 1), although poor catalytic activities were observed at low Al/Nd molar ratios (entries 1 and 2 in Table 1). Remarkably, polybutadiene with a high cis-1,4 content of ~99.0% and high molecular weight (Mw = 540 kg·mol−1) was obtained. The conversion of butadiene at an Al/Nd ratio of 15 improved markedly from 6% to 20% when Tp was raised from 25 °C to 50 °C. As the Al/Nd molar ratio increased from 15 to 50, the conversion of butadiene increased from 20% to 60%, and the catalytic activity increased from 1.4 × 10^4 g·mol−1 of Nd to 4.1 × 10^4 g·mol−1 of Nd (entries 5−8 in Table 1). Polybutadiene with high molecular weight (Mw = 470 kg·mol−1) and a uniform molecular weight distribution was afforded at an Al/Nd molar ratio of 15 (entry 5 in Table 1). However, the molecular weight distribution became broader (4.6−14.0) with increasing Al/Nd molar ratio, indicating that multiple active species formed or that chain transfer reactions accelerated. Interestingly, the distinctive cis-1,4 selectivity remained almost unchanged (97.9%−98.8%) over a broad range of Al/Nd molar ratios from 15 to 50 (entries 5−8 in Table 1). Overall, polybutadienes with high cis-1,4 contents were obtained by butadiene polymerization using the Nd1/AliBu3 catalytic system. The Al/Nd molar ratio and polymerization temperature have an obvious influence on the catalytic activity, molecular weight, and molecular weight distribution.

Isoprene polymerization with the unaged Nd1/AliBu3 catalytic system

Based on the above investigation of butadiene polymerization, Nd1, containing an NHC ligand with bulky isopropyl substituents at the ortho positions of the phenyl rings, was also selected as the precatalyst for the coordination polymerization of isoprene. The effects of the Al/Nd molar ratio and polymerization temperature on isoprene polymerization were investigated using the unaged Nd1/AliBu3 catalytic system. The results are summarized in Table 1 (entries 9−20). When isoprene polymerization was carried out under conditions similar to those used for butadiene polymerization, the polymer yield was negligible. Negligible polyisoprene was obtained in the mixed solvent of hexane and cyclohexane, possibly due to the poor solubility of the catalyst in the polymerization system.
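Before turning to the solvent question, a quick numerical note on the activity figures quoted here and in the isoprene runs below. Activities are reported in grams of polymer per mole of Nd; assuming activity = conversion × (monomer feed mass per mole of Nd), the two butadiene entries quoted above imply a nearly constant monomer loading per Nd, which the check below makes explicit. The per-entry (conversion, activity) pairs are taken from the text; the definition itself is our assumption.

# Consistency check (sketch): activity [g polymer / mol Nd] vs. conversion,
# assuming activity = conversion * (monomer feed mass per mol Nd).
entries = {                      # (conversion, activity in g/mol Nd), from the text
    "Al/Nd=15, 50 C": (0.20, 1.4e4),
    "Al/Nd=50, 50 C": (0.60, 4.1e4),
}
M_BD = 54.09                     # g/mol, butadiene

for label, (conv, activity) in entries.items():
    feed_g_per_molNd = activity / conv      # implied monomer feed mass per mol Nd
    feed_equiv = feed_g_per_molNd / M_BD    # implied butadiene equivalents per Nd
    print(f"{label}: implied feed ~ {feed_g_per_molNd:.2e} g/mol Nd "
          f"(~ {feed_equiv:.0f} equiv butadiene)")
# Both entries give ~7e4 g/mol Nd (~1.3e3 equiv), i.e. the two reported
# (conversion, activity) pairs are mutually consistent under this definition.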
Chlorobenzene was therefore selected as a good solvent for isoprene polymerization, in order to systematically investigate the effects of the chemical structure of the ligands, the preparation of the catalytic system, and the polymerization conditions on the catalytic activity and the microstructure of the resulting polymers. The amount of cocatalyst, usually expressed as the Al/Nd molar ratio, has a significant influence on the catalytic activity and on the molecular weight and molecular weight distribution of the resulting polyisoprenes. As seen in Table 1, the isoprene conversion and catalytic activity increased as the Al/Nd molar ratio increased (entries 10−14). An isoprene conversion of 71% and a catalytic activity of 5.0 × 10^4 g·mol−1 of Nd at Tp of 50 °C were obtained at an Al/Nd molar ratio of 30 (entry 14 in Table 1). The molecular weight of the resulting polyisoprene decreased with increasing Al/Nd molar ratio, probably due to increased chain transfer to AliBu3 at higher Al/Nd molar ratios. It is worth noting that the microstructure of the resulting polyisoprenes was not affected by the change in Al/Nd molar ratio: as shown in Table 1, polyisoprenes with a cis-1,4 content of ca. 96% were obtained at Tp of 50 °C as the Al/Nd molar ratio increased from 10 to 30. Isoprene polymerizations were also carried out at polymerization temperatures (Tp) ranging from 30 °C to 60 °C, and the results are given in Table 1 (entries 15−20). Tp clearly influenced the isoprene conversion, the catalytic activity, and the molecular weight, molecular weight distribution, and cis-1,4 content of the resulting polyisoprenes. The isoprene conversion increased greatly from 50% to 84% and the catalytic activity increased from 3.5 × 10^4 g·mol−1 of Nd to 5.9 × 10^4 g·mol−1 of Nd when Tp was raised from 30 °C to 50 °C (entries 15−20). However, both the catalytic activity and the isoprene conversion decreased when Tp exceeded 50 °C, since catalyst deactivation became more prominent at higher polymerization temperatures. Similar to other reported catalytic systems, [30,31,33] a slight decrease in the cis-1,4 content of the polymer products with increasing polymerization temperature was observed. The GPC traces of the polyisoprenes prepared at different temperatures from 30 °C to 50 °C are displayed in Fig. 1; all of them exhibit bimodal and broad molecular weight distributions. The overall molecular weight decreased greatly and the molecular weight distribution broadened with increasing Tp, as shown in Fig. 2; chain transfer side reactions are accelerated at higher polymerization temperature, which accounts for the large decrease in overall molecular weight.

Isoprene polymerization using the aged Nd1/AliBu3 catalytic system

In the catalyst aging process, the catalyst components Nd1 and AliBu3 were allowed to react to form the active centers prior to addition to the monomer solution. Both the aging temperature (Ta) and the aging time (ta) played important roles in the formation of active centers during aging. The catalyst solution obtained after the aging process was used for isoprene polymerization.
The experimental results of isoprene polymerization using the aged catalyst are displayed in Table 2 (entries 2−7), and isoprene polymerization using the unaged catalyst is also included in Table 2 (entry 1) for comparison. Table 2 shows that, under similar polymerization conditions, using the aged catalyst instead of the unaged catalyst greatly increased the isoprene conversion from 2% to 25%−95% and the catalytic activity from 0.2 × 10^4 g·mol−1 of Nd to as high as 6.7 × 10^4 g·mol−1 of Nd. The catalytic behavior of the aged Nd1/AliBu3 catalytic system is affected by Ta. The isoprene conversion increased from 25% to 87% and the catalytic activity increased from 1.8 × 10^4 g·mol−1 of Nd to 6.1 × 10^4 g·mol−1 of Nd as Ta increased from 40 °C to 60 °C at ta of 30 min, while the cis-1,4 selectivity stayed at around 96.5% (entries 2, 4, and 7 in Table 2). The isoprene conversion and catalytic activity likewise increased with longer aging time (Table 2). All of these results indicate that the catalytic activity can be remarkably improved by aging the catalyst. In the absence of monomer, the reaction of the Nd complex with AliBu3 forms Nd compounds with σ-alkyl bonds; in the presence of monomer, by contrast, it forms π-allyl Nd complexes, which exhibit higher stability than the σ-alkyl Nd compounds. Isoprene polymerization using the Nd1/AliBu3 catalyst aged in the presence of isoprene (Ip/Nd = 100) for different aging times (ta) was further investigated, and the experimental results are displayed in Table 2 (entries 8−12). To distinguish the two aging methods clearly, aging without isoprene is denoted method A, while aging with isoprene is denoted method B. With the ternary catalyst aged for ta of 3 min, the isoprene conversion reached 99% and the catalytic activity reached 6.9 × 10^4 g·mol−1 of Nd even at Tp of 0 °C for a polymerization time of 14 h. A very high conversion of 93% and a catalytic activity of 6.5 × 10^4 g·mol−1 of Nd were also obtained with an aging time of 9 min, which implies a sufficient operating window. However, the monomer conversion decreased to 21% when ta was 60 min, a trend different from that observed with aging method A. The molecular weight of the obtained polyisoprenes was also affected by the aging time: it increased with increasing aging time, which might be attributed to a decreasing amount of active species in the catalytic system at longer ta. The aging time hardly affected the cis-1,4 content of the resulting polyisoprenes, indicating that the catalytic system retained its high cis-1,4 selectivity even at long aging times. Although high isoprene conversion and polyisoprenes with high molecular weight were realized, the molecular weight distribution was still broad. Therefore, isoprene polymerizations with the ternary catalyst (method B) at low Al/Nd molar ratios were further conducted at a low Tp of −15 °C. As shown in Table 2 (entries 13−22), the molecular weight distributions of the resulting polyisoprenes at Tp of −15 °C became much narrower than those of the polyisoprenes synthesized at Tp of 0 and 25 °C, although the isoprene conversion and catalytic activity decreased to 21%−53% and 1.5 × 10^4 − 3.7 × 10^4 g·mol−1 of Nd, respectively.
An isoprene conversion of 53% could still be obtained even when the Al/Nd molar ratio was decreased to 8, by optimizing ta (entry 17 in Table 2). No regular effect of the Al/Nd molar ratio on isoprene conversion was observed. Very importantly, polyisoprenes with high molecular weight (Mw ranging from 210 kg·mol−1 to 530 kg·mol−1) and narrow molecular weight distribution (Mw/Mn = 1.9−2.7) could be obtained at various Al/Nd molar ratios and aging times at the low Tp of −15 °C (entries 13−22 in Table 2). The essentially unimodal GPC traces of the resulting polyisoprenes are displayed in Fig. 3. The influence of ta on Mw and Mw/Mn differed at different Al/Nd molar ratios, owing to the complicated reaction of Nd1 with AliBu3 in the presence of isoprene. Polyisoprene with high molecular weight (530 kg·mol−1) and narrow molecular weight distribution (Mw/Mn = 2.4) was successfully synthesized at Tp of −15 °C using the ternary catalyst (Ip/Al/Nd molar ratio = 100/6/1) prepared by aging method B. Moreover, higher cis-1,4 selectivity (96.9%−97.6%) was observed with the ternary catalyst than with the unaged binary catalyst (entries 8−22 in Table 2 versus entries 9−19 in Table 1). Representative FTIR spectra of polyisoprenes prepared with the aged ternary catalyst and with the unaged binary catalyst are shown in Fig. 4. [27−32,42] As shown in Fig. 4, a stronger band at 1128 cm−1 and a weaker band at 889 cm−1 are observed in the FTIR spectrum of the polyisoprene prepared by the aged ternary catalyst compared with that of the polyisoprene prepared by the unaged binary catalyst, indicating that the aged ternary catalyst displays higher cis-1,4 selectivity than the unaged binary catalyst. The results of isoprene polymerization using the aged catalysts indicate that the catalytic activity is obviously improved by aging. Polyisoprene with high molecular weight but broad molecular weight distribution was afforded by the aged binary Nd1/AliBu3 catalytic system, while polyisoprene with high molecular weight and narrow molecular weight distribution was afforded by the aged ternary catalyst (Ip/Nd1/AliBu3).

Effect of the Ligands in Nd Complexes on Catalytic Activity and Microstructure of the Resulting Polydienes

Isoprene polymerizations using Nd complexes containing NHC ligands with ethyl (Nd2) or methyl (Nd3) substituents at the N-aryl ring were investigated. The experimental results of isoprene polymerizations at various Al/Nd molar ratios are summarized in Table 3. The aged ternary catalysts (Ip/Nd2/AliBu3 and Ip/Nd3/AliBu3) prepared by aging method B exhibited both good activity and high cis-1,4 selectivity at relatively high Al/Nd molar ratios. At the optimized Al/Nd molar ratio, the isoprene conversion and catalytic activity were 61% and 4.4 × 10^4 g·mol−1 of Nd for Nd2, and 83% and 6.0 × 10^4 g·mol−1 of Nd for Nd3. Polyisoprenes prepared using precatalyst Nd2 at Al/Nd ratios of 15 and 20 exhibited a high cis-1,4 content of 97.8% (entries 1 and 2 in Table 3). The molecular weight and molecular weight distribution of the resulting polymers were significantly influenced by the structure of the Nd complex.
Compared with the polyisoprenes prepared with Nd1, polyisoprenes with drastically lower molecular weight (Mw = 12−51 kg·mol−1 for Nd2 and 15−76 kg·mol−1 for Nd3) and unimodal molecular weight distributions (Mw/Mn = 1.7−2.6) were afforded by using Nd2 or Nd3 as the precatalyst (entries 1−7 in Table 3). This result suggests that a uniform active species was present during the polymerization of isoprene using Nd2 or Nd3 as the precatalyst.

Fig. 3 GPC traces (signal intensity versus elution time) of the resulting polyisoprenes prepared with the aged ternary catalyst at Al/Nd = 6 (entry 20 in Table 2), Al/Nd = 8 (entry 17 in Table 2), and Al/Nd = 10 (entry 13 in Table 2), all with ta = 3 min.

Fig. 4 Representative FTIR spectra of the resulting polyisoprenes prepared by using the aged ternary catalyst (entry 13 in Table 2) and the unaged binary catalyst (entry 17 in Table 1).

The average number of polymer chains (ncalcd) can be calculated as the ratio of mp to Mn, given the narrow molecular weight distribution, where mp is the weight of the resulting polyisoprene and Mn is its number-average molecular weight (kg·mol−1). The theoretical number of polymer chains (ntheo) in the polymerization system was 4.0 × 10^−5 mol. It is observed that ncalcd is much higher than ntheo for the Nd2/AliBu3 catalytic system (Al/Nd = 20 and 30) and the Nd3/AliBu3 catalytic system (Al/Nd = 30 and 40), which is attributed to pronounced chain transfer to the cocatalyst during isoprene polymerization (a numerical sketch of this chain-count comparison is given after the conclusions). The molecular weight and molecular weight distribution of the resulting polyisoprenes are greatly affected by the structure of the ligand (as shown in Fig. 5). Polyisoprenes with low molecular weights and narrow molecular weight distributions were obtained by using complexes Nd2 and Nd3, which bear NHC ligands with ethyl or methyl substituents at the N-aryl ring, owing to the combined steric and electronic effects of the ligands. By comparison, polyisoprenes with higher molecular weights were prepared by using complex Nd1, which contains an NHC ligand with bulky isopropyl substituents. Therefore, polyisoprenes with low or high molecular weight and narrow molecular weight distribution can be afforded by changing the substituents at the N-aryl rings of the Nd complex. The effect of the ligand on the molecular weight of the resulting polyisoprenes also indicates that the NHC ligand remained associated with the active Nd centers during the polymerization of isoprene.

CONCLUSIONS

A new binary catalytic system comprising an N-heterocyclic carbene-ligated neodymium complex and AliBu3 was developed for highly cis-1,4 selective polymerization of butadiene and isoprene. The Nd1/AliBu3 catalytic system provided high cis-1,4 selectivity of up to ~99% for the polymerization of butadiene. The Al/Nd molar ratio and polymerization temperature had little effect on the regioselectivity, whereas the conversion of butadiene and the catalytic activity increased with increasing Al/Nd molar ratio. The new unaged binary catalytic system possessed high catalytic activity for the polymerization of isoprene, affording polyisoprenes with high cis-1,4 content (95.8%−97.3%) and molecular weight (260−670 kg·mol−1). Importantly, the activity of the aged catalyst was superior to that of the unaged catalyst. Aging Nd1 and AliBu3 in the presence of isoprene was beneficial to forming uniform active species, and thus polyisoprenes with narrow molecular weight distribution (Mw/Mn = 1.9−2.7) could be obtained.
Meanwhile, the resulting polyisoprenes had high molecular weight (Mw = 210−530 kg·mol−1) and high cis-1,4 content (96.5%−97.6%). The structure of the NHC ligand played a significant role in controlling the molecular weight of the resulting polyisoprenes. Polyisoprenes with low molecular weight (Mw = 12−76 kg·mol−1) and narrow molecular weight distribution (Mw/Mn = 1.7−2.6) were obtained by using the Nd complexes bearing the less sterically bulky NHC ligands (Nd2 and Nd3). Remarkably, the distinctive cis-1,4 selectivity remained almost unchanged during isoprene polymerization over broad ranges of Al/Nd molar ratio, polymerization temperature, aging time, and aging temperature. These results should provide significant insight into the design of catalysts for highly cis-1,4 selective polymerization of conjugated dienes.
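As promised above, here is a small sketch of the chain-count comparison used in the ligand-effect discussion. The catalyst loading (4.0 × 10^−5 mol Nd, equal to ntheo) is taken from the text; the polymer mass and Mn values are hypothetical placeholders, since the per-entry values of Table 3 are not reproduced here.

# Sketch: calculated vs. theoretical number of polymer chains.
# n_calcd = m_p / M_n (valid when the distribution is narrow);
# n_calcd >> n_theo signals chain transfer to the Al cocatalyst.
n_theo = 4.0e-5          # mol Nd charged (from the text)

# Hypothetical entry: 2.0 g of polyisoprene with M_n = 20 kg/mol
m_p = 2.0                # g, polymer recovered (placeholder)
Mn = 20e3                # g/mol (placeholder)

n_calcd = m_p / Mn       # mol of chains
print(f"n_calcd = {n_calcd:.1e} mol vs n_theo = {n_theo:.1e} mol "
      f"-> ~{n_calcd / n_theo:.1f} chains per Nd")
# A ratio well above 1 chain per Nd indicates transfer to AliBu3.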
Effects of an antimicrobial stewardship intervention on perioperative antibiotic prophylaxis in pediatrics

This study aims to determine the effectiveness of an Antimicrobial Stewardship Program based on a Clinical Pathway (CP) in improving the appropriateness of perioperative antibiotic prophylaxis (PAP). This pre-post quasi-experimental study was conducted over a 12-month period (six months before and six months after CP implementation) in a tertiary Pediatric Surgical Centre. All patients from 1 month to 15 years of age undergoing one or more surgical procedures were eligible for inclusion. PAP was defined as appropriate according to clinical practice guidelines. Seven hundred sixty-six children were included in the study, 394 in the pre-intervention and 372 in the post-intervention period. After CP implementation, there was an increase in appropriate PAP administration, as well as in the selection of the appropriate antibiotic for prophylaxis, both for monotherapy (p = 0.02) and combination therapy (p = 0.004). The duration of prophylaxis also decreased during the post-intervention period, with an increase in correct PAP discontinuation from 45.1 to 66.7% (p < 0.001). Despite the greater use of narrow-spectrum antibiotics for fewer days, there was no increase in treatment failures (10/394 (2.5%) pre vs 7/372 (1.9%) post, p = 0.54). CPs can be a useful tool to improve the choice of antibiotic and the duration of PAP in pediatric patients.

Background

Surgical site infection (SSI) is the second most common healthcare-associated infection [1], and the Centers for Disease Control and Prevention (CDC) reported that it complicates approximately 5% [2] of surgical operations each year. SSIs account for more than 16% of all hospital-acquired infections recorded in the CDC's National Healthcare Safety Network in adults [3] and 17-18% in children [4,5], and for 38% of nosocomial infections in surgical patients [2]. So far, only four studies have focused on antimicrobial stewardship projects (ASP) for perioperative prophylaxis in children. Three of these studies showed an improvement in antimicrobial prescriptions after the implementation of perioperative guidelines [3,6,7], while Putnam et al. reported no improvement despite multiple interventions, such as the creation of a targeted pre-incisional checklist and of a computerized order entry module [8]. These few data limit the conclusions that can be drawn about the efficacy and safety of these strategies and represent important room for improvement for ASPs in pediatric surgical prophylaxis on both sides of the Atlantic [3,6-8]. The aim of this study is to determine the effectiveness of an ASP based on a Clinical Pathway (CP) in improving adherence to perioperative antibiotic prophylaxis (PAP) guidelines [9] in a Pediatric Surgical Centre. A secondary aim is to evaluate the effect of CP implementation on SSIs. To our knowledge, no specific guidelines on antimicrobial prophylaxis in pediatric surgery have been published so far; hence, our CP was developed according to the main guidelines for adult patients, published jointly by the American Society of Health-System Pharmacists (ASHP), the Infectious Diseases Society of America (IDSA), the Surgical Infection Society (SIS), and the Society for Healthcare Epidemiology of America (SHEA) in 2013 [9].
Study design

This is a pre-post quasi-experimental study assessing changes in PAP appropriateness during the 6-month period preceding CP implementation (pre-intervention, from 1 February 2016 to 31 July 2016) and during the six months after CP implementation (post-intervention, from 1 February 2017 to 31 July 2017). The study was set at the Surgical Paediatric Unit of the Department for Women and Children Health at Padua University Hospital.

Clinical pathway

The clinical pathway was developed by a multidisciplinary group (paediatric infectious diseases, microbiology, and paediatric surgery) based on the most important international clinical guidelines [9], taking into account our local microbiology data, and with the supervision of the paediatric infectious diseases team of Philadelphia Children's Hospital (Figs. 1, 2, 3, and 4). The CP details all the steps needed to administer a correct PAP.

Fig. 1 Perioperative Antibiotic Prophylaxis CP. These figures were included both in the lecture slides and in the pocket card that was delivered to all the medical staff of the Pediatric Surgery Unit. They include all the steps needed to administer a correct PAP.

The first step is to consider the surgical procedure (type, site, and risk of developing SSIs) and, consequently, to decide whether to give PAP to the patient. The second step is to consider the patient's history of colonization by multidrug-resistant organisms (MDROs). If the history is negative for MDROs, an empiric antibiotic regimen is administered according to the type of surgical procedure; otherwise, the prophylaxis is targeted to the specific MDRO. Dose and duration of administration must follow the indications detailed in the CP. The drug of choice for all surgical interventions is a first-generation cephalosporin alone. The association with metronidazole is recommended for surgical procedures with a high risk of contamination by anaerobic bacteria. Other molecules, such as clindamycin, gentamicin, and ciprofloxacin, should be given only to patients with a proven allergy to beta-lactam antibiotics. The first antibiotic dose should be administered within 30-60 min before incision, with the exception of vancomycin and ciprofloxacin, which should be given 120 min before the incision because of their longer half-lives. An intraoperative re-dose should be given if the procedure extends beyond two half-lives of the antibiotic, and should be considered in the setting of excessive blood loss (> 25 mL/kg). PAP should be discontinued within 24 h after the end of the procedure and should not be extended in the presence of wound drains or prosthetic implants, according to the work of Wilson and colleagues [10]. Specific recommendations for antibiotic dosages are included in the CP.

Intervention

On 31 January 2017, the CP for PAP was implemented. On the same day, an educational lecture was presented to all the medical staff of the Pediatric Surgery Unit. This meeting provided a review of the clinical guidelines for PAP and the potential benefits of a correct PAP, discussed the rationale for the guideline recommendations, and highlighted situations in which local practice in the Pediatric Surgery Unit diverged from the guideline recommendations. Following the lecture, a pocket card containing the CP was delivered to all participants and, on the same day, to all other physicians and residents who were unable to attend the seminar.
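The timing and discontinuation rules above amount to a small decision procedure, and a sketch of that logic is given below. All function and field names are hypothetical, the half-life values are illustrative placeholders rather than dosing guidance, and the rules encoded are only those stated in the text (30-60 min pre-incision dosing, 120 min for vancomycin/ciprofloxacin, re-dosing beyond two half-lives or after blood loss > 25 mL/kg, and discontinuation within 24 h).

# Hedged sketch of the CP's timing rules; not the CP itself.
HALF_LIFE_H = {"cefazolin": 1.8, "metronidazole": 8.0, "vancomycin": 6.0}
LONG_HALF_LIFE = {"vancomycin", "ciprofloxacin"}

def first_dose_window_min(drug: str) -> tuple:
    """Minutes before incision in which the first dose should start."""
    return (120, 120) if drug in LONG_HALF_LIFE else (30, 60)

def needs_redose(drug: str, op_duration_h: float, blood_loss_ml_per_kg: float) -> bool:
    """Re-dose if the operation outlasts two half-lives, or consider it
    when blood loss exceeds 25 mL/kg."""
    return (op_duration_h > 2 * HALF_LIFE_H[drug]
            or blood_loss_ml_per_kg > 25)

def must_discontinue(hours_since_end_of_surgery: float) -> bool:
    """PAP stops within 24 h of the end of surgery, drains or implants
    notwithstanding."""
    return hours_since_end_of_surgery >= 24

print(first_dose_window_min("cefazolin"))   # (30, 60)
print(needs_redose("cefazolin", 4.5, 10))   # True: 4.5 h > 2 x 1.8 h
print(must_discontinue(30))                 # True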
Study population

All patients aged between one month and 15 years who underwent one or more surgical procedures were eligible for inclusion in our study. Exclusion criteria were: concomitant infections, ongoing antibiotic therapy, complicated abdominal infection, immunodeficiency, immunosuppressive therapy, and neurosurgical, vascular, otorhinolaryngologic (ORL), or ocular procedures.

Data source

All clinical, demographic, diagnostic, and antimicrobial data were manually collected from electronic (Galileo system) or paper medical records. We used a password-protected REDCap® data collection form and stored the data on the secure server at the University of Padua. Surgical procedures were recorded using the International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM). For every patient, the following were recorded: 1) preoperative data, including gender, age, and weight; 2) procedure data, including type of procedure (divided into major categories according to the ICD-9-CM), wound class (Clean, Clean-Contaminated, Contaminated, or Dirty/Infected, according to the CDC classification [11]), duration of the surgical procedure, urgency of the procedure, and length of hospital stay; 3) PAP data, such as indication for PAP, administration of PAP, and, among those who received PAP, correctness of PAP (both agent and duration), correctness of the antimicrobial agent, and correctness of the time of antibiotic discontinuation; 4) post-procedure data, including the date of medical evaluation for SSI. PAP was defined as appropriate only if the correct antimicrobial agent for the specific surgical procedure performed had been given and discontinued within 24 h after completion of surgery, according to the clinical practice guidelines for antimicrobial prophylaxis in surgery [9]. To evaluate the effectiveness and safety of the intervention, a medical-record follow-up was performed to assess for SSIs within 3 months after discharge. Privacy was guaranteed in two ways: a unique, study-specific survey number was assigned to each patient, and no personally identifying data were collected. This study was approved by the Research Ethics Committee of the Department for Woman and Child Health at the University of Padua.

Data analysis

The data were analyzed with the SAS 9.4 program (SAS Institute Inc., Cary, NC, USA) for Windows. Patients' demographic and clinical data were analyzed descriptively. Comparisons between the two periods were performed with the Chi-square test or Fisher's exact test for qualitative variables, and with the Wilcoxon rank-sum test for quantitative variables. We conducted stratified analyses to assess whether the effectiveness of the intervention was affected by surgical characteristics such as type of procedure, urgency of the procedure, and duration of hospital stay. Statistical significance was set at p < 0.05.

Results

During the study period, 842 children underwent surgery. Of the 430 children in the pre-intervention period, 11 were excluded because they were admitted to an intensive care ward (PICU/NICU), 18 for a complicated abdominal infection, and 7 for an ongoing infectious process. In the post-intervention period, 13 were excluded because they were admitted to the PICU/NICU, 13 for a complicated abdominal infection, and 13 for an ongoing infectious process. Thus, 766 children were included in the study, 394 in the pre-intervention period and 372 in the post-intervention period. The two populations were similar in terms of sex and age, with an overall female predominance.
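To make the pre/post comparisons described under Data analysis concrete, the sketch below recomputes one key contrast from the Results that follow (correct PAP in 90/184 patients pre-intervention versus 93/153 post-intervention). A 2×2 chi-square without continuity correction reproduces the reported p ≈ 0.03; SciPy's default Yates correction is slightly more conservative.

# Sketch: chi-square comparison of correct-PAP proportions, pre vs post.
from scipy.stats import chi2_contingency

#              correct  not correct
table = [[90, 184 - 90],    # pre-intervention  (90/184)
         [93, 153 - 93]]    # post-intervention (93/153)

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p ~ 0.03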
Baseline patient and procedure characteristics in the pre- and post-intervention periods are displayed in Table 1. No significant difference in the type of surgical procedures was observed between the pre- and post-intervention periods, and 184/394 (46.7%) and 153/372 (41.1%) patients received PAP during the pre- and post-intervention periods, respectively (Table 2).

Table 1 Patients' main characteristics (gender, age, weight) and preoperative data (wound class, type of procedure) in the pre- and post-intervention periods.

In the post-intervention period, there was an increase in correct PAP administration, with 90/184 (48.9%) in the pre- versus 93/153 (60.0%) in the post-intervention period (p = 0.03) (Table 3). In the post-intervention period, there was also an increase in cefazolin use from 78.8 to 87.0% (p = 0.0001), with a decrease in ampicillin/sulbactam use from 20.1 to 5.4% (p = 0.003), as suggested by the CP (Table 3). Indeed, we found that the selection of the appropriate antibiotic for prophylaxis improved in the post-intervention period, both for monotherapy, from 81.0 to 91.9% (p = 0.02), and for combination therapy, from 65.9 to 100% (p = 0.004) (Table 4). Stratification of the population by type and characteristics of the surgical procedures showed that the CP was significantly effective especially for emergency procedures and for all surgical procedures involving the head/neck and thorax (Table 5).

Discussion

Perioperative antibiotic prophylaxis is the most effective intervention to prevent SSIs [1]. The most recent guidelines [9] define the procedures requiring PAP, recommending narrow-spectrum antibiotics as the first choice, for less than 24 h, for all procedures (with the exception of cardiac surgery). So far, few studies have developed an antimicrobial stewardship program to improve antibiotic prescriptions for PAP in children. Three of these studies showed an improvement in antimicrobial prescriptions after the implementation of perioperative guidelines [3,6,7], while Putnam et al. reported no improvement despite multiple interventions [8]. Despite the availability of consensus guidelines designed to facilitate the appropriate use of PAP, significant variation in this practice has been found for the most commonly performed operations in pediatric surgery [12]. On 31 January 2017, the CP for PAP (Figs. 2, 3, and 4) was implemented, and on the same day an educational lecture was presented; after the lecture, a pocket card was delivered to all participants. As reported by the studies mentioned above [3,6,7], in our Centre, too, compliance with the PAP guideline improved after CP implementation. Correct PAP significantly increased from 48.9 to 60.1%, with a change both in first-choice antibiotics and in the duration of prophylaxis. The choice of correct monotherapy accounted for 81% in the pre-intervention period, reaching 91.9% after CP implementation. Cefazolin, the most prescribed antibiotic in both periods, definitively became the first choice in the post-intervention period, with a concomitant decrease in ampicillin/sulbactam. This change affected especially head/neck and thorax procedures, where ampicillin/sulbactam had been the drug of choice before the intervention. Indeed, the PAP CP recommends cefazolin as the first-line antibiotic for all procedures because of its activity against S. aureus (MSSA) and Gram-negative bacteria, its narrow spectrum, and its low cost. Ampicillin/sulbactam should be considered an alternative only for its broader spectrum [9]. Moreover, the use of correct combination therapy increased.
Again, an important contribution came from the reduction of ampicillin/sulbactam prescriptions, especially in association with metronidazole. Indeed, this combination should be avoided because of the overlapping spectrum of activity of the two drugs against anaerobic bacteria. In the post-intervention period, the combination of choice was cefazolin plus metronidazole. The number of patients with PAP discontinued within 24 h also increased (from 45.1 to 66.7%; p < 0.001). The procedures that benefited the most from the intervention were emergency procedures. Usually, patients who undergo emergency surgical evaluation are severely ill, and for this reason surgeons are more prone to exceed the 24 h. Indeed, this represents one of the most difficult points of implementation for an antimicrobial stewardship program. Many barriers to stopping PAP have been identified; the most common are the complexity and duration of the surgical procedure, diagnostic uncertainty, inexperienced clinicians, extended in-hospital stay, patient preferences, and the fear of SSIs [3,13]. The persistence of a urinary catheter represents another point of discussion. Even though all the guidelines recommend stopping PAP despite the presence of a urinary catheter, many surgeons are still reluctant to do so. This could be the reason why, for urologic procedures, we did not see the same improvement we observed for others. Moreover, many of the current guidelines and specialty-specific recommendations for the pediatric population are based on adult clinical data, and physicians may not find those guidelines relevant to their pediatric patients. Finally, confusion may arise when indications from adult guidelines are not in line with pediatric observational studies (e.g., inguinal hernia repair) [13]. For a further improvement in the PAP compliance rate, some authors have suggested reinforcing the guidelines' effect with a periodic audit by a surgeon trained in antimicrobial stewardship [3]. This physician would monitor the choice, time, and dose of PAP administration and would ensure guideline adherence. Moreover, Prado et al. [14] demonstrated that a hospital pharmacist can play a key role, participating in education activities as part of the discussion groups and in managerial actions that optimize the process of ordering, dispensing, administering, and documenting perioperative antibiotic prophylaxis. Despite the greater use of narrow-spectrum antibiotics for fewer days, there was no increase in treatment failures between the two analyzed periods. This study has strengths and limitations. It is the first study to evaluate the effectiveness of antimicrobial stewardship through clinical pathways in an Italian hospital. The intervention was designed to be feasible and generalizable and was developed by a multidisciplinary team to guarantee the best quality and a high level of coordination of interventions. The primary limitation of our study is the retrospective nature of the analysis. Another limitation concerns the analysis of treatment failure: we collected SSI information only through the electronic medical records of our centre; hence, if a patient had been admitted to another centre, we would have missed that information.

Conclusion

CPs accompanied by a proper educational intervention can be a useful tool to improve the choice of the first-line antibiotic and the duration of PAP in pediatric patients.
Introduction to co-split Lie algebras

In this work, we introduce a new concept obtained by defining a new compatibility condition between Lie algebras and Lie coalgebras. With this terminology, we describe the interrelation between the Killing form and the adjoint representation from a new perspective.

Introduction

During the past decade, a number of papers have appeared on the study of Lie bialgebras (see [EK], [ES] and references therein). It is well known that a Lie bialgebra is a vector space endowed simultaneously with a Lie algebra structure and a Lie coalgebra structure, together with a certain compatibility condition, which was suggested by the study of Hamiltonian mechanics and Poisson Lie groups ([ES]). In the present work, we consider a new [Lie algebra]-[Lie coalgebra] structure, namely, a co-split Lie algebra. Using this concept, we can easily study the Lie algebra structure on the dual space of a semi-simple Lie algebra from another point of view. This paper is arranged as follows: first we recall some concepts and study the relations between Lie algebras and Lie coalgebras. Then we give the definition of a co-split Lie algebra. In Section 4, we prove that sl_{n+1}(C) is a co-split Lie algebra. Then we discuss the interrelation of the Killing form and the adjoint representation of sl_{n+1}(C). Finally, the results are proved to hold for all finite-dimensional complex semi-simple Lie algebras.

Basics

In this section, we mainly recall the definitions of Lie algebras, Lie coalgebras, and Lie bialgebras, as well as their relationship. For more information, see [EK], [ES] and references therein.

A Lie algebra is a pair (L, [,]), where L is a linear space and [,] : L × L → L is a bilinear map (in fact, a linear map from L ⊗ L to L) satisfying, for all x, y, z ∈ L: (La1) [x, x] = 0 (antisymmetry); (La2) [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (the Jacobi identity).

A Lie coalgebra is a pair (L, δ), where L is a linear space and δ : L → L ⊗ L is a linear map satisfying the dual axioms: (Lc1) (1 + τ) ∘ δ = 0, where τ : L ⊗ L → L ⊗ L is the twist map x ⊗ y ↦ y ⊗ x (co-antisymmetry); (Lc2) (1 + ξ + ξ²) ∘ (1 ⊗ δ) ∘ δ = 0, where ξ denotes the cyclic permutation of L ⊗ L ⊗ L (the co-Jacobi identity).

A Lie bialgebra is a triple (L, [,], δ) such that (L, [,]) is a Lie algebra, (L, δ) is a Lie coalgebra, and the following compatibility condition holds: (Lb3) for any x, y ∈ L, δ([x, y]) = x · δ(y) − y · δ(x). The compatibility condition (Lb3) says that δ is a derivation map. In the following lemmas, c is an arbitrary constant.

Lemma 2.1 For any finite-dimensional Lie algebra (L, [,]), the dual space L* has a Lie coalgebra structure defined by δ := c [,]* : L* → (L ⊗ L)* ≅ L* ⊗ L*, the dual map of the bracket.

Lemma 2.2 For any finite-dimensional Lie coalgebra (L, δ), the dual space L* has a Lie algebra structure defined by [,] := c δ* : L* ⊗ L* ≅ (L ⊗ L)* → L*, the dual map of the cobracket.

These two lemmas are natural conclusions and are easy to verify. A triple (L, [,], δ) is called a co-split Lie algebra if (L, [,]) is a Lie algebra, (L, δ) is a Lie coalgebra, and the compatibility condition [,] ∘ δ = id_L holds; in this case δ is called a co-splitting of L. If in the compatibility condition id_L is replaced by a non-degenerate diagonal matrix, then (L, [,], δ) is called a weak co-split Lie algebra and δ is called a weak co-splitting. The pairings in Lemmas 2.1 and 2.2 are given by ⟨δ(f), x ⊗ y⟩ = c⟨f, [x, y]⟩ and ⟨[f, g], x⟩ = c⟨f ⊗ g, δ(x)⟩ for all x, y ∈ L and f, g ∈ L*. This follows from the fact that V → V* is a contravariant functor.

Co-split Lie algebras of type A

Suppose that L is a complex simple Lie algebra of type A_n. Then it can be realized as the special linear Lie algebra sl_{n+1}(C), with the standard basis consisting of the matrix units E_{i,j} (i ≠ j) and the diagonal elements E_{i,i} − E_{i+1,i+1}. The Lie bracket is the commutator [x, y] = xy − yx. Define a linear map δ : sl_{n+1}(C) → sl_{n+1}(C) ⊗ sl_{n+1}(C) on this basis. One checks that δ is well-defined.

Theorem 4.1 (sl_{n+1}(C), δ) is a Lie coalgebra.

Proof. First, it is clear that (1 + τ) ∘ δ = 0, so δ satisfies the co-antisymmetry property. A direct calculation shows that δ also satisfies the co-Jacobi identity. Then (sl_{n+1}(C), δ) is a Lie coalgebra.

Theorem 4.2 (sl_{n+1}(C), [,], δ) is a co-split Lie algebra.

Proof. On each basis element it is easy to check that [,] ∘ δ = id; together with Theorem 4.1, the theorem holds.
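Although the explicit formula for δ on the E_{i,j} basis is not reproduced above, there is a natural candidate for a co-splitting on any semi-simple Lie algebra that is worth recording as an illustrative remark (this candidate is our reconstruction, not necessarily the paper's explicit δ): dualize the bracket through the Killing form and antisymmetrize. The LaTeX computation below shows why the compatibility condition then holds with c = 1.

% Illustrative remark: a Killing-form candidate for a co-splitting.
% Let \kappa(x,y)=\mathrm{tr}(\operatorname{ad}x\,\operatorname{ad}y) be the
% Killing form of a semi-simple Lie algebra L, with dual bases \{e_i\},\{e^i\}
% (\kappa(e_i,e^j)=\delta_i^j). Define
\[
  \delta(x) \;=\; \tfrac{1}{2}\sum_i \bigl( e_i \otimes [e^i, x]
                \;-\; [e^i, x] \otimes e_i \bigr),
\]
% which is co-antisymmetric by construction. Applying the bracket,
\[
  [\,\cdot\,,\cdot\,]\circ\delta(x)
  = \tfrac{1}{2}\sum_i \bigl( [e_i,[e^i,x]] - [[e^i,x],e_i] \bigr)
  = \sum_i \operatorname{ad}(e_i)\operatorname{ad}(e^i)\,x
  = \Omega_{\mathrm{ad}}\,x = x,
\]
% since the Casimir operator \Omega_{\mathrm{ad}} built from the Killing form
% acts as the identity on the adjoint representation:
% \mathrm{tr}\,\Omega_{\mathrm{ad}} = \sum_i \kappa(e_i,e^i) = \dim L.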
Dual Lie algebras, Killing form and adjoint representation

In this section, we discuss the interrelation of the Killing form and the adjoint representation for the Lie algebra of type A within our new terminology.

Theorem 5.1 ((sl_n)*, −2nδ*) is a Lie algebra isomorphic to sl_n; the isomorphism B is defined on the dual basis, where {f_{i,j} | 1 ≤ i, j ≤ n} forms a basis of (gl_n)* ⊃ (sl_n)*.

Proof. By the definition of δ*, (sl_n)* is a Lie algebra under the bracket −2nδ*, and B is an isomorphism.

Theorem 5.2 The bilinear form (,)_B is a non-zero scalar multiple of the Killing form.

Proof. This is direct.

Now we can consider the maps relating these structures. Theorem 5.3 relates δ to the adjoint representation ad : sl_n → End(sl_n).

Remark 5.1 For convenience, many computations are made in gl_n or (gl_n)*, but the results always hold in sl_n or (sl_n)*.

Co-splitting Theorem

In this section, we prove the following theorem:

Theorem 6.1 Any finite-dimensional complex simple Lie algebra has a co-split Lie structure.

Proof. For a simple Lie algebra L of type X_l other than type A, the proof is divided into the following steps.

Step 1: Suppose that V is a non-trivial irreducible X_l-module of dimension n. Then there is an injection ρ : L → sl_n ⊂ End(V), and it is easy to check that the bilinear form (,)_B of sl_n is still non-degenerate on ρ(L).

Step 2: Let M be the orthogonal complement of ρ(L) with respect to (,)_B, that is, M = {m ∈ sl_n | (m, x)_B = 0 for all x ∈ ρ(L)}. Then M is a ρ(L)-submodule and sl_n = ρ(L) ⊕ M.

Step 3: Lemma 6.1 concerns the restriction δ_res of δ to ρ(L). Proof. First, it is easy to show that δ is an injective map of sl_n-modules, hence of ρ(L)-modules. The remaining assertion is obvious from the containment relation.

Step 4: Lemma 6.2 [,] ∘ δ_res is a non-zero scalar multiple of id_{ρ(L)}.

Proof. Suppose that ∆+ is the positive root system of X_l and γ is the highest root. It is easy to find a suitable basis of ρ(L). Since γ is the highest root, for any α ∈ ∆+ we have [E_γ, E_α] = 0. By the property of δ (Theorem 5.3) and the definition of δ_res, the second assertion holds because ρ(L) ≅ ρ(L)*. Clearly, then, [,] ∘ δ_res(X_γ) ≠ 0. Secondly, δ_res(X_γ) is a highest weight vector of the L-module ρ(L) ⊗ ρ(L) ≅ L ⊗ L; thus the equation in this lemma holds.

Up to now, we have completed the proof of Theorem 6.1. We also obtain the following result as a direct consequence.

Remark 6.1 This work shows that for any finite-dimensional semi-simple Lie algebra L over the complex field C (or, equivalently, over any algebraically closed field of characteristic zero), there exists an important relation between its Killing form and its adjoint action. Hence our new algebraic structure proves to be very useful. However, many more problems about it remain to be solved.
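As a concrete sanity check on the compatibility condition, the script below verifies numerically, for sl_2(C) with its standard basis {e, h, f}, that the Killing-form candidate cobracket from the remark above satisfies [,] ∘ δ = id. This is an illustrative verification of that candidate, not of the paper's explicit δ on the E_{i,j} basis.

import numpy as np

# Standard basis of sl_2(C): e, h, f.
e = np.array([[0, 1], [0, 0]], dtype=float)
h = np.array([[1, 0], [0, -1]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
basis = [e, h, f]

def brk(a, b):                      # Lie bracket = matrix commutator
    return a @ b - b @ a

def coords(x):                      # coordinates of x in the basis {e, h, f}
    return np.array([x[0, 1], x[0, 0], x[1, 0]])

# ad-matrices and the Killing form K_ij = tr(ad e_i . ad e_j).
ad = [np.column_stack([coords(brk(b, v)) for v in basis]) for b in basis]
K = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])

# Dual basis {e^i} with K(e_i, e^j) = delta_i^j.
Kinv = np.linalg.inv(K)
dual = [sum(Kinv[i, j] * basis[j] for j in range(3)) for i in range(3)]

def bracket_of_delta(x):
    """[.,.] o delta for delta(x) = 1/2 sum_i (e_i (x) [e^i,x] - [e^i,x] (x) e_i)."""
    return 0.5 * sum(brk(basis[i], brk(dual[i], x)) - brk(brk(dual[i], x), basis[i])
                     for i in range(3))

for name, x in zip("ehf", basis):
    assert np.allclose(bracket_of_delta(x), x), name
print("[,] o delta = id verified on sl_2")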
Immunological role and prognostic value of somatostatin receptor family members in colon adenocarcinoma

Colon adenocarcinoma (COAD) is among the most prevalent cancers worldwide, ranking third in incidence and mortality among all malignancies. The somatostatin receptor (SSTR) family comprises G-protein-coupled receptors (GPCRs) that couple to inhibitory G proteins (Gi and Go) upon binding somatostatin (SST) analogs. These GPCRs are involved in hormone release, neurotransmission, inhibition of cell growth, and cancer suppression. However, their roles in COAD remain unclear. This study used bioinformatics to investigate the expression, prognosis, gene alterations, functional enrichment, and immunoregulatory effects of the SSTR family members in COAD. SSTR1-4 are differentially downregulated in COAD, and low SSTR2 expression indicates poor survival. Biological processes and gene expression enrichment of the SSTR family in COAD were further analyzed using the Kyoto Encyclopedia of Genes and Genomes and Gene Ontology. A strong correlation was observed between SSTR expression and immune cell infiltration. We also quantified SSTR2 expression in 25 COAD samples and adjacent normal tissues using quantitative real-time polymerase chain reaction and analyzed its correlation with the dendritic cell marker gene integrin subunit alpha X. The Biomarker Exploration of Solid Tumors portal was used to confirm the correlation of SSTR2 with immunomodulators and immunotherapy responses. Our results identify SSTR2 as a promising target for COAD immunotherapy and provide new insights into the biological functions of the SSTR family and their implications for the prognosis of COAD.

Introduction

Colorectal cancer (CRC) is a common malignancy worldwide, ranking third among all malignancies in incidence and mortality (Shaukat and Levin, 2022). Colon adenocarcinoma (COAD) is the most common form of CRC (Mutch, 2007). COAD progresses through several stages, from normal mucosa to adenoma and finally to cancer (Riihimäki et al., 2016; Arnold et al., 2017). The usual presentation of COAD at the time of medical evaluation includes changes in bowel habits, in addition to rectal bleeding, iron-deficiency anemia, abdominal pain, weight loss, and loss of appetite (Thanikachalam and Khan, 2019); there are no typical or specific clinical signs. Moreover, comprehensive screening is lacking in most areas. Consequently, patients with COAD are commonly diagnosed with advanced cancer, making treatment more challenging and worsening their prognosis (Raza et al., 2022). Therefore, the identification of new biomarkers and therapeutic targets to improve the survival of patients with COAD is urgently needed.
Somatostatin (SST) is an inhibitory peptide hormone produced by neuroendocrine, inflammatory, and immune cells in the central nervous system (CNS) and several peripheral tissues, in response to cytokines, growth factors, thyroid and steroid hormones, neurotransmitters, neuropeptides, nutrients, and ions (Patel, 1999; Wu et al., 2020). SST binds to specific cell surface receptors to suppress exocrine and endocrine secretion as well as tumor cell growth (Priyadarshini et al., 2022). Five subtypes of SST receptors (SSTR1-5) have been identified. The SSTRs are G-protein-coupled receptors; all five isoforms couple to inhibitory Gi proteins and can modulate the intracellular concentration of cyclic AMP (cAMP) by regulating adenylate cyclase activity, thereby transmitting exogenous signals into cells (Bo et al., 2022; Liguz-Lecznar et al., 2022). SSTRs, which mediate hormone release, neurotransmission, cell growth arrest, and cancer inhibition, are abundant in the CNS and associated malignant cells, as well as in peripheral organs, the pancreas, and the gut (Rorsman and Huising, 2018; Harda et al., 2020). Neuroendocrine tumors (NETs) are a diverse category of tumors that can arise in the digestive tract, lungs, and pituitary, among other organs (Klöppel, 2017). The expression of SSTRs, which serve as therapeutic targets for SST analogs that can slow tumor growth and suppress hormone overproduction, is an inherent characteristic of NETs, and the degree of SSTR expression in several NETs has predictive significance for treatment response (Rogoza et al., 2022). Moreover, the internalization capacity of SSTRs and the development of radiolabeled somatostatin analogs have improved cancer diagnosis and treatment (Fani et al., 2017; Delpassand et al., 2022).

However, limited research has examined the expression, prognosis, and immune characteristics of the SSTR family members in COAD. Using public databases and bioinformatic methods, we examined SSTR family gene expression and its link to clinical features in COAD. Our findings offer new knowledge about the prognosis and biological roles of SSTRs in COAD.

Tumor immune estimation resource database

The Tumor Immune Estimation Resource version 2.0 (TIMER2.0) database (https://cistrome.shinyapps.io/timer/) has three main components: immune association, cancer exploration, and immune estimation. TIMER2.0 can analyze relationships between genes and infiltrating immune cells, compare gene expression between tumors and normal tissues across different malignancies, and provide easy-to-use interactive visualizations to help explore the data (Li et al., 2020). Using the TIMER2.0 database, we analyzed SSTR family expression in 41 normal and 457 COAD samples. Additionally, TIMER2.0 was employed to examine associations between the mRNA expression of SSTR family members and cells of the COAD immune infiltrate, including CD4+ and CD8+ T cells, neutrophils, macrophages, dendritic cells (DCs), and B cells. The correlation R-value was determined by Spearman's method with adjustment for tumor purity. Values of p < 0.05 were considered significant.
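The purity adjustment mentioned above is usually implemented as a partial Spearman correlation: both variables are rank-transformed, tumor purity is regressed out of each, and the residuals are correlated. A minimal sketch of that procedure is below; the input arrays are random placeholders, and this is our reading of the adjustment rather than TIMER2.0's exact code.

# Sketch: partial Spearman correlation of gene expression vs. immune
# infiltrate, adjusting for tumor purity (placeholder data).
import numpy as np
from scipy.stats import rankdata, pearsonr

rng = np.random.default_rng(0)
expr = rng.normal(size=100)            # gene expression (placeholder)
infiltrate = rng.normal(size=100)      # immune-cell infiltration score
purity = rng.uniform(0.3, 1.0, 100)    # tumor purity estimate

def residual_ranks(x, covariate):
    """Rank-transform x, then remove the (rank-)linear effect of covariate."""
    xr, cr = rankdata(x), rankdata(covariate)
    beta = np.polyfit(cr, xr, 1)
    return xr - np.polyval(beta, cr)

r, p = pearsonr(residual_ranks(expr, purity),
                residual_ranks(infiltrate, purity))
print(f"purity-adjusted Spearman rho = {r:.3f}, p = {p:.3f}")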
Human protein atlas

Protein expression information for different cancer types based on immunohistochemical (IHC) analysis is available on the Human Protein Atlas (HPA) portal (http://www.proteinatlas.org), a critical resource for many biomedical research projects (Pontén et al., 2011). In this investigation, we used the HPA IHC data to analyze the protein expression of the SSTR family members in normal and COAD tissues.

University of Alabama at Birmingham cancer data analysis portal

The University of Alabama at Birmingham Cancer Data Analysis Portal (UALCAN) (https://ualcan.path.uab.edu) makes it simple to perform Kaplan-Meier survival analyses based on tumor subgroups, promoter DNA methylation status, and pre-calculated gene/protein expression (Chandrashekar et al., 2017). A stratified analysis was performed based on each patient's cancer stage and nodal status. Student's t-test was used, and values of p < 0.05 were considered statistically significant.

cBioPortal

The cBio Cancer Genomics Portal (http://cbioportal.org) is a freely available tool for interactively exploring multidimensional cancer genomics data sets (Cerami et al., 2012). We used cBioPortal to retrieve a dataset of 594 patients with COAD and conducted co-expression and gene alteration analyses of the SSTR family members.

STRING database

Known and predicted protein-protein association data for a large number of species are available in the STRING database (https://cn.string-db.org/). The database includes physical interactions, functional links, and confidence scores that indicate their reliability. The STRING database was used to evaluate correlations among the SSTR genes.

Cytoscape

High-throughput expression data, other molecular states, and biomolecular interaction networks may be combined using Cytoscape (https://cytoscape.org), an open-source software project (Otasek et al., 2019). A total of 178 commonly mutated SSTR family-related genes were screened from the cBioPortal database and functionally integrated using Cytoscape. Node size represents the degree value of each interacting protein; larger circles indicate a higher degree of interaction.

Metascape database

By leveraging more than 40 independent knowledge sources in one integrated platform, Metascape (https://metascape.org) integrates membership search, gene annotation, interactome analysis, and functional enrichment (Zhou et al., 2019). Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of SSTR family-related genes were conducted using Metascape.

PrognoScan

The correlation between SSTR expression and survival in COAD was analyzed using the PrognoScan database (http://www.abren.net/PrognoScan/) (Mizuno et al., 2009). PrognoScan offers a large collection of freely accessible, clinically annotated cancer microarray datasets that may be used to assess the relationship between gene expression and patient prognosis. Values of p < 0.05 were considered statistically significant.

Biomarker exploration of solid tumors

Validation was conducted using the Biomarker Exploration of Solid Tumors (BEST) portal (https://rookieutopia.com/app_direct/BEST/). BEST was used to analyze the association of the SSTR family with immunotherapy response and prognosis in COAD.

Tissue samples

Twenty-five pairs of paraffin-embedded archival colon cancer specimens and matched adjacent normal tissue samples were obtained from Xiangya Hospital (Changsha, P. R. China).
None of the patients had received any form of therapy, such as chemotherapy, radiotherapy, or immunotherapy, before resection. The research ethics committee of Xiangya Hospital, Central South University, approved the collection of the clinical colon cancer specimens.

Isolation of RNA from formalin-fixed and paraffin-embedded samples

After deparaffinization of the formalin-fixed paraffin-embedded (FFPE) colon cancer or normal samples with xylene, total RNA was extracted using the AmoyDx® FFPE RNA Extraction Kit (Cat. #8.02.0019; AmoyDx, Xiamen, P. R. China).

Quantitative real-time polymerase chain reaction

As described previously, quantitative real-time polymerase chain reaction (qRT-PCR) was performed following RNA extraction and amplification (Ou et al., 2020; He et al., 2021). Table 1 displays the qRT-PCR primer sequences.

Statistical analysis

Statistics for the survival analysis were obtained using the log-rank test, and the associations of the SSTR family with immune infiltration and immune cell type markers were evaluated using Spearman's correlation. Student's t-test was used to compare data from two independent samples. Values of p < 0.05 were considered statistically significant.

Aberrant expression of SSTR family members in COAD

To investigate the changes in SSTR expression levels in various malignant tissues compared to normal tissues, we employed the TIMER2.0 database to assess SSTR transcript levels. In COAD tissues, SSTR1-4 mRNA expression levels were remarkably downregulated, whereas SSTR5 expression was upregulated (Figure 1A). UALCAN (https://ualcan.path.uab.edu), which provides easy access to pre-calculated gene and protein expression data based on tumor subgroups, was then applied to investigate SSTR family gene mRNA expression.
Genetic alteration and functional analysis of the SSTR family in patients with COAD
DNA methylation of COAD genes is a potential epigenetic biomarker for the early detection of COAD. The UALCAN database was used to determine the methylation levels of the SSTR genes in patients with COAD. In contrast to normal tissues, COAD samples had considerably lower levels of SSTR1/5 DNA methylation but remarkably higher SSTR2/4 methylation levels. For SSTR3, there were no remarkable variations between the normal and malignant tissues (Figure 5A). We subsequently used the cBioPortal dataset to investigate genetic alterations in each SSTR family member. All five SSTR family members were altered in patients with COAD, with alteration rates of 6%, 4%, 3%, 10%, and 5%, respectively (Figure 5B). The most prevalent SSTR family abnormalities in patients with COAD were mRNA alterations and mutations (Figure 5C).

Next, we identified co-expressed genes from the cBioPortal database with a cutoff of |log2 fold-change| ≥ 0.7 and p < 0.05, and the co-expression network of key genes linked to the SSTR family was generated using Cytoscape v.3.9.0 (Figure 6A; Supplementary Table S1). The biological functions of the SSTR members and their co-expressed genes were assessed via GO annotation and KEGG pathway analyses using the Metascape database. For the co-expressed genes, KEGG pathway analysis highlighted cell adhesion molecules, the cAMP signaling pathway, and Staphylococcus aureus infection (Figure 6B). The GO findings illustrated that the co-expressed genes were primarily correlated with the pattern specification process, cell-cell signaling mediated by a cell surface receptor pathway, and signaling receptor regulatory activity (Figure 6C). These genes were primarily involved in pattern specification, epithelial morphogenesis, and MAPK cascade regulation (Figure 6D). A molecular function analysis revealed the primary involvement of these genes in signaling receptor regulatory activity, DNA-binding transcription activator activity, and ligand-gated monoatomic ion channel activity (Figure 6E). According to the analysis of cellular components, these genes were often linked to the extracellular matrix and the apical region of the cell (Figure 6F).
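The co-expression screen above applies a simple dual threshold (|log2 fold-change| ≥ 0.7 and p < 0.05) to a cBioPortal-style export. A minimal pandas sketch of that filtering step, applied to a hypothetical table, could look like this:

```python
import pandas as pd

# Hypothetical co-expression table of the kind exported from cBioPortal:
# each row is a candidate gene with its log2 fold-change relative to SSTR
# alteration status and an associated p-value (values are illustrative).
df = pd.DataFrame({
    "gene": ["GENE_A", "GENE_B", "GENE_C", "GENE_D"],
    "log2_fc": [1.1, -0.9, 0.3, 0.75],
    "p_value": [0.001, 0.02, 0.04, 0.20],
})

# Apply the same screening cutoff described in the text:
# |log2 fold-change| >= 0.7 and p < 0.05.
co_expressed = df[(df["log2_fc"].abs() >= 0.7) & (df["p_value"] < 0.05)]
print(co_expressed)  # GENE_A and GENE_B pass; GENE_C and GENE_D do not
```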
We analyzed the marker types of DCs, CD8+ T cells, neutrophils, and tumor-associated macrophages in COAD using the TIMER2.0 database to further investigate the relationship between SSTR family expression and different immune cells (Table 3). We observed a correlation between SSTR1 and CD8+ T cells. We also observed a strong correlation between SSTR2 and CD8+ T cells, B cells, T cells, tumor-associated macrophages (TAMs), M2 macrophages, neutrophils, DCs, T helper type 1 (Th1) cells, Tfh cells, regulatory T cells (Tregs), exhausted T cells, and monocytes. A strong association was observed between SSTR3 and CD8+ T cells, B cells, T cells, TAMs, M1 and M2 macrophages, neutrophils, DCs, natural killer (NK) cells, Th1, Th2, Tfh, Th17, Tregs, T-exhausted cells, and monocytes. SSTR4 expression was moderately correlated with B cells. A moderately strong correlation was observed between SSTR5 and M1 macrophage markers in patients with COAD. Furthermore, these results indicate that SSTR family members are likely to contribute to the immune infiltration of COAD.

FIGURE 6 SSTRs and SSTR-associated molecules co-expressed in COAD and their predicted functions and signaling pathways. (A) The cBioPortal database was used to identify the 178 SSTR-associated co-expressed molecules that are most frequently altered in COAD. SSTR family members and their associated co-expressed genes were used to generate the PPI network using Cytoscape. (B-F) Functional enrichment analysis was used to analyze the biological functions of the SSTR family members and their co-expressed genes. COAD, colon adenocarcinoma; PPI, protein-protein interaction; SSTRs, somatostatin receptors.

Analysis and experimental verification of the potential value of SSTR2 expression in CRC immunotherapy
We performed preliminary experiments to verify the differential expression of SSTR2 in COAD and its correlation with immune cell-associated molecules to further investigate the role of SSTR2 in COAD. To characterize SSTR2 expression in COAD tissues, qRT-PCR analyses revealed that the relative levels of SSTR2 expression in 25 COAD tissue samples were significantly lower than those in the matched adjacent non-tumor tissue samples (p = 0.0087; Figure 8A). We also demonstrated an association between SSTR2 and integrin subunit alpha X (ITGAX), a marker gene of DCs. The qRT-PCR analysis showed that the relative expression levels of ITGAX were significantly lower in the 25 COAD samples than in the matched para-cancerous normal samples (p = 0.0086; Figure 8B). Finally, we found a positive correlation (r = 0.4248; p = 0.0343; Figure 8C) between SSTR2 and ITGAX expressions in the 25 COAD samples.
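The qRT-PCR results above are reported as relative expression levels. A common way to compute such values is the standard 2^(-ΔΔCt) (Livak) method; the sketch below assumes that convention with a generic housekeeping reference gene and hypothetical Ct values, since the paper's exact quantification details are given elsewhere.

```python
def relative_expression(ct_target_tumor, ct_ref_tumor,
                        ct_target_normal, ct_ref_normal):
    """Standard 2^(-ddCt) relative quantification (Livak method)."""
    d_ct_tumor = ct_target_tumor - ct_ref_tumor      # normalize to reference gene
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal                 # tumor relative to matched normal
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for SSTR2 and a housekeeping reference gene in one
# matched tumor/normal pair; a ratio < 1 indicates downregulation in tumor.
fold_change = relative_expression(ct_target_tumor=28.5, ct_ref_tumor=18.0,
                                  ct_target_normal=25.0, ct_ref_normal=18.2)
print(f"SSTR2 tumor/normal fold change ~ {fold_change:.3f}")
```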
Based on the Gene Set Enrichment Analysis findings obtained with the BEST tool, SSTR2 was primarily positively enriched in gene sets responsible for the regulation of antigen processing and presentation, inflammasome complex assembly, regulation of cytotoxicity, and antigen processing and presentation by DCs (Figure 8D), and, in the KEGG analysis, in cytokine-cytokine receptor interaction, the intestinal immune network for IgA production, the chemokine signaling pathway, NK cell-mediated cytotoxicity, the JAK-STAT signaling pathway, the T cell receptor signaling pathway, and the Toll-like receptor signaling pathway (Figure 8E). These results are consistent with the involvement of SSTR2 in functional immune networks in COAD. Furthermore, we analyzed the correlation between SSTR2 and immunomodulators among 10 datasets using BEST analysis, including antigen presentations, immune inhibitors, immunostimulators, chemokines, and chemokine receptors, to better understand the impact of SSTR2 on immunological responses. A substantial positive correlation between SSTR2 expression and the chemokine receptor CX3CR1; the immunostimulators TNFRSF13C and KLRK1; the chemokines CCL16 and CCL1; and the immune inhibitors IL10, BTLA, and KIR2DL1 is shown in Figure 9A. To further test the correlation between SSTR2 and immunotherapy, we examined whether aberrant SSTR2 expression influenced the response to immunotherapy in CRC. As shown in Figure 9B, a positive correlation was observed between the mRNA expression levels of SSTR2 and those of PDCD1 [programmed cell death protein 1 (PD-1)], CD274 [programmed cell death protein ligand 1 (PD-L1)] (Huang et al., 2017), and cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4) in The Cancer Genome Atlas Program dataset. SSTR2 expression was upregulated in chimeric antigen receptor T cell (CAR-T) responders in the Lauss cohort and in anti-PD-1/PD-L1 responders in the Cho cohort (Figure 9C). The areas under the receiver operating characteristic curves for the Lauss and Cho cohorts were 0.933 and 0.782, respectively. This indicates that SSTR2 could discriminate between responders and non-responders to CAR-T and anti-PD-1/PD-L1 therapy (Figure 9D). High SSTR2 expression correlated with better OS in CRC patients receiving CAR-T cells in the Lauss cohort (p < 0.05; Figure 9E).
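The discriminative claims in this paragraph rest on areas under the ROC curve (0.933 for the Lauss cohort, 0.782 for the Cho cohort). As a small sketch of how such an AUC is computed from expression values and responder labels, assuming scikit-learn and entirely invented data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical responder labels (1 = immunotherapy responder) and SSTR2
# expression values; in the paper, AUCs of 0.933 (Lauss) and 0.782 (Cho)
# quantified how well SSTR2 separated responders from non-responders.
response = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
sstr2_expr = np.array([3.2, 2.8, 3.5, 1.1, 1.9, 2.6, 1.4, 2.9, 3.0, 0.9])

auc = roc_auc_score(response, sstr2_expr)
fpr, tpr, thresholds = roc_curve(response, sstr2_expr)
print(f"AUC = {auc:.3f}")  # values near 1.0 indicate strong discrimination
```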
Discussion
SSTRs have great potential for the diagnosis and treatment of cancer, as extensive research has illustrated the aberrant expression of SSTR family members in several malignancies and their involvement in cancer cell proliferation and development (Rogoza et al., 2022; Aboagye et al., 2023). In vitro and in vivo studies suggest that SSTR1, SSTR2, and SSTR5 suppress the growth of pancreatic cancer (Li et al., 2005). High SSTR1 expression can silence and inhibit the proliferative rate of CRC stem cells by reducing the proliferation of acetaldehyde dehydrogenase-positive cells (Modarai et al., 2016). One preliminary study (Sun et al., 2021) confirmed that SSTR1 is a target gene affecting renal cell carcinoma (RCC) metastasis and the associated immune response and could be a prognostic biological marker of and viable therapeutic target for RCC. Another study (Zhou et al., 2009) found that the antiproliferative effects of SSTR2 were both cytostatic (growth suppression) and cytotoxic (apoptosis), acting through the cellular apoptotic machinery, MAPK, and angiogenic signaling molecules in SSTR2-positive and -negative cancers. In the field of drug targeting, the delivery of nanoparticles through receptor-mediated cell interactions has received considerable attention. SSTRs are promising targets for various nanoparticles, with binding facilitated by modifying nanoparticles with specific ligands or coatings (Abdellatif et al., 2018). However, the function of SSTRs in COAD has not yet been thoroughly investigated. We examined five perspectives of interest: mRNA and protein expression levels, clinical characteristics and disease prognosis, genetic mutations, pathway analysis, and immune infiltration. We extensively explained each SSTR family member's biological impact on COAD. Compared to non-tumor cells, we discovered that SSTR1-4 mRNA and protein levels were downregulated in COAD cells. Another novel finding was that the SSTR family is strongly associated with individual clinicopathological stages and nodal involvement in COAD. These data suggest that SSTRs might be correlated with COAD progression and that SSTR family members are potential diagnostic markers for COAD. Additionally, the SSTR gene family is frequently altered in COAD, with changes in mRNA expression among the most prevalent alterations. These findings strongly indicate that the differential expression of SSTR family members may be essential in COAD.
As a receptor in the G protein-coupled signal transduction pathway, SSTR can prevent proto-oncogene activation, inhibit cell proliferation, and inactivate tyrosine kinases through the MAPK pathway, thus preventing cellular proliferation (Chen et al., 2022; Sáez-Martínez et al., 2022). Next, we investigated the molecular and biological functions of SSTR family members. KEGG pathway analysis of SSTRs and their co-expressed genes revealed that cell adhesion molecules and the cAMP signaling pathway were significantly enriched. According to the GO pathway analysis, the cell surface receptor signaling pathway involved in cell-cell signaling and signaling receptor regulator activity was especially associated with SSTRs. The combination of SSTR2 or other subtypes with SST may inhibit DNA synthesis and play an antitumor role through the cAMP, MAPK, and other signaling pathways. This provides a fundamental principle for the development of multi-receptor SST analogs and combined therapy with signal-targeting agents such as mammalian (or mechanistic) target of rapamycin (mTOR) inhibitors (Robertson et al., 2022). Our results indicate that SSTR-related signaling pathways have enormous potential for antitumor immunity. Mounting data illustrate that the infiltration of immunocompetent cells may contribute to tumor development and recurrence as well as the determination of the immunotherapy response and clinical outcome (Gajewski et al., 2013; Han et al., 2022; Zou et al., 2023). SSTRs were considerably linked to six types of immune cell infiltrates: CD4+ T cells, CD8+ T cells, DCs, macrophages, neutrophils, and B cells.

SSTR2 is one of the most abundant SSTRs and a member of the GPCR family (Wu et al., 2020). Studies have shown that SSTR2 is associated with tumorigenesis in stomach and breast cancer and is overexpressed in neuroendocrine tumors (Schulz et al., 2000; Skog et al., 2008; Tchoghandjian et al., 2016; Hope et al., 2018; Mohamed and Strosberg, 2019; Yin et al., 2020). SSTR2 reportedly interacts with the Wnt pathway protein Dvl1 in a ligand-independent way; SSTR2 is then targeted by Dvl1 for lysosomal degradation. SSTR2-targeted therapies may become more effective if this pathway is interfered with and SSTR2 expression in NETs is increased (Carr et al., 2023). Wildemberg showed that low SSTR2 expression can predict the failure of somatotropinomas to respond biochemically to SST analog treatment (Wildemberg et al., 2013). SSTR2 is an Epstein-Barr virus-induced druggable target in nasopharyngeal cancer, and Lechner and others demonstrated the preclinical effectiveness of targeted treatment (Lechner et al., 2021). Researchers have also found that the combination of SST and SSTR2 inhibits cytokine release from immune cells and impacts the tumor microenvironment (TME) (Patel, 1999; Cordova and Kurz, 2020). However, the relationship between SSTR2 and the TME in COAD has not been reported. To further confirm the differential expression of SSTR2 in COAD and its correlation with immune cells, we collected 25 pairs of paraffin-embedded archived COAD specimens and matched adjacent normal tissues. In patients with COAD, SSTR2 expression was significantly downregulated, and SSTR2 expression was positively correlated with ITGAX, a gene associated with DCs. These results suggest that SSTR2 may exert antitumor effects by interacting with DCs in COAD.
Surgical treatment supplemented with postoperative adjuvant chemotherapy can treat early- to mid-stage COAD. Chemotherapy alone or combined with targeted therapy is the primary treatment for patients with advanced COAD. With only a 15% 5-year survival rate, patients with advanced COAD have a poor prognosis. Therefore, new treatments are urgently required to prolong patient survival (Auclin et al., 2017; Dekker et al., 2019). As a novel and potent anticancer therapy, immunotherapy is anticipated to become an alternative treatment option for CRC patients. The treatment of CRC has entered a new era with the advent of immunotherapy as a ground-breaking intervention following surgery in combination with radiotherapy, targeted therapy, and chemotherapy. ICI therapy is the most crucial immunotherapy in the fight against CRC (Zou et al., 2021; Pang et al., 2023).

Immunotherapies other than ICIs have emerged rapidly in recent years, including CAR-T and oncolytic virus-based immunotherapy. The low immune cell infiltration level is the primary cause of the poor immune response in CRC (Hege et al., 2017; Zhao et al., 2022). The most crucial and effective therapeutic method for resolving this issue is CAR-T cell therapy. Zhang et al. (2017) performed a phase I clinical study of CAR-T therapy for CRC with high carcinoembryonic antigen expression: of 10 patients, seven who had progressed on prior treatment achieved stable disease after CAR-T treatment, and two patients showed tumor shrinkage. In addition, Mandriani et al. showed that anti-SSTR CAR-T cells are highly effective target-dependent cytotoxic agents against a range of NET cell lines with different SSTR2/5 expression levels (Mandriani et al., 2022). This study aimed to determine whether abnormal SSTR expression affects the immunotherapy response of COAD. Our results suggest that SSTR2 may serve as an immunotherapeutic target in the treatment of COAD. The Lauss cohort showed a significant upregulation of SSTR2 expression in CRC patients receiving CAR-T cells. In patients with CRC receiving CAR-T and anti-PD-1/PD-L1 immunotherapy, SSTR2 could discriminate between immune responders and non-responders (areas under the receiver operating characteristic curve, 0.933 and 0.782, respectively), and patients receiving CAR-T cells had a better OS, with statistically significant differences. By demonstrating a strong correlation between SSTR2 and immune molecules in CRC, our results support an immunoenhancing function of SSTR2. Additionally, our results suggest that SSTR2 contributes to tumor immunity. Therefore, it is a potential biological marker for anticipating the prognosis and effectiveness of immunotherapy in CRC patients.

Our study provides the first analysis of the relationship between SSTR family expression, tumor immune infiltration, and COAD prognosis, increasing our understanding of the critical functions of these genes as drivers of tumor development and the immune system in patients with COAD. Our study identified the SSTR family as useful biomarkers and therapeutic targets that can be used to develop diagnostic and prognostic approaches to improve treatment outcomes. However, this study has some important limitations that require consideration. In particular, further in vitro and clinical studies are required to validate the likely processes underlying the action of multiple SSTR genes in COAD, the molecular links between them, and their clinical applications.
FIGURE 2 Protein expression levels of SSTR family members in COAD. (A-D) Protein expression levels of SSTR1-4 in COAD versus non-cancerous tissues. COAD, colon adenocarcinoma; SSTR, somatostatin receptor.
FIGURE 3 Relationship between stage and lymph node metastasis of COAD and SSTR family members. (A) Relationship between SSTR family mRNA expression levels and lymph node metastases of patients with COAD. (B) Relationship between SSTR family mRNA expression levels and cancer stage of patients with COAD. *p < 0.05, **p < 0.01, ***p < 0.001 compared with controls. COAD, colon adenocarcinoma; SSTR, somatostatin receptor; UALCAN, University of Alabama at Birmingham Cancer Data Analysis Portal.
FIGURE 4 Prognostic value of mRNA expression levels of SSTR family members in patients with COAD. (A,B) The PrognoScan database was used to analyze the OS and DFS of the SSTR family in patients with COAD. COAD, colon adenocarcinoma; DFS, disease-free survival; OS, overall survival; SSTR, somatostatin receptor.
FIGURE 5 Genetic alterations and DNA methylation levels of the SSTR family members in patients with COAD. (A) DNA methylation changes in the SSTR family members in patients with COAD were assessed by the UALCAN database. (B,C) Summary of the rate of alteration for the SSTR family in COAD (cBioPortal). *p < 0.05, **p < 0.01, ***p < 0.001 compared with control. COAD, colon adenocarcinoma; SSTR, somatostatin receptor; UALCAN, University of Alabama at Birmingham Cancer Data Analysis Portal.
FIGURE 7 Association between SSTR mRNA expression and immune cell infiltration. (A-E) The TIMER2.0 database was used to evaluate the associations between SSTR family members and immune cell infiltration. SSTR, somatostatin receptor; TIMER, Tumor Immune Estimation Resource.
TABLE 1 Primer sequences for qRT-PCR.
TABLE 2 Clinicopathologic parameters of and SSTR family member expressions in COAD.
TABLE 3 Correlations between SSTR family member expressions and immune cell markers.
Medical Student Portfolios: A Systematic Scoping Review

Phenomenon. Medical Student Portfolios (MSPs) allow medical students to reflect on and better appreciate their clinical, research and academic experiences, which promotes their individual personal and professional development. However, differences in adoption rate, content design and practice setting create significant variability in their employ. With MSPs increasingly used to evaluate professional competencies and the student's professional identity formation (PIF), this has become an area of concern.

Approach. We adopt Krishna's Systematic Evidence-Based Approach to carry out a Systematic Scoping Review (SSR in SEBA) on MSPs. The structured search process of six databases, concurrent use of thematic and content analysis in the Split Approach, and comparisons of the themes and categories with the tabulated summaries of included articles in the Jigsaw Perspective and Funnelling Process offer enhanced transparency and reproducibility for this review.

Findings. The research team retrieved 14,501 abstracts, reviewed 779 full-text articles and included 96 articles. Similarities between the themes, categories and tabulated summaries allowed the identification of the following funnelled domains: Purpose of MSPs, Content and structure of MSPs, Strengths and limitations of MSPs, Methods to improve MSPs, and Use of E-portfolios.

Insights. Variability in the employ of MSPs arises as a result of a failure to recognise their different roles and uses. Here we propose additional roles for MSPs, in particular building on a consistent set of content materials and assessments of milestones called micro-competencies. Whilst generalised micro-competencies assess the achievement of general milestones expected of all medical students, personalised micro-competencies record the attainment of particular skills, knowledge and attitudes balanced against the medical student's abilities, context and needs. This combination of micro-competencies in a consistent framework promises a holistic, authentic and longitudinal perspective of the medical student's development and maturing PIF.

Introduction
At a time when medical education is embracing a more personalised approach to knowledge attainment, skills training and the development of professional behaviours, portfolios promise a means for medical students to better understand, reflect upon and actively shape their learning and development 1 . Complementing traditional assessment methods with wider longitudinal appraisals of an individual's growth, portfolios add a personalised dimension to logbooks 4,5 by serving as a repository for written examinations, tutor-rating reports and bedside assessments 6 as well as individual reflections and analyses. Indeed, portfolios offer medical students "a self-regulated, cyclical process in which [they may] mentally revisit their actions, analyse them, cogitate alternatives, [and] try out alternatives in practice" 7 . It is this platform to showcase individual educational, research, ethical, personal and professional development 1,8 and guide specific, holistic and timely feedback and remediation throughout the individual's medical education that underscores growing interest in portfolio use among medical students (henceforth medical student portfolios or MSPs) 4,12 . However, despite their growing traction 13 , MSPs show significant variability in their structure and content.
With local, practical, sociocultural, educational and healthcare considerations prioritising different types of data, the role of MSPs remains limited.

Need for the Review
With MSPs representing a sustainable and effective educational undertaking that provides insight into the medical student's development, needs, values and beliefs that may guide their professional identity formation (PIF), a better understanding of the principles behind their use, the key elements within them and a framework for consistent utilisation is required.

Methods
To determine what is known about MSPs, a systematic scoping review (SSR) is proposed to study the current literature to enhance understanding of their roles and structure. These insights will also help guide the design of a consistent framework for MSPs to be used across different settings, purposes and specialities given their ability to evaluate data 14 from "various methodological and epistemological traditions" 19 . To overcome SSRs' variable methodological steps, guidance and standards, this review adopts the Systematic Evidence Based Approach (SEBA) 20 . A SEBA-guided SSR (henceforth SSR in SEBA) facilitates the synthesis of an evidence-based, accountable, transparent, and reproducible analysis and discussion. Steering this process and boosting accountability, oversight, and transparency, this SSR in SEBA sees an expert team involved in all stages of this review. The expert team comprised medical librarians, local educational experts, and clinicians. SSRs in SEBA are built on a constructivist perspective acknowledging the personalised, reflective, and experiential aspects of medical education and recognising the influence of particular clinical, academic, personal, research, professional, ethical, psychosocial, emotional, legal and educational factors upon the medical student's learning journey, professional development and personal growth 27 . To operationalise the SSR in SEBA, the research team adopted the principles of interpretivist analysis to enhance reflexivity and discussions 18,32 in the six stages outlined in Figure 1 (the SEBA Process).

Stage 1 of SEBA: Systematic Approach
1. Determining the title and background of the review
The expert and research teams determined the overall goals of the SSR and the population, context and concept to be evaluated.

2. Identifying the research question
Guided by the PCC (population, concept and context), the expert and research teams agreed upon the research questions. The primary research question was "what is known about medical student portfolios?". The secondary questions were "what are the components of MSPs?", "how are MSPs implemented?" and "what are the strengths and weaknesses of MSPs?".

3. Inclusion criteria
All peer-reviewed articles, reviews and grey literature published from 1 January 2000 to 30 June 2021 were included in the PCC, and a PICOS format was adopted to guide the research processes 35,36 . The PICOS format is found in Table 1.

4. Searching
A search of six bibliographic databases (PubMed, Embase, PsycINFO, ERIC, Google Scholar and Scopus) was carried out between 1 and 10 September 2021. Limiting the inclusion criteria was in keeping with Pham et al.'s (2014) approach to ensuring a sustainable research process 37 . The search process adopted was structured along the processes set out by systematic reviews.
5. Extracting and charting
Using an abstract screening tool, members of the research team independently reviewed the titles and abstracts identified by each database to identify the final list of articles to be reviewed. Sambunjak et al.'s (2010) approach to 'negotiated consensual validation' was used to achieve consensus on the final list of included articles.

Stage 2 of SEBA: Split Approach
Three teams of researchers simultaneously and independently reviewed the included full-text articles. Here, the combination of independent reviews by the various members of the research teams using two different methods of analysis provided triangulation 41 , while detailing the analytical process improved audits and enhanced the authenticity of the research 42 . The first team summarised and tabulated the included full-text articles in keeping with recommendations drawn from Wong et al.'s (2013) "RAMESES publication standards: meta-narrative reviews" 43 and Popay et al.'s (2006) "Guidance on the conduct of narrative synthesis in systematic reviews" 44 . The tabulated summaries served to ensure that key aspects of the included articles were not lost (Supplementary File 1).

Concurrently, the second team of three trained reviewers analysed the included articles using Braun and Clarke's (2006) approach to thematic analysis 45 . In phase one, the research team carried out independent reviews, actively reading the included articles to find meaning and patterns in the data. In phase two, 'codes' were constructed from the 'surface' meaning and collated into a code book used to code and analyse the rest of the articles in an iterative step-by-step process. As new codes emerged, they were associated with previous codes and concepts. In phase three, the categories were organised into themes that best depict the data. An inductive approach allowed themes to be "defined from the raw data without any predetermined classification". In phase four, the themes were refined to best represent the whole data set. In phase five, the research team discussed the results of their independent analyses online and at reviewer meetings, and 'negotiated consensual validation' was used to determine a final list of themes.

A third team of three trained researchers employed Hsieh and Shannon's approach to directed content analysis and independently analysed the included articles 46 . This analysis involved "identifying and operationalising a priori coding categories". In the first stage, the research team drew categories from Davis et al.'s (2001) "AMEE Medical Education Guide No. 24: Portfolios as a method of student assessment" 47 to guide the coding of the articles. Data not captured by these codes were assigned a new code in keeping with deductive category application. Categories were reviewed and revised as required. In the third stage, the researchers discussed their findings online to achieve consensus on the final codes.

Stage 3 of SEBA: Jigsaw Perspective
As part of the reiterative process, the themes and categories identified were discussed with the expert team. Here, the themes and categories were viewed as pieces of a jigsaw puzzle, and areas of overlap allowed these pieces to be combined to create a wider, more holistic view of the overlying data. The combined themes and categories are referred to as themes/categories. Creating themes/categories relied on the use of Phases 4 to 6 of France et al.'s (2016) adaptation 48 of Noblit and Hare's (1998) seven phases of meta-ethnography 52 .
To begin, the themes and categories were contextualised by reviewing them against the primary codes and the subcategories and/or subthemes they were drawn from. Reciprocal translation was used to determine if the themes and categories could be used interchangeably.

Stage 4 of SEBA: Funnelling Process
To provide structure to the Funnelling Process, we employed Phases 3 to 5 of the adaptation. We described the nature, main findings, and conclusions of the articles. These descriptions were compared with the tabulated summaries. Adapting Phase 5, reciprocal translation was used to juxtapose the themes/categories identified in the Jigsaw Perspective with the key messages identified in the summaries. These verified themes/categories then form the line of argument in the discussion synthesis.

Funnelled Domain 1: Purpose of MSPs
The purposes of MSPs are summarised in Table 2 for ease of review.

Funnelled Domain 2: Content and structure of MSPs
Content in MSPs. Similarly, discussions on the contents of MSPs are limited and have been summarised in Table 3. The content can be broadly categorised into content provided by the institution, content provided by medical students, and feedback/assessments by other stakeholders.

Structure of MSPs. Standardisation within and across portfolios may be achieved through the use of a clear template 4 or set of guidelines 53 . MSPs with a clear delineation of the contents required 54 were found to boost student receptivity 55,56 and enhanced reliability and validity during portfolio assessment 47,55,57 . However, a flexible approach allowing medical students to personalise their MSPs 58 and express themselves more freely 59 facilitates portfolio student-centricity 60,61 and ownership 53 . By encouraging students to incorporate their own content, such as reflective diary entries 55 , reflective essays 57 , video recordings 58 , audio recordings 59 , poetry or art 62 , improvements may be seen in the quantity and quality of their reflections 56 .

Funnelled Domain 3: Strengths and Limitations of MSPs
Given the lack of elaboration, much of the data for this domain is summarised in tables to aid easy review.
Strengths. Strengths of MSPs are highlighted in Table 4.
Limitations. The limitations of MSPs are highlighted in Table 5.

Funnelled Domain 4: Methods to Improve MSPs
The potential methods to improve MSPs are highlighted in Table 6.

Stage 5 of SEBA: Analysis of Evidence-Based and Non-Data Driven Literature
Evidence-based data from bibliographic databases were separated from grey literature such as opinion pieces, perspectives, editorials, letters and non-data-based articles drawn from bibliographic databases, and both groups were thematically analysed separately. The themes from both groups were compared to determine if there were additional themes in the non-data-driven sources that could influence the narrative. In this review, the themes from the two data sources overlap, suggesting no undue influence upon the findings of this review.

Stage 6 of SEBA: Synthesis of SSR in SEBA
The narrative produced from consolidation of the funnelled domains was guided by the Best Evidence Medical Education (BEME) Collaboration guide 89 .

Discussion
In answering its primary and secondary research questions, this SSR in SEBA reveals that MSPs have expanded beyond merely being repositories of assessments and are now seen as a means of triangulating and contextualising assessments and their impact upon individual medical students.
MSPs also allow students, faculty, and institutions to better understand the medical student's needs, abilities, expectations, and aspirations, aiding the provision of personalised mentoring and remediation. However, to meet these wider roles, manageable 87 and "authentic" portfolios that improve levels of engagement 91 are key. Here, authenticity refers to the "extent to which the outcomes measured represent appropriate, meaningful, significant and worthwhile forms of human accomplishments" 47 and serves to enhance the trustworthiness of what is largely qualitative data, and the validity of longitudinal assessments that help to map the development of clinical competency 4 and professional identity formation 4,12,92 . However, current MSPs lack a consistent structure. While broad commonalities exist, including learning objectives, professional expectations and roles to be met, and reflections, learning activities, self-assessments, achievements, and other evidence of competencies, MSPs vary significantly in their focus and content. Yet, these variations and particularities are unsurprising given the different practice settings, structures and program goals established by the host institution. These differences underpin the presence of the different types, "depth" and nature of content prioritised. Inherent variability brought about by the personalisation of longitudinal data, the "choice of materials by the student" 54 and the "individualised selection of evidence" 47 ultimately limits the use of portfolios beyond the confines of a specific institution. This lack of consistency raises concerns about the efficacy of MSPs in providing a holistic perspective of the medical student's personal, academic, clinical, and professional development. We believe that these concerns may be bridged in part by harnessing the ability of current MSPs to capture education and assessment in specific areas of practice. Our findings suggest that current MSPs encapsulate several entrustable professional activities (EPAs) 94 . Each EPA, however, shares common aspects with other EPAs that may not be directly contained within a particular MSP. We believe that it is possible to harness these overlapping aspects to make MSPs more widely applicable. Here, we build upon the notion that micro-credentialling that incorporates "circumscribed assessments" of a specific EPA, such as "interpreting and communicating results of common diagnostic and screening tests", may be extrapolated to other EPAs such as "[communicating] in difficult situations" in a different practice setting 97 .

Table 4 (continued). Strengths of MSPs.
• Guides remediation plans for underperforming students 1,62,91,105,111,116,135,140,142
• Specific to summative portfolio assessment:
  ○ Ensures that students take the portfolio exercise seriously 57,114
  ○ Students will be spurred on to improve themselves should they receive negative feedback 75
  ○ Better demonstrates achievement in competencies such as professionalism, teamwork, and communication skills 111
• Specific to formative portfolio assessment:
  ○ Enables constant improvement through feedback and reflection 6,7,60,71,75,105,116,127,133,140
  ○ Fosters self-motivation 5,69 and intrinsic motivation to reflect 91,115
• Others:
  ○ Encourages students to discuss their private thoughts 103
  ○ Prepares students for postgraduate work: easily transferable when needed in the future 80 to facilitate job applications 103,104 or the acquisition of letters of recommendation for future training 80 ; helps to ease the transition to postgraduate educational practice 74 as portfolios and portfolio assessment are often utilised at the postgraduate level 55
  ○ Improves teaching within undergraduate programs: improves faculty's understanding of students, helping them better understand students' thinking and attitudes 65 and directing discussion during meetings with advisees 65,74 ; identifies gaps in the curriculum 56,101 , such as by providing opportunities for students to evaluate teaching activities 56
  ○ Helps students to develop better rapport with others, including patients 62,118,122 , clinical teams 62 and other students 132

Generalised micro-competencies are small, professional learning milestones that all students need to attain before proceeding to the next competency-based stage. These are requisite knowledge, skills and attitudes all soon-to-be clinicians must have. Personalised micro-competencies, in turn, are determined by the individual's particular goals, training, abilities, skills and experiences. They are determined by the medical student and tutors and must be consistent with institutional codes of conduct and expectations.

Table 5 (continued). Limitations of MSPs.
• Inauthentic:
  ○ Provide only vignettes of a student's journey 59 , and students may hide evidence of their weaknesses 54,59,63,70,104,126 , fail to express their authentic views 63 or even fabricate reflections 78
  ○ Students may also perform poorly under stress during assessments included in their portfolios, such as directly observed work-based assessments 59,137
  ○ Students tend to have a poor self-assessment capacity 72,111,151
  ○ Perceived quality of a portfolio relies heavily on the individual's reflective ability 55,105,121 , which disadvantages students with poor reflective skills
• Subjective:
  ○ Students may create their portfolios differently based on their own interpretation of the purpose of the portfolio 59
  ○ Students' portfolios may unknowingly be judged on irrelevant aspects such as layout and format 4 ; this may be amplified if student identity is not anonymised to the examiners evaluating the portfolios 119
• Overly structured 47,53,57,59,62,64,119 :
  ○ Highly structured portfolios with a rigid format can lead to students including less of their personal observations and reflections, which diminishes the portfolio's capacity for authentic assessment of the student and their development
• Problematic assessment process:
  ○ Poor student understanding 11,53,62,63,73,104,116
  ○ Time consuming: there may be insufficient time for comprehensive assessments in the clinical setting, as taking time to assess students must be balanced with providing quality patient care 59 ; time consuming for assessors 1,5,11,13,53,55,60,63,65,68,74,104,112,116,140 ; human resource intensive 6,112,137,140 ; excessive paperwork 1,55,74,106
  ○ Lack of standardisation among examiners: a poorly standardised assessment procedure leads to poor consensus among assessors 117
  ○ Lack of training for assessors limits the use of work-based assessments within portfolios for assessing student competence 137
• Portfolio implementation 64 :
  ○ Mentors may fail to understand their role as portfolio mentors 64,110 , not engage in reflection personally 64 , have difficulty finding methods to help students 78 , hold a poor impression of portfolios and their role in education 66,78 , or have a poor relationship
with the student 103

Table 6. Methods to improve MSPs.
• Increase mentorship. Mentorship refers to a system where students are assigned to faculty throughout their training and portfolio creation to coach them 54,57,101 , engage them in supportive dialogue 63,64,108,118,148 , provide feedback 1,61,63,64,133 and encourage them to fully engage with their portfolios 74,78,103,131,146 .
  ○ Benefits of mentorship: crucial to portfolio success 4,7,63,64,78,79,87,104,131 because it helps guide the students' reflective process 57,65,131,146 , enhances learning 1,57,74,135 and increases student receptivity towards portfolio use 7,64,103
  ○ Improving the quality of mentorship: train mentors 66,78,87,123 , utilise verified teaching methods that foster reflection 152 , and ensure mentors are able to stretch their students in their reflective practice 78 ; recruit good mentors who are willing to engage students 108 , who understand reflection 129 and their responsibility to teach students how to utilise reflections purposefully 79 , and who are able to build trust and rapport with students 64
  ○ Have a structured mentoring programme to guide portfolio use: some institutions encourage frequent weekly meetings with mentees 108 , while others believe that mentorship can occur as infrequently as two to three times a year 4,57,64 ; keep the student-to-mentor ratio small, such as by having one-to-one interactions 6,70,79
• Encourage portfolio uptake 4,7,53,54,56,57,60,64,65,70,91,102,114,121,123 :
  ○ Increase exposure: students who had been exposed to portfolios for some time 6,91 had more positive attitudes towards them; embed the portfolio into the curriculum 54,64,72,104 and encourage faculty and department staff to reference it in daily practice 77 ; introduce the portfolio early 54,129
• Structure the portfolio appropriately:
  ○ Organise the portfolio based on its purpose 125 ; for a portfolio focused on enhancing learning, the portfolio should include more self-reflection 54,56 and reasoned tasks that demonstrate student learning 56
• Improve the portfolio assessment process:
  ○ Enhance learning through the assessment process: focus assessment on promoting student development 88 through providing useful feedback 121,124
  ○ Enhance reflective learning: ensure assessment does not compromise reflection 54 ; assess students based on the authenticity of their reflections 53
  ○ Institute a central committee to review assessments and ensure ample learning experiences and assessment evidence exist to guide student learning 70

Both generalised and personalised micro-competencies underscore the importance of assessing the student's individual needs and circumstances, which in turn shape the kind of training and support proffered. With expectations differing across practice settings and levels of training, both generalised and personalised micro-competencies must be clearly conveyed to the medical student and tutors in a timely and structured manner. To capture their learning and attainment, MSPs must put forward clear learning plans to align expectations with evidence of diverse learning activities, reflective prompts and diaries, multisource formative and summative evaluations via standardised assessment tools, and constructive feedback. These standardised baseline guidelines will lend clarity to portfolio developers and users. This may boost the latter's trust in and receptivity towards regular portfolio use 55,56 .
We believe that structured and consistent micro-certification of micro-competencies could be extrapolated beyond the initial goals of the MSPs and could provide a longitudinal perspective of the medical student's development. This is especially useful when considering competencies such as interpersonal and communication skills and systems-based practice. Perhaps here, too, the silver lining to changes in medical education practices due to the COVID-19 pandemic can be harnessed. With many institutions incorporating online learning, e-portfolios should be institutionally sanctioned 85 , with a dedicated team of portfolio developers and invested faculty members onboarding and overseeing their implementation. These considerations foreground the need for orientation sessions 10,62,64,67,104 to educate students and faculty on the identified EPAs as well as on the use of generalised and personalised micro-competencies to ensure learning and assessment congruity and objectivity 91,105,106 . Embedding the portfolios into the formal curricula, assigning students mentors trained in reflective engagement, and establishing protected time for regular portfolio reviews would help to facilitate their consistent usage. Concurrently, portfolio use must be part of a continuous quality improvement process, building on feedback 107 and lessons learnt to promote further improvement to MSPs and portfolio assessment 10,11,47,62,78 . Indeed, both forms of micro-competencies underline the need for effective recording and oversight. This is especially important because micro-competencies provide a holistic appraisal of the medical student's progress, achievements, needs and abilities and offer insights into their professional identity formation. Capturing this data in a comprehensive, longitudinal manner, replete with the medical student's reflections, reveals a new dimension to portfolio use.

Limitations
Firstly, the review is limited by the omission of articles not published in English. This creates the risk of missing key papers. Furthermore, the focus on papers published in English led to a focus on studies in North America and Europe. Secondly, while the articles comment on the sentiment of users, including medical students, on the effectiveness of portfolios for learning and assessment, there are a limited number of articles highlighting the perspectives of doctors who previously undertook the task of undergraduate portfolios. Hence, the review is limited by its inability to assess the long-term effectiveness and acceptability of portfolio usage after medical students enter the workforce as practicing medical professionals.
Table 6 (continued). Methods to improve MSPs.
• Increase the number of assessment points, such as by adopting more work-based assessments within the portfolio 137
• Reduce the subjectivity of assessment:
  ○ Create and validate clear rubrics to assist assessors in their grading of students 121
  ○ Increase the number of assessors to achieve better inter-rater reliability 62,72,112,121
  ○ Provide training to assessors 4,53,62,64,67,68,74,85,87,104,111,121,124,135
  ○ Provide opportunities for discussion or feedback between assessors 4,8,63,72,105,111,116,117,124
• Introduce portfolio interviews where students can discuss and elaborate upon their portfolios personally 4,8,53,72,105,116,140 or even assess their own portfolios 5,55
• Improve the self-assessment process: encourage students to include evidence to support their self-assessments to reduce inaccurate self-assessments 111

Conclusion
This SSR in SEBA reveals that if portfolios are to remain relevant and maintain their user-friendliness and accessibility, the future of MSPs must lie in improving assessments and in enhancing the manner in which they are designed. While it is clear that assessment tools need to be enhanced to meet new perspectives on education and training, it is perhaps timely that this SSR in SEBA suggests key changes to portfolio use. In adopting e-portfolios for their accessible and expansive potential, it is clear that a robust and well-supported platform is critical. This platform ought to accommodate all manner of data and assessment results and remain a comprehensive repository of data. Categorised into different, sometimes overlapping, domains, data from this repository may be drawn upon to populate different designs of MSPs. Changing from one goal to another should therefore be simple. Such flexibility will still allow medical students to personalise their e-portfolios in a manner that they feel best represents their development without compromising faculty evaluation. A flexible yet robust e-portfolio such as this will also enable collaborations and facilitate the input of corroborative data from third parties where required. Moving forward, further research may be undertaken to identify the long-term effects of portfolio usage, the manner in which portfolios are evaluated, and the impact they have on professional identity formation throughout and beyond medical school.

Glossary Terms
Professional Identity Formation: An adaptive developmental process that involves the psychological development of an individual, and the socialisation of the individual into appropriate roles and participation at work.
Krishna's Systematic Evidence-Based Approach (SEBA): A structured and accountable approach used to guide analyses to ensure reproducible and robust data.
Split Approach: Combines content and thematic analysis of data to enhance the trustworthiness and depth of an analysis.
Jigsaw Perspective: Overlaps between the themes and categories delineated by content and thematic analysis are considered in tandem, like complementary 'pieces of a jigsaw', allowing a holistic perspective of the data.
Urine Monocyte Chemoattractant Protein-1 and Lupus Nephritis Disease Activity: Preliminary Report of a Prospective Longitudinal Study

Objective. This longitudinal study aimed to determine the urine monocyte chemoattractant protein-1 (uMCP-1) levels in patients with biopsy-proven lupus nephritis (LN) at various stages of renal disease activity and to compare them to current standard markers. Methods. Patients with LN (active or inactive) had their uMCP-1 levels and standard disease activity markers measured at baseline and at 2 and 4 months. Urinary parameters, renal function tests, serological markers, and the renal SLE disease activity index-2K (renal SLEDAI-2K) were analyzed to determine their associations with uMCP-1. Results. A hundred patients completed the study. At each visit, uMCP-1 levels (pg/mg creatinine) were significantly higher in the active group, especially with relapses, and were significantly associated with proteinuria and renal SLEDAI-2K. Receiver operating characteristic (ROC) curves showed that uMCP-1 was a potential biomarker for LN, whereas multiple logistic regression analysis showed that only proteinuria and serum albumin, and not uMCP-1, were independent predictors of LN activity. Conclusion. uMCP-1 was increased in active LN. Although uMCP-1 was not an independent predictor of LN activity, it could serve as an adjunctive marker when the clinical diagnosis of LN, especially early relapse, remains uncertain. Larger and longer studies are indicated.

Introduction
Lupus nephritis (LN) contributes to significant morbidity and mortality in patients with systemic lupus erythematosus (SLE) [1,2]. Renal biopsy is the gold standard for the diagnosis of LN. However, repeated biopsies are not always practical in real-life practice, especially in patients with frequent relapses or in those with associated severe haematologic or cerebral manifestations. Moreover, renal biopsy is a relatively invasive procedure and is associated with a significant albeit small risk, particularly in those patients who may have an undiagnosed coagulopathy, for example, the presence of antiphospholipid antibodies/antiphospholipid syndrome, or who are on anticoagulants [3]. Active LN, especially early flares/relapses, often responds to appropriate treatment with immunosuppressive agents. However, these drugs are themselves associated with significant morbidity and even mortality, whilst uncontrolled LN activity leads to chronic or end stage kidney disease (ESRD) and even death. Current standard laboratory markers such as proteinuria cannot always distinguish between active and inactive renal disease, especially in patients with a recent history of LN [4]. These tests also lack sensitivity and specificity for the monitoring of LN activity, especially early flares. Hence, it is essential to identify noninvasive new biomarkers that are able to predict renal flares/relapses as well as reflect the severity of LN activity. These biomarkers could be followed serially and may enable the timely institution of appropriate treatment before the development of significant inflammatory injury in the kidney.

Monocyte chemoattractant protein-1 (MCP-1) is a chemokine that attracts monocytes/macrophages to sites of inflammation [5]. MCP-1 is produced by mesangial cells, podocytes, and monocytes in response to various proinflammatory stimuli such as tumor necrosis factor alpha (TNF-α). These inflammatory cells and substances subsequently mediate tissue injury and contribute to the development of renal dysfunction.
Moreover, MCP-1 binding has been shown to reduce levels of nephrin, an important protector of kidney cell function [6], whereas antagonists to MCP-1 prevent renal disease progression in murine models. Marks et al. [7] showed that the presence of MCP-1 within the glomerulus correlated with a poor renal prognosis and could identify more severe histological classes of LN in paediatric patients. Several studies have shown that the urine levels of MCP-1 were significantly greater in patients undergoing a renal flare than in patients with stable renal disease or healthy controls [8][9][10]. We have previously reported that, in a cross-sectional study of 100 adult SLE patients with LN, uMCP-1 levels did reflect LN activity [11]. In this paper, we present the preliminary results of our prospective follow-up study, which evaluated uMCP-1 as a potential marker of LN response to treatment and/or early relapse in this same LN patient cohort.

Methods
The same 100 LN patients whose baseline data had been previously reported by us [11,12] were followed in a prospective longitudinal fashion at 2 and 4 months. All patients fulfilled the ACR classification criteria for SLE [13], and eligibility included all those with biopsy-proven LN regardless of activity status at recruitment. We excluded LN patients with ESRD, those who required chronic dialysis or had undergone renal transplantation, those with clinical LN in whom a renal biopsy could not be performed, and pregnant patients. The patients were divided into two groups based on the presence or absence of LN activity, as detailed below. The active LN group included those with active renal disease or nonremission (NR) or who had a relapse/flare. The inactive LN group included those in complete or partial remission (CR/PR). The calculated sample size was 100 patients [11]. Informed consent was obtained from all recruited subjects. The study protocol was approved by the Medical Research and Ethics Committee of the Universiti Kebangsaan Malaysia Medical Centre (UKMMC).

Definition of LN Activity
(A) Active LN was defined by the presence of one or more of the following criteria. (I) Proteinuria with or without any of the following features [14]: (a) presence of haematuria and/or red cell casts, (b) increase in serum creatinine and/or decline in eGFR. Proteinuria was measured as a spot morning urine protein creatinine index (uPCI) and was positive if the value was >100 mg/mmol creatinine (normal range ≤ 20).
(B) Relapse/flare of LN was defined in two ways. (I) At recruitment, relapse was defined as recurrence of renal disease activity after a period of remission of ≥3 months [14]. (II) During this study period with only 4 months of observation (due to time constraints), relapse was defined as an increase in proteinuria and/or haematuria and/or serum creatinine level after 4 weeks of CR/PR, or a decrease in serum albumin level after 4 weeks of CR/PR [14].
(C) Remission was also defined in two ways. (I) At recruitment, remission was defined as absence or reduction of renal disease activity and no change in immunosuppressive therapy for at least 3 months [14]. (II) In this study period with only 4 months of observation, remission was defined as absence or reduction of renal disease activity and no change in immunosuppressive therapy for at least 4 weeks [14].
(D) Inactive LN was defined by the presence of one or more of the following criteria.

The Disease Course of LN. The disease course of LN was categorized at each visit using the definitions modified from Yamaji et al.
[16] and Ruiz-Irastorza et al. [17] (Table 1).

SLE Disease Activity Index and Laboratory Assessment. The SLE Disease Activity Index (SLEDAI-2K) was used to assess lupus disease activity [15]. This index consists of three components: global (score range 0-150), renal (score range 0-16), and extrarenal (score range 0-63). The renal score corresponds to the presence of any one of the following on urinalysis: proteinuria, haematuria, leukocyturia, or urinary red cell casts, after exclusion of stones, concurrent urinary tract infection, or other causes of proteinuria [18]. Laboratory assessment included the following: full blood count, renal function test, estimated glomerular filtration rate (eGFR) using the Modification of Diet in Renal Disease (MDRD) formula, urinalysis, urine microscopy, urine protein creatinine index (uPCI), and serological tests (serum complement 3 and 4 levels (C3, C4) and anti-dsDNA antibody titres (anti-dsDNA Ab)).

Urine Sample Collection. A fresh midstream urine sample from each patient was collected in a sterile container. The urine was then transferred to 3 × 10 mL tubes. These were transported directly to the laboratory, where they were centrifuged for 15 minutes at 1500 g to remove sediments and then frozen in aliquots at −80 °C for later uMCP-1 testing.

Method of Measuring Urinary MCP-1. The CCL2/MCP-1 Quantikine ELISA Kit (R&D Systems, USA) was used for urinary MCP-1 measurement. The Quantikine Human MCP-1 Immunoassay is a 3.5-4.5-hour solid-phase ELISA designed to measure MCP-1 in cell culture supernates, serum, plasma, and urine. It contains E. coli-expressed recombinant human MCP-1 and antibodies against the recombinant factor, and it accurately quantifies recombinant human MCP-1. Results obtained show linear curves that are parallel to the standard curves obtained using the Quantikine kit standards.

Statistical Analysis
Categorical variables are presented as counts (percent). Continuous variables are presented as mean (±standard deviation (SD)) if normally distributed or median (interquartile range (IQR)) if non-normally distributed. Pearson's chi-square test (χ²) was used to compare categorical variables, and a two-sided independent-samples t-test was used to compare normally distributed variables. Nonparametric tests (Mann-Whitney and Kruskal-Wallis tests) were used for non-normally distributed variables. Spearman's correlation coefficient was used to assess the association between uMCP-1 levels and standard laboratory parameters. Receiver operating characteristic (ROC) curves were constructed to determine the performance characteristics of uMCP-1 levels for the detection and prediction of LN activity. The best cutoff value for uMCP-1 was calculated based on maximization of the Youden index (sensitivity + specificity − 1) [19]. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of uMCP-1 as a predictor of LN activity were also calculated. Binary logistic regression analysis was performed to explore independent predictors of LN activity. uMCP-1 and all standard markers of LN activity of the preceding visit with p < 0.05 were included in the regression model. Data were analyzed using SPSS software version 18.0. Probability (p) values of <0.05 were considered significant.

Characteristics of the Study Population. A total of 100 SLE patients with biopsy-proven LN were recruited, and all completed the 4-month observation period. The sociodemographic, clinical, and laboratory data for the active and inactive LN groups are shown in Table 2.
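The cutoff-selection procedure described in the statistical-analysis section (maximizing the Youden index over an ROC curve) can be sketched in a few lines of Python. The code below assumes scikit-learn and uses entirely hypothetical uMCP-1 values rather than the study's data; it simply illustrates how a cutoff such as the reported 3,594 pg/mg creatinine would be derived.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical uMCP-1 levels (pg/mg creatinine) and activity labels
# (1 = active LN); the paper selects the cutoff that maximizes the
# Youden index, J = sensitivity + specificity - 1.
active = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
umcp1 = np.array([5200, 4100, 3900, 6100, 1200, 2800, 900, 3100, 4800, 1500])

fpr, tpr, thresholds = roc_curve(active, umcp1)
youden = tpr - fpr                      # equivalent to sensitivity + specificity - 1
best = np.argmax(youden)
print(f"best cutoff = {thresholds[best]:.0f} pg/mg creatinine")
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```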
Course of LN in the Overall Study Population. At baseline, there were 47 patients with active LN (42 NR, 5 relapses) and 53 with inactive LN. The number with active LN decreased to 29 (27 NR, 2 relapses) at 2 months and to 22 (16 NR, 6 relapses) at 4 months, whereas the number of patients with inactive LN increased progressively from 53 at baseline to 71 (61 CR, 10 PR) at 2 months and to 78 (59 CR, 19 PR) at 4 months. In summary, with time on treatment, the majority of patients with active LN achieved CR/PR, although a few relapses occurred at each follow-up. At each time point, there were significant differences between the active and inactive LN groups with regard to serum albumin (p < 0.01), proteinuria (uPCI, p < 0.001), SLEDAI-2K (global) (p < 0.001), and SLEDAI-2K (renal) (p < 0.001). At end study, serum creatinine had increased. There were no differences between both groups in terms of anti-dsDNA Ab and serum complements (C3 and C4). The detailed comparisons between active and inactive LN groups at each time point are summarized in Table 3. At all time points, uMCP-1 levels were significantly higher in the active group compared to those in the inactive LN group (Table 3). At each visit, uMCP-1 levels were highest in those patients with relapsed LN, followed by the NR group, and the lowest levels occurred in the remission group (CR/PR).

Association between uMCP-1 and Parameters of LN Activity on Follow-Up. The associations of uMCP-1 with parameters of LN activity are summarized in Table 4.

Course of LN in the Group. Two patients were subjected to repeat renal biopsy. In both, the histopathological findings had deteriorated from class II + V (Case 1) and class IV (Case 2) six months earlier to class III + V (both).

uMCP-1 Levels and LN Activity on Follow-Up. The uMCP-1 levels decreased progressively from baseline to 2 months to end of study in response to treatment, especially in those patients who achieved remission (Table 5). uMCP-1 levels were significantly lower in those who attained remission than in those with active LN (p < 0.001 in both).

Lupus Nephritis Relapses. On follow-up, 13 patients in the overall study population relapsed, five at baseline, two at 2 months, and six at 4 months, and were appropriately treated. Their median uMCP-1 levels were highest at the time of relapse compared to pre-relapse levels and decreased in response to treatment (Figure 1). Renal biopsy was repeated in 1/5 who relapsed at baseline and 1/2 at 2 months but in none of the six relapsers at 4 months. Their histological findings had deteriorated from class IV to class V and mixed class IV + V, respectively.

ROC Curve Analysis of uMCP-1 to Identify LN Activity. ROC curves were constructed to assess the potential diagnostic values of uMCP-1 compared with standard blood and urine markers at each visit to identify patients with active LN. At each visit, the area under the curve (AUC) for uMCP-1 was higher than those for serum albumin, serum creatinine, eGFR, anti-dsDNA Ab titres, C3, and C4 for detection of LN activity (Table 6), whereas it was lower than those for proteinuria (uPCI) and the SLEDAI-2K renal score. Thus, uMCP-1 was superior to most of the usual markers used for the monitoring of LN activity but was not as good as proteinuria (uPCI) and the SLEDAI-2K renal score. This is illustrated by the ROC curves at end of study (Figure 2), which show that the AUC for uMCP-1 was very good at 0.87 (95% CI: 0.78-0.96; p < 0.001).
At a maximum Youden index of 0.69, the cutoff value was 3,594 pg/mg creatinine. This gave a sensitivity of 0.90 and a specificity of 0.79. A ROC curve was also constructed for uMCP-1 levels of the previous visit (i.e., at 2 months) to predict LN outcome.

Discussion

We have previously reported in a cross-sectional study that uMCP-1 levels were significantly elevated in patients with active LN compared to those with inactive renal disease [11]. On follow-up of our cohort, uMCP-1 levels were consistently higher in patients with active LN compared to those with inactive LN. The highest uMCP-1 levels were observed in those with renal relapses (n = 13), and their uMCP-1 levels decreased progressively with treatment. These findings are consistent with those reported to date from the few other longitudinal studies in the literature [8,20,21]. The Ohio SLE study followed 80 patients with SLE with and without LN and 28 healthy controls [8]. uMCP-1 levels were significantly higher in patients with renal flares (n = 25) than in those with nonrenal flares (n = 22), SLE renal disease control subjects (n = 15), SLE nonrenal flare control subjects (n = 18), and healthy individuals (n = 28). uMCP-1 levels decreased over several months in patients who responded to treatment but were persistently high in nonresponders [8]. In another longitudinal study (n = 20), Singh et al. [20] reported that uMCP-1 could distinguish those patients with active LN from those with inactive renal disease or stable SLE. During follow-up, uMCP-1 levels decreased significantly in those patients who achieved remission (CR/PR) but did not change in nonresponders [20]. Torabinejad et al. [21] assessed the role of uMCP-1 and urinary transforming growth factor-β2 (uTGF-β2) in a longitudinal study involving 70 SLE patients and 10 healthy controls. They divided the SLE patients into 4 groups: 25 with active LN, 10 with remission LN, 25 with clinically active SLE and without LN, and 10 with SLE in remission and without LN. They demonstrated that the levels of both uMCP-1 and uTGF-β2 were significantly different in these groups. The highest levels were observed in the active LN group while the lowest were found in the controls. Both biomarkers decreased in response to treatment [21]. In our study patients, uMCP-1 levels correlated directly with proteinuria and inversely with serum albumin at recruitment and on follow-up. These findings corroborate those reported in cross-sectional studies by Tucci et al. [22], Chan et al. [23], and Alzawawy et al. [24] and in a longitudinal study by Watson et al. [25], whereas Noris et al. [26] did not find this association. At baseline and at 2 months, we found no association between uMCP-1 levels and serum creatinine or eGFR. Contradictory results have been reported in both cross-sectional studies [22,23,27,28] and a longitudinal study by Rovin et al. [8]. However, at end study, uMCP-1 levels in our patients were found to be associated with serum creatinine and eGFR. Several reasons can account for this last observation: "mild" CKD progression, use of renin-angiotensin system (RAS) blockers in those patients with CR/PR, and relapse of LN (n = 13), which is often associated with an element of acute kidney injury (AKI). We also found significant correlations between uMCP-1 and both global SLEDAI-2K and renal SLEDAI-2K scores at all time points. Many authors had previously reported these findings in both cross-sectional studies [23,27,28] and in the longitudinal study by Rovin et al. [8].
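The correlation analyses referred to throughout this discussion amount to Spearman's rank correlation between uMCP-1 and each standard marker; a minimal sketch follows, with invented placeholder pairs rather than the study data:

```python
# Hedged sketch of the Spearman correlation between uMCP-1 and a
# standard marker (here uPCI); all paired values are placeholders.
import numpy as np
from scipy.stats import spearmanr

umcp1 = np.array([5200, 880, 12400, 310, 7600, 1500, 9800, 420])
upci = np.array([0.35, 0.08, 0.92, 0.05, 0.48, 0.12, 0.71, 0.06])

rho, p = spearmanr(umcp1, upci)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```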
At both follow-up visits, there were no associations between uMCP-1 levels and anti-dsDNA Ab titres. These findings concur with those reported by Watson et al. [25]. At 2 months, uMCP-1 levels were significantly associated with serum complements (C3, C4). The associations between uMCP-1 levels and serological markers remain controversial. El-Shehaby et al. [27] found uMCP-1 levels to be associated with serum complements C3 and C4 but not with anti-dsDNA Ab titres. Alzawawy et al. [24] (cross-sectional study, 30 SLE patients) and Kiani et al. [10] (longitudinal study, 87 SLE patients) reported that uMCP-1 levels and anti-dsDNA positivity were highly associated, whereas Watson et al. [25] (longitudinal study, 64 paediatric SLE patients) reported an association between uMCP-1 and serum C3. At all time points, the ROC curves for uMCP-1 showed it to be a consistently good noninvasive marker for detection of LN activity. AUCs at all three visits were very good and ranged from 0.82 to 0.87, with sensitivities of 0.87-0.90 and specificities of 0.61-0.79. Torabinejad et al. [21] in their mixed SLE/LN cohort reported that uMCP-1 had an AUC of 0.90 with a sensitivity of 0.94 and specificity of 0.80 for diagnosis of LN regardless of SLE activity at baseline. At 4 months, the AUC for proteinuria was 0.89 (p < 0.001), and those for haematuria and leukocyturia were 0.62 (p = 0.07) and 0.62 (p = 0.08), respectively; the AUC for SLEDAI-2K was 0.85 (p < 0.001). Thus, uMCP-1 was better than haematuria and leukocyturia and essentially similar to proteinuria (uPCI) and the SLEDAI-2K renal score for detection of LN activity at 4 months. In our study, uMCP-1 consistently outperformed the usual blood and urinary markers as well as the serological markers, that is, anti-dsDNA Ab titres and serum complements. However, uMCP-1 was not superior to proteinuria and the SLEDAI-2K renal score for detection of LN activity. This may be due to the fact that both proteinuria and the SLEDAI-2K renal score were included as major criteria in the definition of LN activity. We also examined the ROC curve for uMCP-1 of the preceding visit, which showed that a cutoff value of 3,175 pg/mg creatinine had a good sensitivity but lowish specificity for discriminating between active and inactive LN. Given the rather poor positive predictive value of 0.37, a uMCP-1 cutoff level of 3,175 pg/mg creatinine did not have the potential to predict LN activity. However, uMCP-1 levels of less than 3,175 pg/mg creatinine had the potential to predict the absence of LN activity, with a negative predictive value of 94%. In patients with LN active at baseline (n = 47), uMCP-1 levels initially fell significantly in response to treatment in all patients. In those who achieved CR/PR at end study (n = 28), uMCP-1 levels continued to decrease further, whereas in those with persistent NR (n = 19), the uMCP-1, which fell initially, rose again at end of study. In the 13 patients with LN relapse, uMCP-1 levels not only increased concurrently with the relapse but also reached their highest levels and then decreased progressively with treatment. In the one patient with NR at baseline who relapsed at 2 months with increasing proteinuria and rising serum creatinine levels despite increased treatment, her uMCP-1 levels rose in tandem. Interestingly, one patient who was initially in remission but relapsed at end study showed undetectable uMCP-1 levels throughout.
This can perhaps be explained by MCP-1 gene polymorphism, with her lacking the MCP-1 gene just like the MCP-1 knockout mice of the MRL/lpr lupus model [29], or her MCP-1 gene could have undergone mutation. Kim et al. [30] and Tucci et al. [22] had earlier reported MCP-1 gene polymorphism in SLE patients with LN, except that these authors had reported on the dominant allele and its predisposition to LN. Kim et al. [30] reported that a genetic polymorphism in the 5′ flanking region of the MCP-1 gene is associated with LN in SLE patients. Tucci et al. [22] reported that SLE patients with an A/G or G/G MCP-1-2518 genotype have a higher risk of developing LN. Multiple logistic regression analysis showed that only proteinuria and serum albumin were independent predictors of LN activity or relapse, but not uMCP-1. This may again be due to the fact that both proteinuria and serum albumin were included in the definition of LN activity. We hypothesize that had the definition also incorporated the histological class as well as the activity index (AI) and chronicity index (CI) of recent renal biopsies, and had these parameters been entered into the regression model, uMCP-1 could well have emerged as an independent predictor. In this context, Chan et al. [23] found that uMCP-1 mRNA was significantly higher in patients with active LN than in those with inactive LN, those with inactive nonrenal SLE, and healthy controls. uMCP-1 mRNA correlated significantly with SLE disease activity indices and with the histological AI. However, uMCP-1 as measured by ELISA did not correlate with the histological AI. Alternatively, the diagnostic performance of uMCP-1 could be improved when measured by the conventional assay method (ELISA) in combination with other urine proteins, as demonstrated by Susianti et al. [31], or by using multiplex bead assays (Luminex), which are able to detect a large panel of different cytokines in a single blood or urine sample [32]. In the literature, there are some data available on the use of multiplex bead assays for blood cytokine levels but very little data for urine cytokine levels [33]. Further studies are needed to validate this approach for the measurement of both blood and, particularly, urine cytokines [33]. Susianti et al. [31] assessed the role of urinary TGF-β1, MCP-1, NGAL, and IL-17 in adults with LN (n = 70). The patients were divided into 3 groups: 38 with severe LN (class III-IV LN patients), 12 with mild LN (class I-II LN patients), and 20 healthy controls. All biomarkers were measured by ELISA using a human kit for each biomarker. The authors found that all four biomarkers had good diagnostic performances. uNGAL had the best sensitivity and specificity, followed by uMCP-1, uIL-17, and uTGF-β1. The best sensitivities and specificities were shown by the combination of uTGF-β1 and uNGAL, followed by uMCP-1 and uNGAL. As part of the overall thesis project, we have also compared uNGAL and uMCP-1 in this patient cohort and found that both biomarkers showed good performances for detection of LN activity (data not shown and not previously published) [34]. However, the AUC values as well as the sensitivities and specificities for uMCP-1 were greater than those for uNGAL. Thus, uMCP-1 appears superior to uNGAL as a noninvasive diagnostic marker for active LN. Nonetheless, these markers in combination may be superior to either used in isolation. The performance of uMCP-1 can also conceivably be improved by using one of the systems biology approaches, "omics".
" In general, these approaches are used for the universal detection of genes (genomics), mRNA (transcriptomics), proteins (proteomics), and metabolites (metabolomics) in a specific biological sample in a nontargeted and nonbiased manner [35]. He et al. [36] recently described the application of omics-based methodology for the study of kidney diseases. They discussed omics data integration in terms of improving early detection, predicting disease progression, and monitoring treatment response. Additionally, the omics tools may also improve our understanding of LN renal regulatory events and help identify new biomarkers and therapeutic targets [37]. In this modern era with the establishment of specialized SLE/LN centres, relapses and/or reactivation and/or NR have reduced in frequency and severity. Nonetheless, these remain a major issue in the management of LN patients. One reason for this is that the natural course of LN is typified by relapseremission and to perform repeated renal biopsies for each LN relapse not only is highly traumatic but may lead to complications and is probably unethical beyond a certain maximal number in a given time frame. Thus, serial uMCP-1 monitoring in conjunction with the usual clinical parameters can obviate repeated "invasive" renal biopsies. The other main reason for LN and/or reactivation and/or NR is that patient noncompliance which has only been recently recognized. Many studies have shown that significant nonadherence to medications occurs not only in renal transplant patients [38][39][40] but also in lupus patients leading to adverse outcomes [41][42][43]. In our cohort, several patients were nonadherent to the prescribed dose of corticosteroids or immunosuppressive drugs. In addition, they were also taking herbal and/or traditional medications. These included 3/13 of the relapsers and several with NR at recruitment. Despite repeated counseling on the importance of adherence to prescribed medications, one patient with NR at recruitment remained recalcitrant and suffered a relapse at end of study. The main limitation of this study was the time lag between urine collections for uMCP-1 with initial renal biopsy. Thus, it was not possible to correlate uMCP-1 with the histological classes of LN. Another limitation was the (still) relatively small number of patients recruited and the short followup of only 4 months due to cost (predominantly) and time constraints. In conclusion, uMCP-1 levels were markedly increased in those patients with active LN in particular those with renal relapse and correlated significantly with LN activity. uMCP-1 was able to distinguish active LN and/or relapse from inactive renal disease. It had consistently good diagnostic performances with a good sensitivity and moderate specificity for detection of LN activity and/or relapse. It also had a good sensitivity albeit lowish specificity for prediction of LN activity and/or relapse. Perhaps the usefulness of this biomarker could be improved by incorporating several other new markers currently also under study into a panel for assessing LN activity, somewhat similar to that recently validated by the FDA (USA) for acute kidney injury (AKI, NephroCheck). NephroCheck identifies the presence of 2 proteins (insulin-like growth-factor binding protein 7 (IGFBP7) and tissue inhibitor of metalloproteinases (TIMP-2)) in the urine of AKI patients. 
Although uMCP-1 was not an independent predictor of LN activity, it could serve as an adjunctive marker if the clinical diagnosis of LN activity remains uncertain. Additionally, it may identify early relapse of LN, thus facilitating improved grading of LN activity in this complex disease, leading to earlier treatment and better outcomes. A larger, prospective, longitudinal study with a follow-up of at least 2-3 years, recruiting patients at the time of their renal biopsies, is indicated.

Disclosure

The paper has been seen and approved by all authors and it is not under consideration for publication elsewhere in a similar form.
Genetic determinants for the racial disparities in the risk of prostate and testicular cancers

Background: A worldwide higher incidence of prostate cancer and lower incidence of testicular cancer in men of African ancestry compared to European ancestry has been observed previously. However, the underlying mechanisms accounting for these observations are largely unknown.

Methods: The current study analyzed previously reported SNPs associated with either prostate cancer or testicular cancer to examine whether the risk allele frequency could help us understand the observed incidence disparities in men of African ancestry and European ancestry. Both t-test and regression analyses were performed.

Results: Here we show that men of African ancestry are more likely to have risk alleles of prostate cancer and less likely to have risk alleles of testicular cancer compared to men of European ancestry.

Conclusions: Our findings suggest that genetic factors may play an important role in the racial disparities in the risk of prostate and testicular cancers.

It has been observed that men of African ancestry have a higher incidence of prostate cancer and lower incidence of testicular cancer compared to men of European ancestry. However, little is known about the underlying mechanisms accounting for these observations. The current study compares frequencies of all genetic alterations associated with risks of prostate cancer or testicular cancer between the two racial groups. Our findings suggest that differences in the frequencies of genetic alterations between the groups may help to explain the racial disparities in the risk of prostate and testicular cancers.

Prostate cancer (PCa) and testicular cancer, also known as testicular germ cell tumor (TGCT), are common cancers diagnosed in men within the United States and globally 1,2. The patterns of incidence between these two cancer types differ greatly between men of different geographical origin, age group, race and ethnicity 1. In regard to race and ethnicity, interestingly, prostate cancer and testicular cancer display opposite trends in incidence in men of European ancestry and African ancestry. Studies have shown that men of African ancestry have disproportionately higher incidence and mortality rates in prostate cancer compared with men of European ancestry 3, whereas men of European ancestry are more likely to be diagnosed with, and have higher rates of mortality from, testicular cancer than men of African ancestry 4. However, little is known about the underlying mechanisms accounting for these observations. Genetic factors play a major role in cancer etiology. In the past decade, many cancer-related genetic variants such as single nucleotide polymorphisms (SNPs) have been identified in genetic association studies, especially genome-wide association studies (GWAS). The current study analyzed previously reported SNPs associated with either prostate cancer or testicular cancer to examine whether the risk allele frequency could help us understand the observed incidence disparities in men of African ancestry and European ancestry. Our results show that men of African ancestry are more likely to have risk alleles of prostate cancer and less likely to have risk alleles of testicular cancer compared to men of European ancestry, which suggests that genetic factors may play an important role in the racial disparities in the risk of prostate and testicular cancers.

Methods

Data collection.
Literature search was performed using combinations of key words including single nucleotide polymorphism (SNP), genome-wide association study (GWAS), genetic risk variant, prostate cancer, and testicular cancer. Genome-wide association studies and meta-analyses of genome-wide association studies were used to collect all SNPs associated with testicular cancer and prostate cancer. SNPs associated with these two cancer types at an adjusted statistically significant level (p < 0.05) were collected. Risk allele frequencies of these identified SNPs in populations of African and European ancestry were then obtained from 3 databases: the 1000 Genomes Project, the Allele Frequency Aggregator (ALFA), and the Genome Aggregation Database (gnomAD). The 1000 Genomes Project database contains data for 2,504 individuals from 26 populations 5. The ALFA database has over 2 million subjects from 12 diverse populations 6. GnomAD has 125,748 exomes and 15,708 genomes from unrelated individuals sequenced, totaling 141,456 individuals 7. The numbers of individuals with available allele frequency data for each SNP in these 3 databases are included in Supplementary Data 1, 2.

Calculation of allele frequency. Allele frequencies in different databases may vary slightly, and the frequency of each risk allele (F) was calculated by averaging frequencies in these 3 databases, adjusted for sample sizes. To further account for the attributable risk of the risk alleles, the weighted frequency (F_w) was calculated using the equation F_w = F × OR/OR_max, where OR is the corresponding odds ratio of the risk allele and OR_max is the largest odds ratio among risk alleles.

Statistics. Average weighted frequencies of risk alleles were then compared between the 2 racial/ethnic groups using a one-tailed Student's t-test, because the testing hypotheses have one direction of interest for each cancer type. The t-test was conducted using GraphPad Prism (version 8.2.1). Deming regression analysis was performed using GraphPad Prism (version 8.2.1) to find the line of best fit by accounting for frequency variations on both the x- and the y-axis. Regression lines from both cancer types were compared to a standard regression line that has equal allele frequencies in both African and European populations (slope = 1). Data sources for these analyses are included in Supplementary Data 3, 4. To further consider the potential effect of SNPs in linkage disequilibrium (LD) on our analyses, we interrogated LD correlations for SNPs located on the same chromosome using a web-based tool (https://ldlink.nci.nih.gov) that uses subjects from all available ethnic groups and draws data from the 1000 Genomes Project. r² > 0.8 was used as the threshold to determine LD, and all linked SNPs are indicated in Supplementary Data 1, 2. Average allele frequencies of the SNPs in LD were calculated and used in the analysis. However, we did not combine allele frequencies of linked SNPs if (1) they are located in different protein-coding genes, (2) they are located in functional regions (e.g., 3′-UTR), or (3) their frequencies between risk and reference alleles show opposite directions (risk allele frequency is higher in one ethnic group and lower in another).

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
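A minimal sketch of the core calculations above, with placeholder SNP data: the sample-size weighting, the one-tailed test direction, and the use of scipy's orthogonal distance regression (a stand-in for GraphPad's Deming fit under equal error variances) are all assumptions of this illustration, not the authors' pipeline:

```python
# Sketch of F_w = F * OR / OR_max, the one-tailed group comparison, and
# an errors-in-both-variables slope fit. All numbers are placeholders.
import numpy as np
from scipy import stats, odr

def weighted_frequency(freqs, ns, or_snp, or_max):
    """Sample-size-weighted mean frequency over the 3 databases, scaled
    by the attributable-risk factor OR/OR_max."""
    return np.average(freqs, weights=ns) * or_snp / or_max

fw = weighted_frequency(freqs=[0.44, 0.46, 0.45],
                        ns=[2504, 2_000_000, 141_456],
                        or_snp=1.3, or_max=2.1)

# one-tailed t-test (H1: African frequencies higher, prostate cancer case)
fw_afr = np.array([0.48, 0.41, 0.52, 0.44, 0.50])
fw_eur = np.array([0.42, 0.40, 0.45, 0.41, 0.44])
t, p = stats.ttest_ind(fw_afr, fw_eur, alternative='greater')

# Deming-like fit via orthogonal distance regression (equal x/y errors),
# then compare the fitted slope against the standard line (slope = 1)
fit = odr.ODR(odr.RealData(fw_eur, fw_afr),
              odr.Model(lambda b, x: b[0] * x + b[1]),
              beta0=[1.0, 0.0]).run()
slope, slope_se = fit.beta[0], fit.sd_beta[0]
print(f"F_w = {fw:.3f}; t = {t:.2f}, p = {p:.3f}; "
      f"slope = {slope:.2f} +/- {slope_se:.2f}")
```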
Results and discussion

A total of 226 risk SNPs significantly associated with prostate cancer and 79 risk SNPs significantly associated with testicular cancer were identified (Supplementary Data 1, 2). Most SNPs included in our analysis were obtained from GWAS, replication studies, and meta-analysis studies. Two SNPs with conflicting results from two different studies were excluded from further analysis. Data on risk and reference alleles, odds ratio (OR), confidence interval (CI), adjusted p-value, gene symbol, ethnic group, number of cases and controls, and citations of the original publications were collected and included in the supplementary data files. When there were multiple ORs present in previous studies, we only presented the ones from meta-analysis studies. When a SNP was investigated in multiple ethnicity groups, we chose to present the result that showed the most significant OR. While some SNPs were identified and verified in multiple races/ethnicities, most study subjects were individuals of European ancestry. Due to the low power of small sample sizes, differences in allele frequencies, or different linkage disequilibrium patterns, non-European populations in many previous studies did not show significant associations. The hypothesis that a higher/lower incidence of prostate/testicular cancer could be associated with a higher/lower average frequency of risk alleles was tested by comparing risk allele frequencies between European and African populations. For prostate cancer, the average risk allele frequency (F) was significantly higher in men of African ancestry (45%) compared to men of European ancestry (42%) (n = 226; p = 0.045). After considering the odds ratio of risk alleles, the analysis of weighted frequencies (F_w) remained highly significant (n = 226; p = 0.025) (Fig. 1a). On the contrary, the average risk allele frequency of testicular cancer was significantly lower in men of African ancestry (46%) compared to men of European ancestry (51%) (n = 79; p = 0.030). The weighted analysis showed the same significant difference (n = 79; p = 0.014) (Fig. 1b). Moreover, frequencies of all risk alleles in African (Y axis of Fig. 2) and European (X axis of Fig. 2) populations showed significant linear regression relationships in both prostate cancer (Y = 1.152X − 0.011, p = 0.0039) and testicular cancer (Y = 0.705X + 0.0367, p < 0.0001). The differences between the slopes were highly significant: the regression line of the prostate cancer data (slope = 1.152, p = 0.0039, Fig. 2a) skewed above the standard line towards the African population, and the regression line of the testicular cancer data (slope = 0.705, p < 0.0001, Fig. 2b) skewed below the standard line towards the European population. To further analyze whether a particular locus or outlier affected our results, we removed certain data points that could potentially skew the relationships. For the secondary prostate cancer analysis, we removed 31 SNPs located at the chromosome 8q24 locus because multiple studies have suggested that variants in the 8q24 region are significantly associated with prostate cancer. After the removal, differences in risk allele frequencies between African and European ancestry were insignificant, suggesting that genetic variants at the chromosome 8q24 locus are major contributors to the racial difference in prostate cancer incidence.
Similarly, we removed 4 outliers (rs995030, rs1508595, rs4474514, rs3782179) that skewed down the regression line and found no significant difference between European and African risk allele frequencies of testicular cancer (p-value = 0.157), suggesting that these 4 SNPs contribute the most to the deviation of the slope. Interestingly, 3 of the 4 SNPs (rs995030, rs4474514, rs3782179) are located in the KITLG gene, which previous studies have found to be involved in the development of TGCTs and which presents a strong specific risk factor independent of spermatogenic function 8. However, while KITLG might play a role in TGCT development, few studies have investigated testicular cancer risk allele frequency differences between individuals of African and European ancestries, except for KITLG's significant associations with pigmentation of hair, eye, and skin between Africans and Europeans 9,10. Thus, potential functional roles of KITLG in testicular cancer, especially between different racial groups, warrant further investigation. After consideration of SNPs in LD, results from the t-test remained significant for both prostate cancer (p-value = 0.019) and testicular cancer (p-value = 0.048). We also conducted the analyses without considering SNP locations (in potential functional regions). We found that the t-test result for prostate cancer remained significant (p = 0.036) and the t-test for testicular cancer became insignificant (p = 0.142). For the testicular cancer analysis, SNPs in LD are all located in the KITLG gene. After taking average allele frequencies of these SNPs, the number of data points was reduced, and the t-test result was affected. However, this finding provides further evidence of the importance of the KITLG gene in testicular cancer etiology. Results from both the t-test and the regression analysis are consistent with each other, which indicates that men of African ancestry are more likely to have risk alleles of prostate cancer and less likely to have risk alleles of testicular cancer compared to men of European ancestry. These findings suggest that genetic factors could partially explain the greater burden of prostate cancer on men of African ancestry and the higher incidence of testicular cancer in men of European ancestry compared to other racial/ethnic groups.

Fig. 1: Comparison of risk allele frequencies of prostate and testicular cancers between men of African descent and European descent. (a) Average weighted risk allele frequency of prostate cancer was significantly higher in men of African descent compared to men of European descent (n = 226; p = 0.025). (b) Average weighted risk allele frequency of testicular cancer was significantly lower in men of African descent compared to men of European descent (n = 79; p = 0.014).

Fig. 2: Regression lines of risk allele frequencies between African and European descent for prostate and testicular cancers. (a) The regression line of the prostate cancer data (slope = 1.152) skewed above the standard line (slope = 1) towards the African population. The differences between the slopes were highly significant (n = 226; p = 0.0039). (b) The regression line of the testicular cancer data (slope = 0.705) skewed below the standard line (slope = 1) towards the European population. The differences between the slopes were highly significant (n = 79; p < 0.0001).
Our findings are consistent with results from a newly published work on prostate cancer, which compared allele frequencies of 269 prostate cancer loci and the distribution of polygenic risk scores (PRS) for prostate cancer between African and European men 11. Cancer is a complex disease that normally involves multiple genes and environmental factors in its etiology. Therefore, in addition to understanding the influence of genetic factors, social and environmental factors also need to be considered when analyzing the observed racial and ethnic disparities in the risk of prostate and testicular cancers. We realize that SNPs detected from association studies may be a proxy for the functional ones in the region. It is important to compare frequency data of functional SNPs between human populations for testing our hypothesis. However, this approach is not feasible for the current study due to the limited information available for the functional SNPs in the LD chromosomal regions and the lack of functional data available for many SNPs identified from association studies. Future mechanistic investigations of genetic variants in the identified genes will facilitate the selection of SNPs for such comparisons between different racial groups.
Ticks Prevalence and Possible Risk Factors Assessment on Domestic Dogs in Quetta District Balochistan, Pakistan

Ticks and tick-borne diseases have always been a problem for animals and humans. This study aimed to assess, on the basis of univariable analysis, the effect of risk factors on the number of ticks parasitizing domestic dogs. This research began in April and ended in July 2019. Most ticks, identified based on morphology, belonged to Rhipicephalus sanguineus (45.76%), followed by Rhipicephalus (Boophilus) microplus (32.85%), Hyalomma dromedarii (10.15%), Haemaphysalis spp. (7.01%), and Hyalomma anatolicum (4.24%), respectively. Through the questionnaire, answers relating to various risk factors associated with tick infestation were collected and discussed. The questionnaire results revealed that most of the risk factors were nonsignificant (p > 0.05), except tick infestation (burden) on the host animal. The paper is extracted from the first author's M.Phil thesis.

Introduction

Ticks are considered to be one of the arthropod vectors which transmit diseases at the medical as well as veterinary level, causing a detrimental impact on human beings in terms of their health-associated issues [1]. Hard ticks are hematophagous ectoparasites of almost all vertebrates worldwide. Their medical importance is increasing day by day due to the transmission of viral, bacterial, and protozoan infections, known as tick-borne diseases (TBDs) [2,3]. They are mostly found attached to certain body parts of the host, such as the head, neck, ear, abdominal region, perineal region, or the inside of the fore-limbs and hind-limbs [4]. Ticks, particularly those belonging to the family Ixodidae, are globally important as they attack the skin and its tissues directly, causing great damage to the host [5]. Ticks of the Ixodidae family infect a large number of hosts, and their population size is dependent upon temperature, humidity, and host-searching ability [6]. The reproduction and life stages of ticks are dependent upon certain factors such as a favorable environment and accessibility to the host. Moreover, some ticks are generally recognized for their adaptability to different types of climatic conditions and habitats; such adaptive features are responsible for their survival and successful reproduction [7,8]. Ticks are regarded as vectors of pathogenic diseases of both humans and animals, according to a previous study conducted in Pakistan [9]. In Pakistan, TBDs have a deleterious effect on both humans and animals, including Crimean-Congo hemorrhagic fever (CCHF) [10], a fever caused by a tick-borne virus. Tick-borne diseases such as theileriosis and babesiosis, which are known to be dreadful blood parasitic diseases, have been reported in water buffaloes and cattle in Pakistan [11]. Dogs are the most commonly owned companion pets throughout the world. They are considered to have a close relationship with humans or with their territory, are adapted to human habitation, and may contribute to the physical, social, and emotional well-being of their owners [12]. Infested dogs may carry ticks into the environment surrounding them and can transmit these to humans, which contributes to major public concerns and health issues [13]. The brown dog tick, Rhipicephalus sanguineus, is the most prevalent tick species reported from Mediterranean countries, Latin America, Africa, and most Asian countries [14,15,16,17].
Hyalomma anatolicum has been reported from Iran and Pakistan [18]. Haemaphysalis are ixodid ticks most common in temperate areas, particularly in Asian countries [19]. The present study aimed to determine the tick species associated with the detrimental impact of tick-borne pathogens causing TBDs. Besides, risk factors associated with tick infestation, along with seasonality, were recorded and analyzed. Furthermore, the study motivates understanding of the importance of implementing an effective tick eradication program and control strategies for domestic dogs.

Preliminary Studies. We used different online tools such as Mendeley and Google Scholar to search relevant research articles published on the topics of prevalence, population, seasonal dynamics, tick infestation, and risk factor assessment. We focused our attention on articles published recently, between 2010 and 2019. We studied about 200 research articles; of these, only 35 are cited in this study.

Area search and sample collection. Quetta is the most populous district of Balochistan. It covers 1,352 sq mi and is surrounded by a series of mountains. Quetta is 5,510 feet above sea level. Four different areas were chosen because dogs were readily available in every household. All house data were estimated using the nearest neighbor method. Tick collection continued for four months, beginning in April-May and ending in June-July 2019. The collection of samples started at 11 am and continued till 6 pm in the selected localities. A total of 69 domestic dogs were clinically examined by expert veterinarians for the presence of ticks and their possible infestation. The dogs were handled during tick collection in compliance with Pakistan's Prevention of Cruelty to Animals Act, 1890. Dogs presenting 10-20 ticks or more were designated as highly infested, while those below this range were characterized as low-level infested animals. Fine forceps were used to capture ticks from the attachment site and put them into a 50 mL Falcon tube containing 70% ethanol.

Identification of ticks. This research is the first attempt on dog ticks from this region; therefore, the co-authors of this research focused on the identification of adult ticks only, while a manuscript on the other life stages is in preparation. Taxonomic identification was completed in two phases. In the first phase, similar ticks were pooled into separate tubes using a stereoscopic compound microscope (Olympus CH-10, Japan). In the next phase, their permanent slides were made. Then, according to morphological features such as the basis capituli, small punctations, and the scutum, the ticks were identified using the available taxonomic keys [14,20] under a Leica DM4000B microscope (Leica Microsystems GmbH, Wetzlar, Germany) furnished with a digital camera (Leica) at 40X magnification. The ticks were dehydrated by passing through different grades of alcohol (i.e., 20%, 50%, 75%, and 100%) and prepared for Scanning Electron Microscopy (SEM, Hitachi S3400-N, Type-II) at the Centre of Excellence in Vaccinology and Biotechnology (CASVAB), Quetta. Male and female hard ticks were separated after identification of each genus based on the scutum on the anterior dorsum. Sample size and the percentage of infestation were calculated using the standard prevalence formula.

Statistical Analysis. The monthly prevalence of tick species was estimated using the Chi-square test. The summary of the possible risk factor assessment is given in Table 3.
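Since the formula itself did not survive extraction, the following is a minimal sketch assuming the standard prevalence definition, together with the Chi-square and odds ratio calculations of the univariable risk factor analysis; all counts other than those quoted in the text are placeholders:

```python
# Sketch of prevalence, a monthly Chi-square test, and a 2x2 odds ratio;
# monthly counts partly echo the text, the rest are placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def infestation_percent(n_infested: int, n_examined: int) -> float:
    """Percentage of infestation = (infested / examined) * 100."""
    return 100.0 * n_infested / n_examined

print(f"{infestation_percent(52, 69):.1f}% of examined dogs infested")

# April-July counts for two species (only some cells are from the text)
counts = np.array([[21, 60, 75, 116],    # Rhipicephalus sanguineus
                   [21, 77, 55, 50]])    # R. (Boophilus) microplus
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")

# univariable risk factor: 2x2 table of factor presence vs infestation
table = np.array([[30, 10],              # factor present: yes / no
                  [22, 7]])              # factor absent:  yes / no
odds_ratio, p_or = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_or:.3f}")
```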
Currently, no dog vaccine is available in Balochistan province; therefore, the calculated values for this parameter were recorded as non-significant (p > 0.33, OR = 0.93). The Animal Husbandry Department is present in Quetta, but its policies to control ticks on livestock are not fully implemented (p > 0.08, OR = 0.47). The role of NGOs was also not significant in our studies (p > 0.45, OR = 1.75). Dog owners not having their dogs vaccinated was found to be another non-significant parameter (p > 0.33, OR = 0.93). Tick infestation (burden) was found to be a statistically important parameter (p < 0.001, OR = 17.68). All these factors indicate that tick prevalence is increasing rapidly in the Quetta district. Figure 3 shows the tick prevalence in each month. Rhipicephalus sanguineus (n=21) and Rhipicephalus (Boophilus) microplus (n=21) were collected in higher numbers during April compared to other species. Rhipicephalus (Boophilus) microplus (n=77) was captured in the highest number during May, while the lowest number was seen in Hyalomma anatolicum (n=2). Likewise, Rhipicephalus sanguineus was collected in the highest number (n=75) in June. Peak infestation of the tick species was observed during July, when Rhipicephalus sanguineus was again caught in a large number (n=116) from the domestic dogs.

Discussion

In this study, the most prevalent tick species found on dogs was Rhipicephalus sanguineus (45.76%). As a host, the dog favors the life-cycle of this brown-colored tick. Our study correlates with previous studies, where Rhipicephalus sanguineus was found to be one of the most abundant dog ticks, with 92.5% prevalence [21]. According to recent updates [22,23,24] on the brown dog tick (i.e., Rhipicephalus sanguineus), it has been recognized as the key vector for the rapid spread of Babesia vogeli and Babesia gibsoni in Taiwan. Moreover, Dantas-Torres [25] observed that Rhipicephalus sanguineus is the most abundant tick species throughout the globe and is considered to be one of the most prevalent ectoparasites on dogs. The present study is also in agreement with Changbunjong et al. [17], who reported Rhipicephalus sanguineus as one of the leading and dominant ectoparasites of dogs in different regions such as Africa, Asian countries, Latin America, and Mediterranean countries. Rhipicephalus (Boophilus) microplus and Hyalomma dromedarii were reported as the second and third dominant species, followed by Haemaphysalis spp. and Hyalomma anatolicum. Our study aligns with one of the previous studies [26], which reveals that Rhipicephalus (Boophilus) microplus is the most prevalent tick species in China. Our study also correlates with the observations of Diab et al. [9], who recognized Hyalomma dromedarii as one of the abundant tick species in Saudi Arabia. According to Sofizadeh et al. [27], Hyalomma dromedarii usually causes tick infestation in camels, but it can also attack other hosts such as sheep, goats, cattle, horses, and donkeys. Apart from these hosts, it has been observed [28,29] that dogs, wild rodents, and many other animals can act as occasional hosts for Hyalomma dromedarii. Sahu et al. [30] recognized that about 46.39% of dogs were affected with infestations of three different tick species, i.e., Boophilus spp., Rhipicephalus spp., and Haemaphysalis spp. The present study shows the abundance of the genus Rhipicephalus, which aligns with the previous study [31] attributing the abundance of this genus to adaptation to harsh climatic conditions.
Rhipicephalus sanguineus is one of the important species of this genus, present in both mountainous and plain regions, with the ability to infest different domestic animals. Our results demonstrate that the tick burden is highest in July, which means the summer season is favorable for rapid growth. This correlates with the findings of Juvenal and Edward [32], who reported that a decrease in temperature due to heavy rainfall can cause a drop in the density of the tick population. Furthermore, the specimens collected in the present study show a greater abundance of female ticks compared to males in July, which indicates that the hot season is preferable for breeding; this agrees with the findings of Shemshad et al. [31]. The results of the present study revealed the season-wise occurrence of tick infestation, which is greater in July than in April, consistent with the report of Manan et al. [33], who recorded higher tick infestation during summer (August) and lower infestation in the winter season (December and January). The prevalence rate of tick species confirms that these ticks are a real source of pressing health issues for domestic dogs and their owners, which correlates with previous studies [34]. The possible factors for this are the unawareness of dog owners regarding vaccination, lack of knowledge about tick-borne pathogens and the diseases caused by them, lack of implementation of vaccination by the local government, and negligence of NGOs working for animals. Jones et al. [35] observed that dog owners are at higher risk of tick bites leading to tick infestation than people without pet dogs. Ticks are a source of vector transmission of dangerous diseases, and our study aligns with the observation of Dantas-Torres et al. [1], who reported that dogs can spread ticks to human beings as well as the environment surrounding them and can contribute to the transmission of TBDs. Our results correspond with the observations of Sahu et al. [30], who described that the prevalence of tick species is higher in stray dogs (58.33%) than in pet dogs. This is consistent with a previous study conducted in Greece, which found that dogs living outdoors are much more vulnerable to tick infestation than those living indoors, due to lack of vaccination (Latrofa et al. 2017). Tick infestation can be reduced by preventive measures, as eradication of the tick population is not feasible. Ticks on domesticated animals can be minimized by maintaining vaccination, grooming on a regular basis, and application of acaricides. Besides, raising awareness and educating the public sector on factors associated with tick infestation and their prevention is significant. Furthermore, studies are required that emphasize the identification of tick species that attack humans, their life-cycle patterns, host-searching behavior, and the infectious stages of ticks that infest humans, along with their association with TBDs, and that focus on the risk factors that would assist in better understanding of tick infestations and help establish strategies for their reduction.

Conclusion

The Veterinary Department should conduct comprehensive research on both domestic and non-domestic animals to study the interrelationship of ticks and TBDs.
Keeping in view the wide area of Balochistan, it is proposed that an anti-tick vaccine campaign should be launched for domestic dogs. It is also stressed that epidemiological and molecular biology studies be conducted to keep a check on the dispersal of tick species and TBDs in other districts, to prevent the spread of illness, which poses a serious menace to domestic dogs as well as humans, and to secure animal welfare in terms of their health issues.

Ethical statement. The work was carried out after permission was obtained from the ethics commission.
Correlating in situ RHEED and XRD to study growth dynamics of polytypism in nanowires †

Design of novel nanowire (NW) based semiconductor devices requires deep understanding and technological control of NW growth. Therefore, quantitative feedback on the structure evolution of the NW ensemble during growth is highly desirable. We analyse and compare the methodical potential of reflection high-energy electron diffraction (RHEED) and X-ray diffraction reciprocal space imaging (XRD) for in situ growth characterization during molecular-beam epitaxy (MBE). Simultaneously recorded in situ RHEED and in situ XRD intensities show strongly differing temporal behaviour and provide evidence of the highly complementary information value of both diffraction techniques. Exploiting the complementarity by a correlative data analysis presently offers the most comprehensive experimental access to the growth dynamics of statistical NW ensembles under standard MBE growth conditions. In particular, the combination of RHEED and XRD allows for translating quantitatively the time-resolved information into height-resolved information on the crystalline structure without a priori assumptions on the growth model. Furthermore, we demonstrate how careful analysis of in situ RHEED, if supported by ex situ XRD and scanning electron microscopy (SEM), all usually available at conventional MBE laboratories, can also provide highly quantitative feedback on polytypism during growth, allowing validation of current vapour-liquid-solid (VLS) growth models.

Fig. 1 (caption, excerpt): The X-ray beam has a much longer absorption length, resulting in the interaction of the whole NW with the beam. The NWs with diameter D and height h_NW increase in size with the axial growth rate m_axial and the radial growth rate m_rad.

Introduction

In recent years improved control over the growth of self-catalysed III-V nanowires on Si has led to substantial progress, which is mainly driven by the promise of the integration of III-V semiconductors on the cost-effective Si platform. [1][2][3][4][5][6][7][8] The integration of these dissimilar material systems is possible due to the small footprint of NWs, facilitating an epitaxial connection. NWs grown in the vapour-liquid-solid (VLS) mode 9 by metalorganic vapor-phase epitaxy (MOVPE) or MBE should avoid foreign elements such as Au as catalyst particles, because of the possibility of incorporation in the growing NWs. 10 In contrast, the self-catalysed or Ga-assisted growth 11,12 in the case of GaAs NWs ensures fabrication without any risk of this possible contamination. For the growth of self-catalysed GaAs NWs, progress was achieved in control of NW yield, 2,4,6 shape 3,4,7 and density, 3,6,7 as well as in the crystal structure. 1,2,5,8 However, those studies have also shown that these properties cannot be optimized separately via growth parameters. Moreover, an increase in the number density of NWs is accompanied by changing NW diameters 3,13 and/or crystal structure. 14 The reason is inherent in the VLS growth mode, more precisely in the liquid catalyst particle, which is responsible for the axial growth of the NWs and directly determines the NW morphology, [15][16][17][18][19][20] such as the shape and the crystal structure. Self-catalysed GaAs NWs adopt mainly the cubic zinc blende (ZB), its rotational twin (TZB), and the hexagonal wurtzite (WZ) crystal structure. Their simultaneous occurrence is called polytypism. Good control over the droplet itself enables the realization of dedicated NW morphologies, e.g.
tapered NW shape, 21 or the fabrication of axial heterostructures formed of different polytypes along the NW growth axis 22,23 allowing exploitation of their band structure differences. 24,25 Powerful techniques allowing in situ characterization during growth can serve as a key to understand and optimize the morphological properties of NWs. Within the available portfolio of in situ techniques, in situ transmission electron microscopy (TEM) during NW growth offers unrivalled spatial resolution down to the atomic scale, together with high temporal resolution, but it is restricted to special equipment not broadly available. 20,[26][27][28][29] The investigations are typically performed with pre-grown NWs 26,27 or NWs without epitaxial connection to any substrate, 20,28,29 therefore excluding a number of growth effects under standard conditions, e.g. the impact of diffusion processes on the substrate as well as material flux shadowing by the NW ensemble. 30 In situ XRD during growth 21,[31][32][33][34] probes the evolution of representative structure properties averaged over a large statistical NW ensemble. By using microfocused beams, even the properties of individual NWs can be examined in situ. 35 In both cases, NW growth close to standard growth conditions with epitaxial connection to the substrate can be monitored. However, special growth chambers equipped with X-ray windows are required, as well as access to heavy-duty diffractometers at high-flux synchrotron light sources. In situ RHEED equipment, in contrast to TEM and XRD, is usually already integrated into commercial MBE systems and therefore broadly available. However, although RHEED has gained great importance for quantitative 2D layer growth studies, in the case of NW growth, RHEED has mostly been restricted to qualitative conclusions. 23,[36][37][38][39][40][41] Only recently, a quantitative evaluation procedure for in situ RHEED studies of NWs has been developed. 42 Based on a two-beam approximation for dynamical Laue diffraction and taking shadowing effects within the NW ensembles into account, RHEED has been used for a time-resolved height-selective crystal phase analysis during NW growth. Similar to in situ XRD, it allowed representative quantitative information to be gained for large NW ensembles with several thousand NWs, but this required a priori assumptions considering the overall growth dynamics. In the present article we report a correlative approach to measure RHEED and XRD of NW ensembles, aiming to provide a methodical base for comprehensive studies addressing the dynamics of NW growth under standard MBE conditions. Such studies of large NW ensembles have the potential to complement high-resolution growth studies of single NWs by in situ TEM. By simultaneously measuring in situ XRD and RHEED we prove the consistency of the results from both methods and demonstrate how to exploit their complementary characteristics. We experimentally determine the axial and radial growth rates and the development of the polytype crystal phase fractions as a function of time and, finally, of NW height, without the need of a priori information or any growth model.

† Electronic supplementary information (ESI) available. See DOI: 10.1039/D1NR02320A
Since in situ XRD is not broadly available, we also describe the methodical potential of in situ RHEED if supported by ex situ XRD and SEM. We demonstrate this exemplarily by studying a small sample series allowing us to experimentally validate the predicted relation of Ga droplet contact angles and the crystal phase of GaAs NW segments, recently confirmed for single NWs by an electron microscopy study, 20 for the case of large NW ensembles grown by standard MBE.

In situ RHEED and XRD by NW ensembles

In this section we compare the main principles of the underlying analysis of the in situ RHEED and XRD intensities from statistical NW ensembles. The time dependent intensity integrated over a RHEED spot or, respectively, a XRD reciprocal space map of a chosen reciprocal lattice point g_h,k,l, corresponding to a certain crystal phase indexed by p, can formally be described by the expression

I_g,p(t) ∝ ρ_NW A_g,p(t) ∫_0^{h_NW(t)} f_p(h,t) Ω_g(h,t) γ(h,t) dh.   (1)

Here, ρ_NW is the NW number density per unit area, and A_g,p represents the scattering power, which is proportional to the magnitude or the square of the magnitude of the structure factor, depending on the validity of the kinematic or dynamic diffraction theory. The NW phase fraction is represented by f_p, and Ω_g takes into account the influence of the NW diameter D and actual shape (the NW cross-section), and its orientation with respect to the incident beam, on the NW diffraction and absorption. The illumination efficiency γ(h,t) takes NW ensemble-shadowing effects into account, which, in the case of strong absorption or extinction, have a significant influence on the diffracted intensity. 42 All functions within the integral are strongly affected by the growth dynamics and are, therefore, functions of the NW height h_NW and growth time t. For X-ray diffraction, the small ratio of NW diameters (D < 200 nm) compared to the extinction and absorption lengths of dynamical X-ray diffraction (both in the range of several μm) justifies the validity of the kinematical diffraction theory. For the same reason, NW shadowing effects on the illumination efficiency are negligible (γ(h,t) ≈ 1) and can therefore be omitted, as illustrated in Fig. 1. Consequently, the integrated intensity of eqn (1) for a phase-sensitive reciprocal lattice point (RLP) can be written as

I^XRD_g,p(t) ∝ ρ_NW |F_g,p|² V^NW_p(t).   (2)

It is proportional to the product of the square magnitude of the structure factor, A_g,p ∝ |F_g,p|², and the total crystal-phase volume of the illuminated NW ensemble V^NW_p(t):

V^NW_p(t) = f^V_p(t) V^NW(t),   (3)

with the mean crystal-phase volume-fraction f^V_p(t). Therefore, the temporal dynamics of the integrated intensity follows directly the growth dynamics of the NW's crystal phase volume. During the whole growth, the integrated XRD intensity of a phase-sensitive RLP monitors the evolution of the selected crystal phase volume integrated over the complete NW height h_NW(t) and over the illuminated NW ensemble, but without any spatial resolution. For RHEED, the much shorter electron extinction and absorption lengths compared to X-ray diffraction require the application of dynamical diffraction theory and give rise to self-shadowing phenomena within the individual NWs and ensemble-shadowing between different NWs 42 (see also Fig. 1). In eqn (1), the effective scattering cross section Ω_g(h,t), which describes the contribution of an infinitesimal horizontal NW slice at height h and time t to the integrated diffraction intensity of a RLP, takes the self-shadowing into account.
This self-shadowing causes attenuation of the forward-transmitted and diffracted-transmitted wave field amplitudes during propagation in the NWs. Therefore, Ω g (D(h,t),Λ) becomes a function of the effective attenuation coefficient Λ, of the diameter D and geometrical shape of the NW cross-section, and of the azimuthal orientation of the NW shape with respect to the electron beam (details in ref. 42). Further, ensemble-shadowing has to be considered, since a growing NW of height h NW (t) casts a growing shadow h shad on a neighbouring NW in its geometrical beam path, depending on their mutual positions. By averaging over the whole illuminated NW ensemble, we obtain the corresponding mean values 〈D〉(t), 〈h NW 〉(t), and 〈h shad 〉(t), respectively. Depending on the angle of incidence α of the electron beam with respect to the substrate surface, there always remains an illuminated upper part of the NWs beneath the NW apex. This illumination window contains the axial growth front, which therefore is the main contributor to the RHEED diffraction process. However, particularly for randomly positioned NWs, the illumination window is not sharp. The ensemble-averaged shadowing is more precisely described by the illumination efficiency function γ(h,t), varying from the NW tip down to the base from maximally complete illumination (γ(h,t) = 1) to maximally complete shadowing (γ(h,t) = 0). Its temporal evolution can be calculated in some cases analytically, and otherwise numerically, e.g. by use of the Monte Carlo method. Roughly speaking, absorption/extinction mainly changes the mean 'illumination strength' of the total diffracting volume of the NW ensemble, and can be considered by introducing effective quantities describing the ensemble-averaged illumination height 〈Δh lum 〉(t), the illumination volume 〈V lum 〉(t), and the mean illuminated NW crystal-phase volume 〈V lum p 〉(t), as defined in eqns (4) and (5). Based on the above considerations, the integrated RHEED diffraction signal of a given phase-sensitive reciprocal lattice point g h,k,l in eqn (1) can be estimated by a formally similar expression to eqn (2), given as eqn (6), where, in contrast to eqn (2), we must set A ED g,p (t) ∝ |F g,p |, according to the dynamical diffraction theory. Please notice that, following eqn (4), only a limited height window defined and weighted by the condition γ(h,t) > 0 contributes to the RHEED signal. This makes RHEED height-selective for the non- or less-shadowed upper part of the NWs, discriminating the contribution from the shadowed lower part. Due to the time-dependency of both self-shadowing in Ω g (〈D〉(h,t),Λ) and ensemble-shadowing in γ(h,t), the dynamics of the RHEED signal can be rather complicated, even for the most simple cases of stationary axial and radial growth conditions and stationary phase fractions. From the two eqn (2) and (6) we can easily derive the main similarities and differences of the temporal evolution of the XRD and RHEED signals: the temporal dynamics of the phase-selective XRD and RHEED signals reflect the dynamics of the related effective illuminated crystal-phase volumes, which in the case of XRD corresponds to the total NW crystal-phase volume 〈V NW p 〉(t) and in the case of RHEED to 〈V lum p 〉(t), respectively (eqn (7)). In the following we will always abbreviate the rate of change of a physical quantity by the 'physical quantity rate'.
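The ensemble-shadowing calculation invoked above lends itself to a simple numerical illustration. The following minimal Monte Carlo sketch estimates γ(h) and 〈Δh lum 〉 for randomly positioned NWs of equal height; all parameter values and the shadow-corridor approximation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal Monte Carlo sketch of the illumination efficiency gamma(h)
# for a random NW ensemble under a grazing-incidence electron beam.
# Assumptions (illustrative): equal NW heights, shadow corridor of
# width equal to the NW diameter, beam travelling along +x.
rng = np.random.default_rng(0)

rho = 1.0                 # NW number density per um^2 (assumed)
L = 30.0                  # side length of the simulated field (um)
D = 0.05                  # NW diameter (um)
h_nw = 0.8                # current NW height (um)
alpha = np.deg2rad(2.0)   # grazing angle of incidence (assumed)

n = rng.poisson(rho * L**2)
xy = rng.uniform(0, L, size=(n, 2))   # random NW positions

# For each NW, find the highest shadow cast by any upstream neighbour
# lying inside a corridor of width D along the beam direction.
h_shad = np.zeros(n)
for i in range(n):
    dx = xy[i, 0] - xy[:, 0]          # positive for upstream NWs
    dy = np.abs(xy[i, 1] - xy[:, 1])
    mask = (dx > 0) & (dy < D)
    if mask.any():
        # A neighbour at distance dx shadows up to h_nw - dx*tan(alpha)
        h_shad[i] = max(0.0, np.max(h_nw - dx[mask] * np.tan(alpha)))

# gamma(h): fraction of NWs still illuminated at height h
h_grid = np.linspace(0, h_nw, 50)
gamma = [(h > h_shad).mean() for h in h_grid]
print(f"gamma at NW base = {gamma[0]:.2f}, gamma at NW tip = {gamma[-1]:.2f}")

# Ensemble-averaged illumination height <dh_lum>
dh_lum = np.mean(np.clip(h_nw - h_shad, 0.0, None))
print(f"<dh_lum> = {dh_lum*1e3:.0f} nm of {h_nw*1e3:.0f} nm total height")
```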
The total intensity rate of the summed-up contributions of the different phase-sensitive reflections of XRD and RHEED follows as eqn (8), with Ĩ g,p (t) = I g,p (t)/A g,p the structure-factor-calibrated phase-sensitive reflection intensities. From this follows the volume rate υ(t) = Σ p υ p (t), whereby for XRD υ NW (t), V NW (t) and for RHEED υ lum (t), V lum (t) must be used. For XRD, the total sum over the structure-factor-calibrated and parasitic-crystallite-growth-corrected intensities Ĩ g,p (t) of the phase-sensitive reflections is always proportional to the whole NW crystal volume (eqn (9)). Details of the XRD intensity correction for the crystallites can be found in the ESI.† Its time derivative becomes

υ NW (t) = (Π/4)〈D 0 〉 2 m axial (constant axial growth only)
υ NW (t) = 2Π h NW m rad (〈D 0 〉/2 + t m rad ) (constant radial growth only)
υ NW (t) = (Π/4)〈D 0 〉 2 m axial + 2tΠ〈D 0 〉m axial m rad + 3t 2 Πm axial (m rad ) 2 (constant axial and radial growth rates)   (10)

The factor Π depends on the NW shape and is Π = (3/2)√3 for hexagonal NWs and would be Π = π for cylindrical NWs. For purely axial growth, the total NW volume rate is proportional to the square of the mean initial NW nucleation diameter 〈D 0 〉 and the axial NW growth rate m axial (t) at a given time. The total intensity increases linearly with the NW height h NW (t) = ∫ 0 t m axial (t′) dt′ (and for constant axial growth rate, assumed in Fig. 2(a), also linearly with time). For pure and constant radial facet growth, the total intensity and volume rates are a function of the initial NW diameter after nucleation and of the temporal evolution of the radial growth rate. For constant radial growth rates, the total volume and intensity rates will increase linearly with time and, consequently, the volume and intensity themselves develop quadratically with time (Fig. 2(b)). Supposing simultaneous radial and axial growth, but stationary growth conditions (with temporally constant axial and radial growth rates), we obtain the time dependence of the intensity and volume rates given in the third line of eqn (10), where 〈D 0 〉 should be the mean initial NW base diameter at the starting time of axial growth (Fig. 2(c)). Substituting V NW by V lum , eqn (7) holds formally for XRD and RHEED. However, the total RHEED intensity rate measures the illuminated volume rate υ lum ≡ (d/dt)V lum instead of υ NW . Therefore, eqn (10) cannot be applied to RHEED. But by studying carefully the influence of self-shadowing and ensemble-shadowing on the evolution of V lum one can immediately derive characteristic features of the RHEED intensities: in the case of purely axial growth, as shown in Fig. 2(d), in the very early growth stage till a first critical time, 0 < t < t c1 , the RHEED intensity rate corresponds directly to the axial growth rate m axial (t), and for stationary axial growth the RHEED signal will increase linearly, similar to XRD (highlighted in orange). At this stage, the ensemble-shadowing plays little or no role and, therefore, the mean illumination height 〈Δh lum 〉(t) is equal to h NW (t). Thanks to its high sensitivity to small crystal volumes, RHEED shows a much better signal-to-noise ratio (SNR) than XRD. At t c1 , when the shadow footprint reaches the first NW neighbours, ensemble-shadowing begins. From this time, the illumination efficiency of the NW ensemble at the NW bottom drops from γ(h = 0, t ≤ t c1 ) = 1 down to γ(h = 0, t ≥ t c2 ) = 0, when all shadow footprints have reached the NW neighbours at t c2 .
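As a cross-check of the reconstructed eqn (10) (the equation is garbled in the extracted text; the form given above is derived from the stated growth geometry, with m rad taken as the radius, i.e. facet, growth rate), a short symbolic computation differentiates V(t) = ΠR(t) 2 h(t) for constant rates:

```python
import sympy as sp

# Symbolic cross-check of the reconstructed eqn (10): volume rate of a
# prismatic NW with cross-section area Pi*R^2, radius rate m_rad and
# axial rate m_axial (both constant). D0 is the nucleation diameter.
t, D0, m_ax, m_rad, Pi = sp.symbols("t D0 m_axial m_rad Pi", positive=True)

R = D0 / 2 + m_rad * t      # radius under constant radial growth
h = m_ax * t                # height under constant axial growth
V = Pi * R**2 * h           # NW volume

rate = sp.expand(sp.diff(V, t))
print(rate)
# expected (up to term ordering):
# Pi*D0**2*m_axial/4 + 2*Pi*D0*m_axial*m_rad*t + 3*Pi*m_axial*m_rad**2*t**2
```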
As a consequence of this onset of ensemble-shadowing, in the time interval t c1 < t < t c2 (highlighted in yellow), the prior linear increase of 〈Δh lum 〉(t) and, accordingly, of the RHEED signal diminishes, converging until t c2 to the saturation values 〈Δh lum 〉(t c2 ) and I ED g,p (t c2 ), which stay constant during further growth t > t c2 . In other words, if above t > t c1 the NW volume V NW continues increasing, the shadowed NW volume starts increasing too, and consequently the increase of V lum appears much more moderate and comes to a halt at t ≥ t c2 . Accordingly, for the evolution of the illuminated-volume rate during axial growth we find the behaviour summarized in eqn (11). Above t c2 , the height of the illuminated window for which the condition γ(h,t > t c2 ) > 0 is fulfilled is stationary in time, but the illuminated window moves upwards during NW growth, at a rate given by the axial growth rate (dγ/dt = m axial dγ/dh). The total RHEED signal saturates and thus becomes completely insensitive to the further evolution of the axial growth rate m axial . During purely radial growth, the temporal evolution of RHEED behaves as illustrated in Fig. 2(e): for thin wires with D < Λ, RHEED is sensitive to m radial . The NW volume increases quadratically, but self-shadowing increases nearly exponentially with growing NW diameter. Both have opposite and therefore competing influences on the dynamics of the RHEED signal. Effectively, for very thin wires the diffraction signal initially increases, but very quickly flattens out; the signal passes through a maximum and later even decreases to the extent that self-shadowing reduces the illuminated volume V lum . Finally, for NW diameters sufficiently beyond D ≈ 2Λ, the RHEED signal converges to a stationary intensity value. This means that during radial growth the RHEED signal becomes increasingly insensitive to radial growth rates. For certain azimuthal orientations the RHEED signal may nearly completely disappear, although radial and axial growth may continue (ref. 42). Therefore, a favourable choice of the azimuthal orientation of the NWs in the electron beam is crucial. For simultaneous radial and axial growth (Fig. 2(f)), the mean initial NW diameter 〈D 0 〉 and the ratio of radial and axial growth determine the dynamics of RHEED, up to the growth stages when the dynamics of the illuminated volume V lum and of RHEED start to behave completely stationary. However, in contrast to XRD, the RHEED signal can behave stationary even in the presence of ongoing NW growth. Therefore, to allow correct interpretation of the RHEED signal and to enable quantitative data evaluation of the whole growth cycle, it is imperative that shadowing effects are taken into account. For growth stages where the influence of axial and radial growth on the RHEED signal is negligible (which does not mean that radial and axial growth itself must be negligible!), the RHEED signal becomes nearly exclusively sensitive to changes of the crystal phase partitioning (the polytypism) within V lum . For narrow positional NW distributions, the time interval t c1 < t < t c2 decreases. For a given positional NW distribution, both critical times t c1 and t c2 decrease with increasing NW density, and the height of the NW illumination window 〈Δh lum (t c2 )〉 shrinks. All these effects improve the height selectivity of in situ RHEED experiments. It should be noted, however, that based on the outlined theoretical approach, the RHEED signal, quantitatively evaluated by eqn (6), allows determination of the mean height evolution of polytypism even for dilute NW ensembles.
All these factors make RHEED an eminently suitable method for quantitative polytypism studies. For the investigation of polytypism it is useful to determine, for both RHEED and XRD, the respective intensity ratio J p (t) of phase-sensitive RLPs with respect to the structure-factor-corrected sum over all RLPs, as defined in eqn (12). For XRD it follows from eqn (2) that J XRD p (t) is a direct measure of the phase volume fraction f V p (t), as expressed in eqn (13). Even if the sensitivity of RHEED to any growth rate is more complicated, similarly to XRD the phase-sensitive RHEED intensity ratio always corresponds directly to the phase fraction of the illuminated volume f V lum p (t). In the absence of tapering effects one obtains eqns (14) and (15). Considering a high NW number density resulting in a small illumination height 〈Δh lum 〉, the phase-sensitive intensity ratio corresponds nearly directly to the mean phase fraction at the NW apex, or alternatively the rate of the phase-sensitive intensity ratio (d/dt) J ED p (t) corresponds nearly directly to the ensemble-averaged crystal phase nucleation rate at the axial growth front. Determining the axial growth rate from XRD, we can transform the time dependence of J ED p (t) by eqn (15) into a height dependence f p (h,t f ). At this point we summarize that, due to electron shadowing, in situ RHEED height-selectively probes predominantly the upper part of the growing NWs. In situ XRD, however, always probes the full NW height, including the electron-shadowed parts of RHEED, and thus can detect processes inside that shadow which are not visible to RHEED. Therefore, in situ XRD is characterized by a high sensitivity to the temporal evolution of volume growth rates, which, however, are simultaneously influenced by all axial and radial (tapering and side-facet) growth rates as a result of both the VLS and VS (vapour-solid) growth contributions. In contrast, RHEED exhibits properties particularly well suited to VLS growth characterization, since ensemble-shadowing causes a height-selectivity making this diffraction technique sensitive to the crystalline properties of the NW tip below the NW apex, and to their temporal evolution. For sufficiently high NW number densities, RHEED analysis becomes highly sensitive to transitions in the generation probability of crystal phases at the axial growth front below the droplet. Such sensitivity to the phase purity of large NW ensembles could open a route to gain immediate feedback and good experimental control over the actual impact of the catalyst particles and therefore over the whole VLS growth process. 23 Moreover, the comparably large scattering cross-section of electrons in solids creates a high sensitivity to small volumes, which becomes particularly important for crystal structure analysis during NW nucleation and the early growth stages. A correlative XRD and RHEED analysis allows differentiation between axial and radial growth contributions and separate conclusions on the temporal evolution of the VS growth of the NW side facets and of the VLS growth at the apex, without a priori assumptions about the growth rates and growth models. It thereby also improves the accuracy of the results compared to those of the respective individual methods. Complemented by post-growth ex situ SEM, all methods together can generate a comprehensive quantitative experimental picture of the growth dynamics of the NW ensemble under chosen standard MBE growth conditions.
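For reference, the intensity-ratio relations referred to above as eqns (12)-(15) can plausibly be written as follows; this is a hedged reconstruction from the surrounding prose, and the published notation may differ in detail:

```latex
% Hedged reconstruction of eqns (12)-(15); forms inferred from the text.
\begin{align}
  J_p(t) &= \frac{\tilde I_{g,p}(t)}{\sum_{p'} \tilde I_{g,p'}(t)},
  \tag{12}\\[4pt]
  J^{\mathrm{XRD}}_{p}(t) &= f^{V}_{p}(t),
  \tag{13}\\[4pt]
  J^{\mathrm{ED}}_{p}(t) &= f^{V_{\mathrm{lum}}}_{p}(t)
    \approx f_p\!\left(h_{\mathrm{NW}}(t),t\right),
  \tag{14}\\[4pt]
  f_p(h,t_f) &= J^{\mathrm{ED}}_{p}(t)\Big|_{\,h\,=\,\int_0^{t}
    m_{\mathrm{axial}}(t')\,\mathrm{d}t'} .
  \tag{15}
\end{align}
```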
Experimental

The growth experiments on self-catalysed GaAs NWs were performed in an MBE growth chamber equipped with a RHEED gun and additional X-ray-transparent Be windows, designed to be compatible with standard heavy-duty diffractometers at high-flux synchrotron beamlines. 43 In the first experiment, we measured the evolution of the RHEED and XRD intensity patterns simultaneously in situ during NW growth. Here we compare and combine their results from a methodical point of view, demonstrating both their complementarity and consistency. Applying the methodical results, in a second, purely laboratory-based study of in situ RHEED, supported by ex situ XRD and SEM, we demonstrate how a careful correlative analysis can also provide highly quantitative feedback on the temporal polytypism behaviour during growth. In particular, we aim to quantitatively verify the variation of polytype over time as a function of the wetting angle of the gallium droplet at the onset of growth for the case of large NW ensembles grown under standard MBE conditions. It will be compared to the theoretical model prediction of ref. 44 and to the experimental demonstration for a single NW in an environmental TEM in ref. 20. The self-catalysed GaAs NWs were grown with a Ga pre-deposition step on n-type Si(111) substrates covered with native oxide. For the first simultaneous in situ RHEED and in situ XRD experiment (sample A), we stopped the Ga supply after t f1 A = 30 min, but continued measurements till t f2 A while the As 4 flux was kept constant to consume the liquid Ga droplet at the tip of the NWs. For the second experiment, we grew a set of five samples (B-F), stopping growth after successively longer times.

Characterization of NW growth

Post-growth SEM analysis for sample A (see Fig. 4(b) in the ESI†) gives evidence for non-tapered NWs with identical final diameters at the bottom and the tip of 〈D NW f,b 〉 = 〈D NW f,t 〉 = (54 ± 4) nm and a final mean height of 〈h NW A 〉 = (800 ± 160) nm. Fig. 3(a) depicts close-up images of the three phase-sensitive ZB(311), TZB(220) and WZ(10.3) RHEED spots of sample A for four different growth times, demonstrating the large possible temporal variations of RHEED intensity patterns even during stable global growth conditions. Their structure-factor-calibrated temporal intensity evolution Ĩ NW p (t) is plotted in Fig. 3(b). Fig. 3(c) shows a typical XRD reciprocal space map (RSM) with the Si(311) Bragg reflection of the substrate and the three phase-sensitive GaAs ZB(311), TZB(220) and WZ(10.3) reflections of the NWs. The GaAs Bragg reflections have identical lateral scattering vector components, but are vertically well separated. They are connected by a vertical streak along Q z (parallel to the axial growth direction), arising from the diffuse scattering of stacking faults. Further, in each reflection we observe horizontal facet streaks along Q y originating from the hexagonal cross-section of the NWs. They are perpendicular to the growth axis and to two of the six facet planes, respectively. A third type of inclined streak crosses the Bragg peaks perpendicular to the reciprocal lattice vectors (along the virtual Debye-Scherrer rings). These streaks mainly represent a slight orientation distribution of parasitic crystallites (CRY) and NWs, and allow separation of their contributions (as shown in ref. 21). The temporal evolution of the structure-factor-calibrated XRD integrated intensities Ĩ NW p (t) is plotted in Fig. 3(e).
Following eqn (10), the measured non-linear increase of the total X-ray intensity during the first 30 min gives clear evidence for simultaneous axial and radial growth. Since no tapering has been observed by SEM, the NW diameter averaged over the NW height can be estimated from the size oscillations measured along the facet streaks. 21 Their temporal evolution contains information on the NWs' radial growth dynamics, which is presented by the results in Fig. 3(d) (details are presented in the ESI†). Additionally, we show the final diameter measured by SEM after growth (in blue) and the initial diameter measured on a reference sample grown under identical growth conditions. From the temporal development of the size oscillations and of the total XRD intensity we determine an initial NW nucleation diameter of D 0 ≈ 28 nm. The in situ XRD data give clear evidence that the axial and radial growth rates stay approximately constant during the growth, up to the closure of the Ga shutter. We determine an axial growth rate of m axial = 26.6 nm min −1 and a (much smaller) radial growth rate of m radial ≈ 0.43 nm min −1 . With only the As shutter left open, the total XRD intensity at t > 30 min continues increasing before coming to an end at about t f2 A = 33.5 min. The observed kinks of the XRD intensity curves at t f1 A indicate that the radial growth expires. In the interval t f1 A < t < t f2 A we observe an intensity and volume increase of approximately 10%, which can be related to the continuation of axial growth. Above t f2 A = 33.5 min the XRD intensity becomes effectively constant (compare the ESI†), indicating the stop of any NW growth after complete droplet consumption, as proven by SEM. The findings confirm the expectation that the VLS growth can continue under the remaining As flux by successively consuming the Ga droplet. We evaluate the experimental results applying eqn (2), (6) and (12), based on identical morphological parameter sets for the temporal evolution of RHEED and XRD. First we determine the general shape parameters, based on which we will evaluate the quantitative evolution of the different crystal phases, presented in the following section. Prior to the intensity simulation we can restrict the input parameter space using the post-growth SEM results on the final state of the NW and CRY sizes, heights and shapes. In particular, we extract ensemble-averaged post-growth base and top diameters of the NWs and CRY, as well as their mean fluctuation, and in addition the number densities of the NW and CRY ensembles (all tabulated in the ESI†). The attenuation length of the 20 keV RHEED electrons in the GaAs NWs is Λ ≈ … The comparison between the measured and best-fitting simulated RHEED and XRD intensity evolution for the respective phase-sensitive reflection curves in Fig. 4(a) and (b) allows quantitative determination of the dynamics of the volume phase fractions of the NWs and of the phase fractions at their axial growth front (Fig. 4(c) and (d)). Details of the calculation are summarized in Table 1 in the ESI.†
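As a quick plausibility check (our own arithmetic, not a calculation from the paper, taking m radial as a radius growth rate), the fitted rates reproduce the post-growth SEM morphology:

```latex
% Consistency check of the fitted rates against the SEM end state,
% using only the values quoted in the text.
\begin{align*}
  h_{\mathrm{NW}}(30\,\mathrm{min}) &\approx 26.6\,\tfrac{\mathrm{nm}}{\mathrm{min}}
      \times 30\,\mathrm{min} = 798\,\mathrm{nm}
      \quad\text{vs.}\quad \langle h^{\mathrm{NW}}_{A}\rangle = (800\pm160)\,\mathrm{nm},\\
  D(30\,\mathrm{min}) &\approx 28\,\mathrm{nm} + 2\times0.43\,\tfrac{\mathrm{nm}}{\mathrm{min}}
      \times 30\,\mathrm{min} \approx 54\,\mathrm{nm}
      \quad\text{vs.}\quad \langle D^{\mathrm{NW}}_{f}\rangle = (54\pm4)\,\mathrm{nm}.
\end{align*}
```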
Characterization of polytypism

Following section 2, the phase-sensitive XRD reflections allow monitoring the temporal development of the different crystal-phase volumes integrated over the NWs, whereas the RHEED reflections allow detecting the phase changes particularly near the axial growth front. Comparing the phase-sensitive XRD intensities in Fig. 3(e), the WZ phase crystal volume develops approximately linearly, whereas the zinc blende related intensities develop initially more slowly, but later benefit from a non-linear increase, with progressing rates till t f1 A = 30 min. The WZ-ZB intensity crossover is at about t = 22 min, the WZ-ΣZB crossover already at about t = 12 min, where ΣZB = ZB + TZB. Considering the temporal evolution of the RHEED intensities in Fig. 3(b), we observe at the beginning of growth (0 min < t < 3 min) a dramatically dominating WZ intensity rate. Immediately afterwards, the WZ intensity rapidly drops (3 min < t < 10 min); later the downturn diminishes and the intensity becomes effectively constant between 15 min < t < 30 min = t f1 A . The two zinc blende related ZB and TZB intensity curves always develop similarly to one another. Compared to WZ, the corresponding ΣZB curve in Fig. 3(b) starts growing with a short delay and a weaker intensity rate. The ΣZB and WZ intensity curves intersect at about t ≈ 7 min. Then the zinc blende phases start dominating the intensity distribution, but their intensity rate slows down, passes its maximum value at t ≈ 15 min and decreases afterwards slowly till the end of the Ga supply (t f1 A = 30 min). At this point, at t f1 A , the WZ intensity again begins to increase strongly. Almost simultaneously, all zinc blende related curves also show a very short rise before rapidly decreasing. Considering the overall development of the total RHEED intensity, in this time interval it also increases abruptly at t f1 A = 30 min till about t f2 A = 33.5 min. From the previous sections we have seen that RHEED has a particularly high sensitivity for phase changes at the axial growth front. As discussed in section 2, the total RHEED intensity develops proportionally to the illuminated volume V lum . At the very beginning of growth, the illuminated-volume rate υ lum (t) is therefore sensitive to the axial growth rate, whereby the phase-selective illuminated-volume rates υ lum p (t) are additionally weighted by the illuminated volume phase fractions. The high NW number density of sample A causes efficient ensemble-shadowing. As can be shown by Monte Carlo simulations, in sample A the process of ensemble-shadowing starts already after t > t c1 ≈ 1 min (marked in orange in Fig. 3(b)). Within the short time interval of two more growth minutes (t c1 < t < t c2 , marked in yellow), the initially high sensitivity of the illuminated-volume rate υ lum (t) of RHEED to the axial growth rate rapidly diminished. At about t c2 ≈ 3 min the illumination height has become stationary (〈Δh lum 〉(t > t c2 ) = Δh lum (t c2 )) and, consequently, the total RHEED signal has already completely lost its sensitivity to any further volume increase generated by axial VLS growth, see eqn (11). The further intensity increase is related to radial growth. Changing ratios of phase-sensitive RHEED signals can be directly attributed to changes of the corresponding phase fractions of that part of the growing material which is located in the now stationary illumination window 〈Δh lum 〉(t > t c2 ) = const below the NW tip. The speed of this relative vertical window movement follows from the axial growth rate m axial (t) (eqn (15)).
The drastic decrease of the WZ phase related RHEED intensity in the time following t c2 can be explained by the concurrence of two effects: (a) a change in the nucleation probability from WZ to ΣZB, as seen by the increase of the overall zinc blende related phase-volume fraction in the phase-sensitive XRD intensities; (b) the particular sensitivity of RHEED to phase changes at the axial growth front, which at this growth stage develop in favour of ΣZB. In other words, if the change of the ΣZB related phase volume rate observed in the X-ray data arose exclusively from the changing phase generation probability at the axial growth front from WZ-rich towards ΣZB-rich growth, then, for the case of RHEED, the previously grown WZ-rich region (at the NW base) would move with time outside the rising electron illumination window 〈Δh lum 〉 into the shadowed region underneath. Since above t c2 the width of the illumination height window stays constant for a given NW number density (〈Δh lum 〉(t > t c2 ) = const), for hypothetical purely axial growth any increase of the ΣZB intensity above t c2,A ≈ 3 min would come at the full expense of the WZ signal and lead to the opposite development of the WZ intensity rate, as observed between 3 min < t < 9 min. It should be noticed here that simultaneous negative rates of both the WZ and ΣZB RHEED signals are also possible, as has been observed in the middle growth stages (10 min < t < 30 min), but this can only arise either as a result of radial growth, of a hypothetical change of the droplet height (influencing ensemble-shadowing, see t f1 A = 30 min), or due to variation in the incident electron flux. Concerning the radial growth contributions, it can be shown that the competition between the positive volume rate and the negative influence of self-shadowing damps the increase of the RHEED intensity, and above a critical NW diameter of D ≈ 30 nm even leads to negative intensity rates. Contrary to RHEED, radial growth always contributes positive XRD intensity rates. The measured total X-ray intensity rates confirm rather stationary overall growth conditions between 10 min < t < 30 min, whereby the NW nucleation diameter and radial growth rate obtained from the X-ray analysis explain the simultaneous moderate reduction of the overall RHEED signal in this stage. The observed slight superimposed waviness is caused by instabilities of the RHEED flux provoked by the less shielded beamline infrastructure and is not related to the NW growth (see the ESI† for a detailed discussion). Finally, the well-fitting simulated RHEED curves in Fig. 4(a), all calculated with the same parameter sets as the XRD curves in Fig. 4(b), generally confirm our previous findings. In Fig. 4(c) we show the RHEED intensity fractions J ED WZ of WZ and J ED ZB of ΣZB and the phase-generation probabilities at the axial growth front f p ≈ J ED p determined from the simulations. The XRD intensity fractions J XRD p and the corresponding volume phase fractions f V p ≈ J XRD p are plotted in Fig. 4(d). By integrating the time-dependent (respectively height-dependent) VLS phase generation probability determined by RHEED over the whole preceding growth time (respectively NW height h NW (t)), we are able to compare the results from RHEED directly with those of XRD. Except at the very beginning of growth, where the XRD intensities are low and the SNR is weak, and where a possibly insufficient crystallite correction of the signals has a higher impact, the XRD simulation and experiment fit very well.
In principle, any phase-volume change observed by XRD or RHEED could be caused (a) by phase transformation within the probed NW volume or (b) by changing phase generation probabilities at the growth fronts. Hypothetical phase transformations and changes at the radial growth front should induce comparable changes in the corresponding intensity fractions of the involved Bragg reflections. XRD intensity fractions probe the affected total crystal volume of the NW ensemble; their growth rates are therefore sensitive to phase transformations in the whole NW and to changes at both the radial and the axial growth front. In contrast, due to the large difference between the radial and axial growth rates, the RHEED signal is particularly sensitive to the axial phase generation probability and therefore to the VLS growth conditions at the interface of the Ga droplet and the NW top facet. The agreement between the simulations and the simultaneously recorded XRD and RHEED intensity profiles confirms the opinion, widespread but largely untested in the literature, that volume phase transformations and phase changes during radial growth are highly improbable. From the methodical point of view it is very interesting that at t f1 A = 30 min both the RHEED WZ intensity and the total RHEED intensity rise abruptly again. This can only be explained by an increase of the mean illuminated height window 〈Δh lum (t)〉 below the NW tip. For stationary VLS conditions, 〈Δh lum (t)〉 must stay constant for t > t c2 . But a growing droplet inevitably leads to a reduction, and a shrinking droplet to a proportional increase, of the electron illumination window 〈Δh lum 〉(t > t f1 A ) hitting the NWs (and not the droplets). Therefore, the observed increase of the total RHEED intensity between t f1 A = 30 min and t f2 A ≈ 33.5 min gives clear evidence for the consumption of the Ga droplet during this time. Moreover, it allows estimation of the Ga consumption rate. The Ga droplet under As flux provides the necessary material to maintain the axial growth for a little while. The XRD signal enables characterization of the remaining axial growth rate. At t f2 A , when the Ga reservoir of the droplet has been completely consumed, the overall growth comes to a standstill. It is interesting that the crossover from WZ to ZB at t f1 A is relatively sharp, much sharper than the increase of the illumination height fluctuation in the late NW ensemble caused by the observed fluctuating NW height (compare Table 1 in the ESI†). This indicates that the WZ-ZB transition is related not to the actual height but to the NW growth time (the time of the closure of the Ga shutter). An increasing NW height fluctuation towards the final stage, not considered in the evaluation, leads to an increased illumination window compared to the simulation. Consequently, even if the transition time is not influenced, the final axial ZB fraction might be overestimated and the WZ fraction underestimated; the determined values are to be taken as upper/lower limits. That is because the longer NWs of the ensemble have larger illumination windows and therefore illuminate the phases more distant from the growth front for a longer time. The longer NWs thus retain the crystal phases of their past for longer in the signal. Thanks to the high time resolution of RHEED, one can directly compare the phase generation rates of the NW ensemble with the experimental phase-related intensity fractions of RHEED.
If the axial growth rate is given (in our case it is determined by XRD and confirmed by SEM) and no phase transformation at the radial facets or in the NW volume occurs (as has been confirmed in our case by combining RHEED and XRD), the strong ensemble-shadowing allows direct translation of the measured temporal evolution of the phase-sensitive RHEED intensity fractions, by eqn (15), into the final height profile of the corresponding phase fractions (Fig. 4(e)). Until approximately h NW A = 180 nm, the NW stem consists mainly of WZ (shown in red), followed by a fast transition to mainly ZB and TZB (shown in green) until the stop of the Ga flux at a NW height of approximately h NW A = 740 nm, and a subsequent WZ-rich segment until h NW A = 800 nm. Concluding, we distinguish two regions during growth with a high WZ generation probability and one with ΣZB-rich growth. The generation probabilities of the different polytypes in VLS-grown NWs are determined by differences in the liquid catalyst's wetting angle at the NW tip. 8,15,18,20,23,46,47 Consequently, between these regions the wetting angle of the liquid Ga droplet, acting as catalyst particle, may change dramatically. The impact of the wetting conditions of the catalyst particle and of the geometry of the top facet on the crystal phase selection of self-catalysed GaAs NWs has been described recently by a model introduced by Panciera et al., 20 where four regions could be identified: (i) ZB nucleation with positive tapering at wetting angles β < β min = 100°; (ii) WZ nucleation without tapering between β min and β max = 125°; (iii) ZB nucleation without tapering in a very narrow regime between 125° and 127°; and (iv) ZB nucleation with negative tapering at larger contact angles. While the explanation of the upper WZ segment is obvious, due to the strong droplet variation following the stop of the Ga flux and the concomitant consumption of the catalyst on top of the NWs, the explanation of the lower WZ segment needs further investigation. In order to verify the hypothesis that changing wetting angles during the early NW growth stages are responsible for the bottom WZ segment, we study the time series deduced from samples B-F. Representative SEM images of these samples are depicted in Fig. 5(a), where we can directly observe a trend of the wetting conditions of the liquid droplet towards larger angles β over the growth time. Based on the post-growth SEM analysis of samples B-F and the axial growth rate of sample A (confirmed by the measured h NW (t B -t F )), we reconstructed the time-resolved mean polytype distribution from the in situ RHEED experiments combined with the simulations (details of the simulation parameters are listed in Table 1 in the ESI†). To support the results obtained by RHEED, ex situ XRD was additionally measured. In the samples' XRD reciprocal space maps (two examples are shown in Fig. 5(b) for samples C and F, corresponding to t C = 2.5 min, h NW C (t C ) = 66 nm and t F = 30 min, h NW F (t F ) = 830 nm) we integrated the intensity along Q x and corrected for the background and the crystallite contribution as reported in ref. 34, resulting in the Q z profiles of the NWs' GaAs(111) reflection. From these profiles we can determine the volume phase fractions f V p by applying a model based on the stacking sequences of the different polytypes created by a Markov chain. 34,48
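The Markov-chain stacking description used for the XRD fits can be sketched with a toy two-state model; the transition probabilities below are illustrative placeholders, not the fitted values of refs. 34 and 48:

```python
import numpy as np

# Toy sketch of a Markov-chain stacking model: each new (111) bilayer
# either repeats the zinc blende (ZB) stacking rule or switches to the
# wurtzite (WZ) rule. p_stay values are illustrative, not fitted.
rng = np.random.default_rng(1)

p_stay = {"ZB": 0.95, "WZ": 0.90}   # probability to keep the current rule
n_layers = 4000

state = "WZ"
sequence = []
for _ in range(n_layers):
    sequence.append(state)
    if rng.random() > p_stay[state]:
        state = "WZ" if state == "ZB" else "ZB"

seq = np.array(sequence)
f_wz = np.mean(seq == "WZ")
print(f"sampled volume fractions: f_WZ = {f_wz:.2f}, f_ZB = {1 - f_wz:.2f}")

# Analytical stationary fractions of the two-state chain for comparison
q_zb, q_wz = 1 - p_stay["ZB"], 1 - p_stay["WZ"]   # switching probabilities
print(f"stationary f_WZ = {q_zb / (q_zb + q_wz):.2f}")
```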
The experimental Q z profiles of each sample, corrected for the background and the crystallite contribution, as well as the best-fitting results of the Markov simulation model, are shown in Fig. 5(c). For the shorter NWs we observe a pronounced signal at the expected WZ position, whereas the longer-grown NWs show the opposite trend, with the main signal contribution located at the expected ZB position. The resulting volume phase fractions obtained by the Markov model and corresponding to t B -t F are plotted in the upper panel of Fig. 5(d) in dark blue and grey. The polytype fractions at the axial growth front f p (t) determined by RHEED are depicted in red and light green in the upper panel. By integrating f p (t) along h NW one obtains f V p (t) (in brown and dark green), which is in good agreement with the polytype volume fractions determined by XRD. In the lower panel of Fig. 5(d) we present the mean wetting angle and the mean NW radius of samples B-F. As colour code in the background, the mean height-resolved polytype distribution in the NW ensemble is shown. Growth starts with a high probability of WZ nucleation; for the first 200 nm of NW height, the crystal structure stays dominated by WZ. It changes around t = 5 min gradually towards the ZB polytypes. SEM analysis of samples B-D (t B -t D ) shows a constant radius within the fluctuation of the measured NWs. At larger h NW , starting with sample E, negative tapering occurs. The wetting angle β for samples B and C is close to 100° until a NW height of h NW B (t B ) = 66 nm, followed by an increase up to a constant value of approximately 140° for the last two samples E and F. At higher β, the crystal structure is ZB with concomitant inverse tapering. These findings are in full agreement with the results presented by Panciera et al., 20 where WZ is expected at wetting angles between 100°-125°, and by Dursap et al., who also identified this upper transition angle. 23 While Panciera et al. confirmed their model of self-catalysed VLS growth with results obtained during the later stages of nanowire growth in an environmental TEM, our results emphasize its applicability to standard growth conditions in common growth reactors at the onset of NW growth.

Conclusions

Studying the reasons for the different temporal evolution of the scattering intensity of in situ XRD and in situ RHEED during growth of statistical NW ensembles allows the targeted use of their complementarity. The former is particularly sensitive to the axial and radial growth rates and the volume fractions of polytypism, the latter to the quantitative nucleation probabilities at the axial growth front. Both are applicable to standard MBE growth conditions. Applying both in situ techniques simultaneously during growth gives a comprehensive quantitative experimental picture of the evolution of growth rates and of polytypism of large NW ensembles. Thanks to their complementarity, the combined data analysis of both techniques does not require a priori assumptions concerning the particular growth model. It permits the time-dependent evaluation of the growth dynamics and of the evolution of polytypism of the NW ensembles during growth, from which we can also determine the height-dependent final state after growth. Even on its own, in situ RHEED is eminently suitable for quantitative determination of the evolution of the phase fractions of the main polytypes near the axial growth front, if the axial and radial growth dynamics of the ensemble are known.
Supported by post-growth ex situ XRD and SEM, the methodical portfolio is sensitive to the phase generation probability defined by the VLS growth, provides access to the characterization of NW polytypism, and allows identification and quantification of discontinuities of phase purity during the growth process. Assuming the phase fraction at a given height to be known or to be stationary over time, the temporal evolution of the measured phase fractions can be translated into the final height profile of the phase fractions along the NW. The experimental examples quantitatively confirm the relations between the wetting angle and the changes in phase and phase purity expected from the VLS growth model presented in ref. 20, now for the case of large, randomly positioned NW ensembles at the onset of growth.

Conflicts of interest

There are no conflicts to declare.

We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at PETRA III and we would like to thank S. Francoual and D. Reuther for assistance in using P09. This work was funded by BMBF project 05K16PSA.
Diagnostic and Prognostic Importance of the Neutrophil Lymphocyte Ratio in Breast Cancer

Background: The aim of this study was to determine the diagnostic and prognostic roles of the neutrophil to lymphocyte ratio (NLR) in breast cancer patients. To date, data are limited on associations of primary breast carcinoma (PBC) and benign proliferative breast disease (BPBD) with preoperative NLR values. Materials and Methods: Our study covered 120 female patients with PBC and 50 with BPBD. Diagnostic values of NLR were estimated using sensitivity, specificity and areas under receiver operating characteristic curves (AUC). Results: NLR values were significantly higher in the PBC patients than in those with BPBD, with an AUC of 0.668 in the PBC case. The optimal cut-off for NLR was 2.96 and this was validated in the testing set, giving a sensitivity and a specificity of 79.7% and 76.2%, respectively, in PBC patients. Conclusions: Preoperative high NLR is a significant diagnostic predictor for the distinction of breast cancer from BPBD, and elevated NLR is also an important prognostic marker for primary invasive breast cancer.

Gulzade Ozyalvacli 1 *, Cemile Yesil 2 , Ertugrul Kargi 3 , Betul Kizildag 4 , Asuman Kilitci 1 , Fahri Yilmaz 1

NLR is an inexpensive and simple parameter of systemic inflammation. It is associated with prognosis in various types of cancers including gastrointestinal tract cancers, hepatocellular carcinoma, non-small cell carcinoma and cervical carcinoma (Walsh SR et al., 2005; Guthrie et al., 2013; Unal et al., 2013; Eryilmaz MK et al., 2014; Kemal et al., 2014). Preoperative NLR values can contribute to clinicians' assessment of diagnosis and prognosis in BPBD and breast cancer, both of which are associated with inflammation. The purpose of the present study was to assess the association between primary breast carcinoma and BPBD, which are related to inflammatory processes.

Cases

The data were retrospectively collected between February 2005 and June 2014. The study includes 120 female patients with primary invasive breast carcinoma and 50 female patients with BPBD. In assessing ER and PR scores, only nuclear expression was considered (Lester et al., 2009; Hammond et al., 2010); for ER, staining of >10% of tumor nuclei was considered positive. PR expression was considered positive if the nuclei of more than 1% of cells stained positive (Diaz et al., 2004; Fisher et al., 2005). HER2 score assessment was made according to the intensities and proportions of the cells showing membrane staining (Lester et al., 2009; Gutierrez and Schiff, 2011). The cases with the triple negative phenotype (14.4%) were assessed for the expression of CK 5/6. Tumor cells with weak or strong cytoplasmic and/or membranous CK 5/6 positivity were scored as positive (Choccalingam et al., 2012).

Peripheral blood analysis

Preoperative complete blood counts (leukocytes, neutrophils, and lymphocytes) of the patients were analyzed and the neutrophil/lymphocyte ratio (NLR) was calculated. Patients with active infection, active bleeding, hematological disorders, acute or chronic inflammatory or autoimmune disease, splenectomy or steroid therapy were excluded from the study.
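A minimal sketch encoding the quantities defined above; the helper functions are ours, and HER2 positivity is taken here as IHC 3+, following the Discussion:

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from absolute counts (10^9/L)."""
    return neutrophils / lymphocytes

def receptor_status(er_pct, pr_pct, her2_ihc):
    """Illustrative encoding of the scoring rules quoted above:
    ER positive if >10% of tumor nuclei stain, PR positive if >1%,
    HER2 positive taken as IHC 3+ (an assumption based on the text)."""
    er, pr, her2 = er_pct > 10, pr_pct > 1, her2_ihc == 3
    return {"ER": er, "PR": pr, "HER2": her2,
            "triple_negative": not (er or pr or her2)}

print(nlr(neutrophils=5.2, lymphocytes=1.6))        # -> 3.25
print(receptor_status(er_pct=0, pr_pct=0, her2_ihc=1))
```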
Statistical analysis

Data analysis was performed using SPSS for Windows, version 17.0 (SPSS Inc., Chicago, IL, United States). Continuous and ordinal data are shown as mean ± standard deviation, while numbers of cases or percentages are used for categorical variables. Whether the differences among groups in continuous and ordinal variables were statistically significant was evaluated by the Kruskal-Wallis test. When the p value from the Kruskal-Wallis test statistic was statistically significant, Conover's non-parametric multiple comparison test was used to determine which groups differed from which others. Categorical data were analyzed by Pearson's chi-square or Fisher's exact test, where applicable. The diagnostic value of NLR was assessed using receiver operating characteristic (ROC) curve analysis. Logistic regression models were fitted to calculate the risk (odds ratio [OR] and 95% confidence interval [CI]). A p value less than 0.05 was considered statistically significant.

In the groups with breast tumors and BPBD, the NLR values were 4.08 ± 1.54 and 3.13 ± 1.27, respectively. There was a statistically significant difference between these two groups in NLR (p < 0.001). There were significant positive correlations between NLR and age, grade, lymph node metastasis and tumor size (Table 2) (Figure 1). In tumors with lymphovascular invasion the value of NLR was significantly higher (p < 0.05).

Discussion

In this study, we have demonstrated that elevated preoperative NLR is a significant factor predicting breast cancer diagnosis. Preoperative NLR was significantly higher in patients with breast cancer than in those with BPBD. Our study showed that high preoperative NLR values in patients with a suspicious breast mass can be used as a predictor of malignancy and can be useful for clinicians and pathologists in the management of breast masses.

Increasing evidence supports that inflammation plays a major role in the development and progression of cancer. Absolute counts of neutrophils and lymphocytes can be altered by various physiological, pathological and physical events, but NLR is not affected by these factors (Proctor et al., 2012). Tumors are infiltrated by leucocytes and produce cytokines and chemokines. These cytokines and chemokines have the potential to stimulate tumor cell proliferation and may contribute directly to malignant progression. Many cytokines and chemokines can be induced by hypoxia and oxidative stress, which is one of the most important physiological differences between tumor and normal tissue (Balkwill and Mantovani, 2001). TNF is a major mediator of inflammation and can be detected in malignant or stromal cells in human cancers, especially in breast, ovarian, prostate, bladder and colorectal cancer (Naylor et al., 1993; Burke et al., 1996). In breast cancer, infiltrating leucocytes are a major source of TNF (Leek et al., 1998). On the other hand, the lymphocytic response is the main component in the control of cancer progression. Tumor infiltrating lymphocytes, especially natural killer cells and T helper type 1 cells producing interferon gamma, are effective against cancer growth and/or metastasis in several cancers (Ohashi et al., 2006). The cellular immune response is decreased as a consequence of lymphocytopenia.
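A minimal sketch of this analysis pipeline on synthetic data (group sizes and moments mimic the reported values, but these are not the study data):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Sketch of the reported analysis pipeline on synthetic data: a group
# comparison of NLR, a categorical test, and a logistic regression
# reported as an odds ratio with 95% CI. Not the study data.
rng = np.random.default_rng(42)
nlr_pbc = rng.normal(4.08, 1.54, 120).clip(0.5)    # carcinoma group
nlr_bpbd = rng.normal(3.13, 1.27, 50).clip(0.5)    # benign group

h, p = stats.kruskal(nlr_pbc, nlr_bpbd)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Chi-square on an illustrative 2x2 table (high/low NLR vs group)
table = np.array([[(nlr_pbc > 2.96).sum(), (nlr_pbc <= 2.96).sum()],
                  [(nlr_bpbd > 2.96).sum(), (nlr_bpbd <= 2.96).sum()]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square: {chi2:.2f}, p = {p_chi:.4f}")

# Logistic regression: malignancy ~ NLR, reported as OR with 95% CI
y = np.r_[np.ones(120), np.zeros(50)]
X = sm.add_constant(np.r_[nlr_pbc, nlr_bpbd])
fit = sm.Logit(y, X).fit(disp=0)
or_, lo, hi = np.exp(fit.params[1]), *np.exp(fit.conf_int()[1])
print(f"OR per unit NLR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```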
Recent studies showed that decreased tumor infiltrating T cells are associated with poor prognosis in some cancers (Walsh et al., 2005; Fogar et al., 2006; Bhatti et al., 2010; Dou et al., 2013). All of these effects of inflammation lead to an increase of the NLR in peripheral blood. NLR is a convenient, inexpensive and reproducible method which can show the association between inflammation and tumour. Previous studies clearly suggested the diagnostic and prognostic importance of NLR in different types of cancer patients (Guthrie et al., 2013). In light of these studies, we aimed to determine the sensitivity and specificity of NLR in breast cancer. The cut-off value for NLR was determined as 2.96 in this study, with a sensitivity and specificity of 79.7% and 76.2%, respectively. To our knowledge, this is the first study in the literature evaluating preoperative NLR for distinguishing breast cancer from fibrocystic disease. Similar to this study, Kemal et al. showed that lung cancer patients had high levels of NLR compared to a healthy control group, and they found that NLR could be useful in lung cancer diagnosis (Kemal et al., 2014). Additionally, older age, high histological grade, nodal involvement and larger tumors were shown to be associated with high NLR. These results are consistent with the results of previous studies (Azab et al., 2012; Dirican et al., 2014). Breast cancer is considered a highly heterogeneous disease. Different types of this cancer exhibit variable histopathological features and different outcomes. Based on this high degree of heterogeneity, these tumors have recently been classified according to molecular characteristics (Perou et al., 2000; Sorlie et al., 2003; Viale, 2012). In the literature, several studies have shown that the triple negative (ER, PR and HER2 negative; basal-like type) and HER2 subtypes are associated with poor prognosis compared with the luminal A subtype (Parise et al., 2009; Zhao et al., 2009; Su et al., 2011; Lv et al., 2011; Rao et al., 2013). In the present study, positive HER2 (IHC 3+) and negative ER, PR and HER2 (positive with CK 5/6, basal-like type) were associated with high NLR, while positive ER and PR were related to lower NLR. Similar to the current study, Dirican et al. reported that positive HER2 was associated with high NLR and negative ER and PR with lower NLR (Dirican et al., 2014). Differently from that study, we found that both the positive HER2 and the triple negative type were associated with high NLR. Due to the higher NLR values in the basal-like and HER2 types in this study, NLR could be an independent significant predictor of prognosis in breast cancer patients.

In conclusion, we think that higher preoperative NLR may be used as a distinctive predictor of breast cancer diagnosis and prognosis. Also, preoperative NLR could be a predictive marker for clinicians and pathologists before the evaluation of histopathological slides.

Figure 1. The Relationship between Age, Tumor Size and NLR with Breast Cancer
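The ROC-based cut-off selection discussed above can be reproduced in outline as follows; the data are synthetic and purely illustrative, so the printed AUC and cut-off will differ from the study's 0.668 and 2.96:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Sketch of the ROC-based cut-off selection for NLR on synthetic data.
rng = np.random.default_rng(0)
nlr = np.r_[rng.normal(4.08, 1.54, 120), rng.normal(3.13, 1.27, 50)]
label = np.r_[np.ones(120), np.zeros(50)]   # 1 = carcinoma, 0 = benign

auc = roc_auc_score(label, nlr)
fpr, tpr, thr = roc_curve(label, nlr)

# Youden's J picks the threshold maximising sensitivity + specificity - 1
j = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, cut-off = {thr[j]:.2f}, "
      f"sensitivity = {tpr[j]:.1%}, specificity = {1 - fpr[j]:.1%}")
```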
The golden age of social science

Social science is entering a golden age, marked by the confluence of explosive growth in new data and analytic methods, interdisciplinary approaches, and a recognition that these ingredients are necessary to solve the more challenging problems facing our world. We discuss how developing a "lingua franca" can encourage more interdisciplinary research, providing two case studies (social networks and behavioral economics) to illustrate this theme. Several exemplar studies from the past 12 y are also provided. We conclude by addressing the challenges that accompany these positive trends, such as career incentives and the search for unifying frameworks, and associated best practices that can be employed in response.

interdisciplinarity | diverse teams | new data | difficult challenges

Social science is entering a golden age (1). A rise in interdisciplinary teams working together to address pressing social challenges, leveraging the explosive growth of available data and computational power, defines this moment. Each of these trends has been written about individually-the "big data revolution" has been transforming social science for several years (1), and the benefits of diverse teams are increasingly recognized and quantified (2, 3). We argue that it is the confluence of data, diverse teams, and difficult challenges which makes this a unique and exciting time for social scientists to tackle important research questions. Of course, there have been large team efforts in previous decades (4), but their frequency and breadth have increased recently. Funding agencies have, in turn, recognized the need to support interdisciplinary teams. Fig. 1 presents evidence from multi-investigator grants funded by the NSF of how interdisciplinary research is on the rise in social science. Given the difficulty in defining interdisciplinary work, federal agencies have chosen to use the number of grants provided to projects with multiple principal investigators as a proxy (5, 6). These data resonate with our idea of what interdisciplinarity means in this golden age: active collaboration among scientists with different training-meaning a diversity of perspectives is influencing the research-as opposed to one researcher passively borrowing ideas from other fields. We hope our perspective will encourage scientists to take advantage of new datasets and form diverse collaborations to answer pressing questions. We direct these ideas especially to funding agencies and academic institutions, to convince them to provide more funding for this type of work. Ultimately, we wish to see an acceleration in work that addresses difficult challenges. For instance, the COVID-19 pandemic illustrates how large-scale problems will only be solved by many scientists contributing what they know best.
The Need for a Lingua Franca

The opening of disciplinary borders is akin to an increasing trade of methods, language, and knowledge across fields. This concept of trade is built on the premise that, like people and countries, each social science discipline has a different endowment (i.e., a historical mastery of tools and accumulated knowledge) and comparative advantage. Defining how the social science disciplines differ is difficult, but even a thumbnail sketch can clarify our ideas about comparative advantages and the value of trade. Hoping that the reader will appreciate that we overemphasize differences in fields (and ignore variation within them), we define them as follows. Anthropology seeks to understand cultural differences in human societies using ethnography, unearthing physical details of human development and exploring mathematical models of coevolution of culture and genes. Economics uses math-heavy methods to understand systemic (general equilibrium) outcomes of optimization of allocation of scarce resources, particularly money, in trading goods and services. Its main methods include theories rooted in preferences, beliefs, and constraints and analyses of field data. Political science studies formal systems of government, voting, juries, and law, which influence how people make consequential decisions collectively in different systems. Ideology is a central construct, with polls and surveys being a cornerstone method, although media and financial contributions data are increasingly used. Psychology seeks regularity in how people think and behave, with an emphasis on mechanisms and constructs such as memory, attention, and emotion. The main methods are laboratory experiments and psychometric or psychophysiological measures (though cognitive neuroscience uses a greater variety of newer methods). Finally, sociology investigates how the social world is created by, and influences, how people act in social groups at different levels of formal and informal aggregation. General ideas about functions of social structure are central but are not mathematized as in economics (e.g., economists might focus on allocative efficiency defined mathematically while sociologists might focus on social reproduction of elite success measured statistically or qualitatively). Readers may view these highly reduced descriptions of their own fields as overly simplified, while perhaps believing that the descriptions of the other fields are not too bad. That perception itself illustrates why communication is a challenge for interdisciplinary work. Complicating trade is the fact that many words like "rationality," "trust," "discrimination," "hierarchy," "salience," and "power" are used across the social sciences, but in different ways. Their local meanings are understood by "native speakers" but often baffling to "traders" arriving from foreign scientific lands. Interdisciplinarity needs a common trade language across disciplines, a "lingua franca." In order for teams of researchers to effectively tackle the complex research questions of our time, they will need to work together to build a common vocabulary that enhances the efficiency of their trade and collaboration.
Examples of lingua franca which originated in individual disciplines include an understanding of culture from anthropology, rational choice theory from economics, ideology from political science, laboratory experimental methods from psychology, and social networks from sociology. Besides these central constructs, powerful tools for quasi-experimental causal inference-which originated in psychology (8), created a boom through more sophisticated use of instrumental variables in economics starting in the 1990s (9), a little later in political science (10), and somewhat in parallel in computer science and statistics around 1995 (11)-have evolved as a methodological lingua franca across the social sciences. A useful lingua franca, one which is to be a truly unifying framework, will need to cut through the technical jargon specific to any one field of origin in order to be widely accepted and used. In a useful lingua franca, all disciplines adopt the "best" language from whichever discipline has described an idea most effectively. Taking the time to build such a lingua franca will enable diverse teams to tackle multidimensional problems and create innovations for better health, wealth, and well-being (12). Drug addiction, obesity, sustainability and climate change, technology-driven changes in sociopolitical discourse, "fake news," and how artificial intelligence will change our world will never be fully understood by any one discipline working alone. Instead, making progress on these challenges will require understanding the institutional incentives, cultural norms, cognitive mechanisms, and social network effects that create and sustain these phenomena. Interdisciplinary work has already helped make progress in fields including poverty, health epidemics, and mental health.

Learning from Case Studies

In the next section, we present two "case studies" of successful interdisciplinarity: social network science and behavioral economics. In both cases, interdisciplinary research led to the creation of new cross-disciplinary fields of inquiry built on the comparative advantages of contributing fields, inspiring a shared lingua franca, generating insights about human nature, and improving social outcomes. These cases originated decades ago, so they are not meant to illustrate the three features that we take to characterize the golden age. While the original research was not particularly propelled forward by large, diverse datasets or by a desire to tackle global challenges, recent research has moved in those directions (Figs. 2 C and D and 3C).

Social Networks

Social networks are our first case study of a successful interdisciplinary enterprise. Network analysis uses methods from physics, computer science, and applied math to analyze questions often studied by sociologists, anthropologists, and psychologists regarding how interpersonal relationships are formed and how behaviors, beliefs, and emotions are transmitted across connected individuals (13). One striking feature of network analysis is the diversity of scholars who have been active in researching this field from the beginning, and who continue to contribute to intellectual progress (see Fig. 2 for some examples). People from different fields, traditions, and countries have worked together on related research questions (14). Network analysis has been significantly enabled by the availability of novel datasets, such as social media connections, and data from increasingly "connected" devices such as fitness trackers with social aspects (15).
Notable contributors to the field of network analysis are Watts and Strogatz (16), who brought to light several key network properties, including that real-world networks are neither totally ordered (there are not always clear rankings between nodes) nor completely random (with all nodes having equal probabilities of being connected with other nodes). Their work was important in getting the statistical physics community to recognize that their techniques could be applied to social settings, thus catalyzing an interdisciplinary turning point. It is worth noting that subsequent research, which flourished primarily in sociology, economics, and applied mathematics, did not necessarily follow directly from this original paper. One attractive feature of network science is that simple mathematical models capture the core features of complex networks, allowing the study of network dynamics across a variety of phenomena. The seemingly unrelated affiliations between actors, power grid transmission lines, and the neural network of Caenorhabditis elegans can all be captured via a simple "small-world" network model, a mathematical graph in which the nodes (individuals) are not neighbors with most of the other nodes and yet all other nodes can be reached in a small number of steps (17-19).

Example 1: Revisiting Influence and Information Transmission. Collective behaviors are often studied at a static point in time, implicitly assuming that all individuals simultaneously make independent decisions. However, the heterogeneous process of information accumulation and integration prior to decision-making suggests that many decisions are actually made sequentially and that beliefs can be "transmitted" from one individual to the next. Given that many behaviors (from smoking to divorce to employment) are in fact "contagious" across connected individuals and groups, the dynamics of such contagion are of immense interest to social scientists. The field of cultural evolution has been modeling information transmission for several decades, using both epidemiological and social network models in its approach (24). Broadly, social contagion models allow simulating the speed at which individuals receive information and how past interactions influence their future behavior (13). These models focus on a handful of key parameters, which can be grouped as 1) degree centrality, 2) eigenvector centrality, 3) diffusion centrality, and 4) betweenness centrality/bridging (18). While one might not wish to be central in an HIV infection network, centrality is viewed as an advantage in most social networks and is correlated with financial success (25) and well-being (26). Degree centrality captures "popularity": the sheer number of connections an individual has, and the speed at which such individuals can transmit information to a wide group at once. Eigenvector centrality, which captures how many well-connected others one is connected to, has been used to study social status and scapegoats (27). Diffusion centrality is a measure of "reach," showing how well positioned an individual is to spread and hear about information. Finally, betweenness centrality, or bridging, captures "social chameleons" who connect otherwise disparate groups. Interestingly, all of these positions appear to be context general: if an individual is central in one network, they are likely to be central in another, and so forth (18).
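To make the four centrality notions concrete, here is a minimal sketch (ours, not from the cited studies) that computes them on a small-world graph of the Watts-Strogatz type discussed above. It assumes the Python libraries networkx and numpy; the diffusion-centrality approximation follows the walk-counting definition popularized by the economics literature (30), and the parameters q and T are purely illustrative.

```python
import networkx as nx
import numpy as np

# Small-world graph: 100 nodes, each tied to 4 ring neighbors, 10% rewired.
G = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=42)

degree = nx.degree_centrality(G)                     # "popularity": share of direct ties
eigen = nx.eigenvector_centrality(G, max_iter=1000)  # ties to well-connected others
bridging = nx.betweenness_centrality(G)              # "social chameleons" linking groups

# Diffusion centrality: expected reach of a signal passed with probability q
# for T rounds, approximated by summing powers of the scaled adjacency matrix.
A = nx.to_numpy_array(G)
q, T = 0.2, 5
walks = sum(np.linalg.matrix_power(q * A, t) for t in range(1, T + 1))
diffusion = dict(zip(G.nodes, walks.sum(axis=1)))

top = max(degree, key=degree.get)
print(f"most 'popular' node: {top} (degree centrality {degree[top]:.3f})")
```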
Each of these four "centralities" has different disciplinary origins: the idea of degree centrality began with sociologist and philosopher Georg Simmel (28); eigenvector centrality is a concept from graph theory, first used by mathematician Edmund Landau in an 1895 paper on chess tournaments (29); diffusion centrality was popularized in the recent literature by economists interested in the speed of information transmission (30); and betweenness centrality, or bridging, comes from the sociology literature analyzing the creation and upkeep of social capital (31). In other words, the development of these social contagion models was itself an interdisciplinary enterprise from the beginning. Since its creation, network analysis has allowed researchers to apply new tools while revisiting old questions about social influence. For example, researchers have investigated the types of individuals in a network to whom people gravitate, and hence who may be more influential at spreading information of various types (26). Computational modeling methods have been used to show quicker consolidation of majority opinion and more successful spread of initially unpopular beliefs in populations characterized by greater susceptibility to social influence (32). Other work using standard economic games has found that people give less money to those who are more socially distant (33). This has important implications when combined with the role that homophily plays in social networks, with many schools being heavily segregated by race, for example (34). Given the race-based economic disparity in many countries, this analysis has taught us that increasing the transfer and exchange of capital between people of different backgrounds must accompany efforts to better interlink their social networks.

Example 2: The Spread of Infectious Disease. Sociologists have been integral to guiding the development of network models, given how ubiquitously such models help explain the spread of anything from disease to innovation (35). For example, most infectious diseases spread through human contact, making the study of infection a natural place to apply network analysis. One of the first and longest-used models of disease spread, known as the SIR model, was introduced by Kermack and McKendrick in 1927 (36). This simple model assumes three "types" of people in the population of interest: susceptibles ("S"), infected ("I"), and recovered ("R"). The model makes a number of necessarily simplifying assumptions, including that people can only be infected once before they move into the "R" group and are thereafter considered forever immune to the disease, and that only two people can come into contact at any one point in time. More recently, Kretzschmar and Morris, following discussions with people who described how disease was "actually spreading" during a trip to Uganda, worked to create better ways to model the spread of HIV. Specifically, their new model handled multiple connections (multiple sexual partners) at once, something which is still closer to the norm than the exception in several societies (37). The model confirmed that small variations in concurrency (simultaneous sexual partners) can have dramatic effects on a population's vulnerability to HIV (38). Morris's team continues to collaborate across disciplines (with sociologists and statisticians; she is a professor of both), as well as across geographies (with several collaborators in Africa), to improve models of the spread of infection and apply them to new and better datasets.
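The SIR dynamics just described can be written as three coupled differential equations. Below is a minimal numerical sketch (our illustration; the transmission rate beta and recovery rate gamma are arbitrary placeholders, not estimates from the cited studies):

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta * S * I              # susceptibles become infected via contact
    dI = beta * S * I - gamma * I   # infected accumulate, then recover
    dR = gamma * I                  # recovered are assumed immune forever
    return [dS, dI, dR]

t = np.linspace(0, 160, 400)                  # days
y0 = [0.999, 0.001, 0.0]                      # population fractions: S, I, R
S, I, R = odeint(sir, y0, t, args=(0.3, 0.1)).T
print(f"epidemic peak: {I.max():.1%} infected around day {t[I.argmax()]:.0f}")
```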
Epidemiological models are at the scientific center of the current COVID-19 pandemic, and many versions have been proposed. One interdisciplinary group developed a "risk source" model that uses population flow from a disease epicenter to predict infections in other locations, controlling for gross domestic product and population size. Using Chinese cell phone geolocation data, they found that, over time, the spreading pattern of severe acute respiratory syndrome coronavirus 2 can be associated with the pattern of population outflow from Wuhan. The model led to a daily risk score that identifies high-transmission areas at a very early stage (39). More recently, another interdisciplinary team compared three epidemic models in fitting time-series government data. They found that an SIR model best fits the data going into the peak of the disease and that all three models show the importance of social distancing in mitigating the negative effects of the pandemic (40).

Summary. Network science would have been less successful without scientists from different disciplines borrowing ideas and communicating in a shared language about constructs and methods. Innovation in network science has benefited from the wide network of researchers who share a lingua franca, transmit high-fidelity information, and bring diverse perspectives to the table. Networks and their properties are fundamentally interesting because they underpin such a wide range of phenomena. Unlike in behavioral economics, there was less conflict among those studying networks because the concept of a network was so obviously appealing and useful from the start (i.e., there was no interdisciplinary conflict about whether people "were networked" as occurred about whether people "were rational"). Furthermore, while sociologists studied networks first (14), the difficult question of what networks arise when people have scarce social bandwidth and can choose network links was cracked by economists (41). Moreover, the increasing availability of large, novel datasets that capture connections between individuals, such as social media and online communication data, has truly turbo-charged network science.

Behavioral Economics

Economics has arguably shown the most dramatic shift toward a golden age in terms of citation patterns (42). It has been increasingly citing other social sciences ("importing") and is also being cited more ("exporting" citations) by other social science fields, particularly political science and sociology, from 1970 onward. [Displaced figure caption: This finding reflects trade between behavioral economics, evolutionary psychology, and cultural anthropology. Reprinted from ref. 69, with permission from Elsevier.] Much of this trade comes from behavioral economics, which uses evidence and methods from other social sciences (psychology in particular) to analyze natural limits on human computation, willpower, and selfishness (43). These analyses make new predictions about field data, leading to novel suggestions about how markets work and what policies might be effective. Analyzing such limits is of interest because conventional rational choice theory assumes maximization of subjective values ("utilities") and Bayesian integration of information, often over a long time horizon and accounting correctly for risks. However, research over the past few decades has shown that, in reality, people often do not act that rationally. Granted, rational choice theory was always intended to be useful rather than realistic.
Behavioral economists aimed to have theories that are more realistic and more useful. At first, there was substantial hostility toward the behavioral approach, largely because it was not clear how models using only preferences, beliefs, and constraints could incorporate psychology (44,45). Thaler and others (46) used an "insider" approach (47). They took rational choice theory as a simple benchmark, identified important empirical "anomalies" that could not be sensibly explained by that benchmark, and added extra ingredients sparingly to explain the anomalies and make new predictions. The first step was to begin with highly controlled laboratory experimental evidence to convince skeptics and establish plausible alternative theories. The researchers then explained and predicted field data. Alternative theories with a small number of added parameters were developed so that rational and behavioral predictions could be compared (48).

Example 1: Loss Aversion. Conventional economic analysis typically relies on expected utility theory, a model which assumes that people choose risks by weighing the subjective utility of the prospective outcomes of each risk by their probabilities and choosing the risk with the greatest expected utility. In their influential "prospect theory," Kahneman and Tversky proposed a more psychologically plausible alternative: that outcomes are subjectively valued by their gains and losses relative to a reference point (49). In addition to reference dependence, prospect theory incorporated the idea that potential losses may be weighted disproportionately more than gains. This "loss aversion" is measured by a parameter, λ, the ratio of loss disutilities to gain utilities (or of the corresponding marginal utilities), which is around 1.9. Loss aversion has been used to explain different phenomena, including 1) taking financial risks in laboratory experiments (50), 2) why stocks historically return so much more than bonds (51), and 3) why there is a gap between the high prices demanded to sell goods and the lower prices paid to buy the same goods, an "endowment effect" (52). Psychologists have also found effects of emotions (53), cognitive sequencing (54), and attention (55) on endowment effects. Cognitive neuroscientists have found evidence for loss aversion in neural circuitry (56), including dissociations between circuitry valuing gains and losses (57) and an unusual tolerance of losses in patients with amygdala damage (58). Political economists have used loss aversion to understand bargaining concessions (59), elections (60), and trade policy (61). Fig. 3 illustrates estimates of loss aversion using large datasets (marathon running times) and an interdisciplinary perspective (the cultural anthropology concept of evolutionary salience correlating with the strength of loss aversion). While behavioral economists have not been keenly interested in the evolutionary and cultural origins of phenomena like loss aversion (62), there is evidence that loss aversion and endowment effects are present in monkeys (63) and great apes (64), though only for food and not for other valued goods (e.g., tools). Others found an unusual lack of endowment effects among market-isolated Hadza villagers in Tanzania (an example of behavioral economics trading with anthropology) (65). These data indicate that loss aversion and its behavioral implications are not universal and show why a wider scope of data is needed.
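The prospect-theory value function is compact enough to state in code. The sketch below (ours) uses the λ ≈ 1.9 loss-aversion ratio quoted above; the 0.88 curvature exponents are common published estimates and serve only as illustrative assumptions here.

```python
def value(x, lam=1.9, alpha=0.88, beta=0.88):
    """Subjective value of a gain or loss x relative to the reference point:
    concave for gains, convex and steeper (by lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A 50/50 gamble over +$100 / -$100 has zero expected value, yet its
# prospect value is negative, so a loss-averse agent turns it down.
gamble = 0.5 * value(100) + 0.5 * value(-100)
print(f"prospect value of the +/-$100 coin flip: {gamble:.2f}")  # about -25.9
```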
Loss aversion contributes to a "status quo bias," an exaggerated tendency to choose a suggested default or stick with a status quo (70). This insight has impacted public policy. Countries in which organ donation is the default and people must "opt out" have higher donation rates than those with opt-in donation (71). The first impactful application of default bias is the "Save More Tomorrow" (SMART) plan (72). In this plan, companies autoenroll workers into tax-advantaged 401(k) plans (unless they opt out) and invest a fraction of their next pay raise into the plan (so their paycheck does not go down and create a subjective loss). These plans have increased savings substantially (73). The SMART plan became a poster child for many types of "nudges," designed choices that help some people make better decisions at a low cost to others who are fine on their own (74,75).

Example 2: Social Preferences. Humans are the most prosocial species of all, often helping genetically unrelated individuals at a cost to themselves. Psychological theories of comparisons between self and others, beginning in the 1960s (76), planted the seed for studying social preferences in other disciplines. Behavioral economics later contributed new mathematical functions and data. Game theory is a lingua franca for this understanding, offering canonical strategic interactions that can be used to dissect elements of prosociality (76). For example, in the "ultimatum game" a proposer offers a share of a known amount of resources, such as $10, to a responder (77). If the responder accepts the offer, they collect their money and the proposer keeps the rest, but if the responder rejects the offer, everyone gets nothing. Rejecting an offer shows negative reciprocity: a willingness to sacrifice resources to harm an unfair person. Negative reciprocity can also be collective: in one study, police effectively solved fewer criminal cases after losing a wage arbitration (78). As the ultimatum game caught on across the social sciences, other games quickly followed, highlighting different psychological motives (1, 79, 80): 1) dictator allocations, in which the responder must accept the offer (measuring altruism and norm sensitivity but not reciprocity); 2) trust games, in which a first mover invests money that is multiplied, taking a social risk to potentially benefit both parties, gambling that the second mover will share the total gain (81,82); and 3) many-person gift-exchange labor markets, in which firms prepay wages and hope that workers exert effort which is costly to workers but benefits the firms (83). These economic games are now widely used across the social sciences. An interdisciplinary team, mostly anthropologists, used economic games to study cross-cultural sociality in small-scale societies (84). They learned that stronger sharing norms (whose violation was punished by ultimatum rejections) were associated with societal cooperation, such as building houses together, and with the extent of market trading. As interest in these games grew, the sociological lingua franca of a "norm" was imported widely. Norms are informal social rules that are expected to be followed and are usually informally self-enforced by social punishment for deviations (even absent legal enforcement). In dictator allocation games, for example, people have different subjective norms about what is fair to share. Their sharing is closely tied to what they think the norm is (85), reflecting "good manners" rather than altruism (86).
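The ultimatum game described above is easy to simulate. The toy sketch below (our illustration, not any cited experiment) gives responders a random fairness threshold below which they reject, so very low offers earn proposers little on average:

```python
import random

random.seed(0)
POT = 10.0

def play(offer, threshold):
    """One round: return (proposer payoff, responder payoff)."""
    if offer >= threshold:       # accepted: the pot is split as proposed
        return POT - offer, offer
    return 0.0, 0.0              # rejected: negative reciprocity, nobody is paid

for offer in (1.0, 3.0, 5.0):
    rounds = [play(offer, random.uniform(0, 5)) for _ in range(10_000)]
    avg = sum(p for p, _ in rounds) / len(rounds)
    print(f"offer ${offer:.0f}: proposer earns ${avg:.2f} on average")
```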
Cognitive neuroscientists have also used these games to identify circuitry implementing prosociality (87) and to associate brain lesions with abnormal social preferences (88). Knowing more about social preferences has not contributed immediately to solving social problems at the scale that "nudging" has. However, experiments have suggested social forces that could enhance prosociality. For example, allowing people to punish others who have behaved antisocially seems to increase cooperation (89), although the results vary cross-culturally (90). New evidence has also invigorated the understanding of charitable giving (91). In the future, diagnostic tools will likely emerge from a better understanding of sociality, with applications ranging from psychiatry to methods for developing empathy and perhaps analytics matching people to jobs.

Summary. Before the growth of behavioral economics, it was commonly said that moving away from rational optimization would lead to an unfalsifiable theory in which "anything can happen." However, psychology showed that what happens is captured by psychological principles; something specific, not "anything," happens. Loss aversion originated from perceptual psychology, and early prosociality theories came from social psychology. Experimental economics added more general mathematical and game-theoretic structure. In general, behavioral economists won over skeptics through the mantra that "the easiest way to win an argument is to run another experiment or another statistical regression" (43). In many areas of behavioral economics and finance, large datasets played an important role, including, more recently, multisite laboratory and field experiments (90,92). A treasure trove of experimental data came about as nudges and other ideas were implemented by "behavioral insight teams" in governments on every continent, currently just over 200 (93), to create better outcomes for citizens and consumers. More could be done integrating behavioral economic methods with the biological and cultural origins of preferences, norms, and cognitive limits (94) and extending beyond Western, educated, industrialized, rich, and democratic (WEIRD) societies (84), which do not represent all human activity.

A Spotlight on Specific Studies

This section shines a spotlight on research from the past 12 y that epitomizes the golden age of social science. We begin with one study of drug trafficking. Table 1 then presents nine other studies which are also good examples.* Each of these papers combines features of 1) active collaboration between researchers from different disciplines, 2) using new types of data, and 3) answering important and difficult questions. The Table 1 papers are about topics from exercise habits to social inequality and use diverse new datasets from genetics, brain imaging, browsing history, and more. Magliocca et al. (95) analyzed international drug trafficking in Central America (Fig. 4). The researchers tested an agent-based model against a database of estimated illicit drug flows from 2000 to 2014. The model successfully captures many of the underlying trends across time and countries in trafficking flow and interdiction. It reproduces two effects known as the "balloon" effect (when trafficking spreads into new areas) and the "cockroach" effect (when trafficking routes become fragmented after big drug busts) (95). This study illustrates practice and promise in the golden age.
Their team comprised nine coauthors from seven universities and one government organization, including a coauthor who remained anonymous to protect confidential sources. Their affiliations span geography, politics, biology, and earth sciences. This interdisciplinarity was essential to building a model that did not leave out anything crucial: it draws on the geography of crime (which focuses on where illegal drugs are made and used), on transaction costs (since the logistics and risks of shipping are crucial), and on vertical integration of the value chain. Their analysis includes the strength of political governance (e.g., police corruption), the economic inequality that drives the poor to produce narcotics, and geographic remoteness. Their approach imports new ideas from behavioral economics about learning (96) and the salience (97) of trafficking events, predicting spatial and temporal patterns of cocaine flow tested against an impressive, classified dataset. The model can be used to analyze how different policies would hypothetically change trafficking, prices, and drug use, a challenging problem of global importance.

Table 1. Nine other studies we believe to be similarly emblematic of the golden age of social science (finding | disciplines* | data):
- Inequality is associated with the intergenerational transmission of wealth across small-scale societies (98) | Anthropology, economics | Multigenerational measures of three types of wealth
- Greater exposure to war increases religiosity (99) | Anthropology, biology, economics | Surveys in postconflict societies
- Rwandans use the mobile phone network to transfer "mobile money" to those affected by unexpected economic shocks (100) | Economics | Mobile phone usage
- Brain responses to emotionally evocative images predict political ideology (101) | Political science, neuroscience, psychiatry | Functional MRI
- Genetic data can predict economic and political preferences (102) | Political science, economics, psychology, sociology | Genetic data (genome-wide association study)
- Musical preferences and personality traits are linked (103) | Psychology, marketing | Facebook likes
- Bystanders will help in public conflict (104) | Psychology, sociology | Closed-circuit television footage
- Social networks strongly influence exercise habits (15) | Sociology | Fitness tracking and social networks
- Predicting scientific paper impact from conventionality and novelty of citations (105) | Sociology, economics, operations, physics | New bibliometrics: Web of Science citation and impact data

*Authors' departmental affiliations are used for disciplinary identification.
*We first heard this phrase used by Adam Gurri (https://theumlaut.com/thegolden-age-of-social-science-has-begun-d7555098ac72).

Conclusion and Challenges

We hope this paper encourages scholars to pursue more interdisciplinary projects. However, this type of research also presents new challenges. The following obstacles disproportionately concern teams working on questions that cut across disciplines; we review each one and provide best-practice recommendations.

• The question of silos between journals, or where and how information is accumulated, can be a special challenge for teams who are used to contributing to traditionally disparate disciplines. Many journals cater solely to the readership of a specific discipline or discipline subfield, with authors citing papers predominantly from like-minded journals.
While cross-citation is on the rise, it is not guaranteed that interdisciplinary work will make equal contributions across fields, presenting the possibility of losing valuable insight with relevance to one of the fields. We encourage more journals to seriously consider and publish high-quality interdisciplinary research, even when it falls outside their traditional sphere of work. In the meantime, we encourage scholars to consider that an interdisciplinary project may produce multiple papers, such that all disciplines which contributed to the research will benefit from knowledge accumulated in the project.

• Closely tied is the question of career incentives and authorship. Academics are often encouraged to remain focused on contributing to their respective subject areas, which means working with other academics in the same subfield and publishing in specialized journals (see the previous point). Furthermore, differences in authorship norms across disciplines (such as the strong emphasis on solo-authored papers in economics) make some young researchers reluctant to join projects where bigger teams are better. If interdisciplinary work is to continue to thrive, hiring and promotion practices will need to adjust to value contributions in large teams that reach diverse journal audiences. In training and hiring new PhDs, we encourage departments and organizations to consider ways to expose trainees to more breadth in social science and to develop better ways to evaluate interdisciplinary research.

• Interdisciplinarity poses unique challenges for "open science," that is, the sharing of procedures, data, and code intended to make research more widely accessible, because different social science disciplines often have different tools and norms. As Stodden et al. note, "Current reporting methods are often uneven, incomplete, and still evolving" (106). However, this challenge is now widely recognized, and efforts are underway to improve open science in practice. We encourage researchers, especially new PhDs, to see this as an opportunity to define best practices for how the relevant sharing of data and code should be done.

• Another challenge is the creation of unifying frameworks to explain behaviors across disciplines. Better theories will constrain the number of explanations that could be derived from big data by setting appropriate priors for hypotheses. An expansion of methodological approaches alone will not increase scientific knowledge unless there is a common lingua franca or, even better, genuinely unifying frameworks. Social science would benefit from evolutionarily plausible theories that provide ultimate (function) and proximate (mechanism) explanations. We encourage trade-minded scholars to be humble and open to learning from other social scientists who have long histories of concepts and methods to share.

The obstacles discussed above are not to be downplayed, but there is reason to be optimistic: our increasingly connected age means that knowledge from other disciplines is much easier to access. To that end, here are some ways we can measure success in the years to come: more respected journals will seek out and publish work from diverse teams using unique datasets, more young scientists will engage in interdisciplinary research (thanks to improved institutional practices regarding career progress and encouragement from provosts and senior faculty), and more established scientists will engage in interdisciplinary work (thanks to increased interest from funding agencies).
Most importantly, scholars will increasingly focus on difficult questions, ones that may have been avoided historically because their complexity made them impossible to tackle from one discipline alone, and social science will be more impactful together than the sum of any one subdiscipline working on its own.

Data Availability. There are no data underlying this work.
All Superlinear Inverse Schemes are coNP-Hard

How hard is it to invert NP problems? We show that all superlinearly certified inverses of NP problems are coNP-hard. To do so, we develop a novel proof technique that builds diagonalizations against certificates directly into a circuit.

Introduction

In this paper we show that all superlinear inverse schemes of NP problems are coNP-hard. We develop a novel proof technique that allows us to diagonalize against all possible certificate sets. We feel that this "in-circuit diagonalization" proof technique is of interest in its own right.

The class NP can be viewed as the set of all languages $L$ such that there exist a polynomial-time computable verifier $V$ and a polynomial $q$ such that, for all $x \in \Sigma^*$, $x \in L \iff (\exists y \in \Sigma^*)[|y| = q(|x|) \wedge V(x,y) \text{ accepts}]$. A string $y$ such that $V(x,y)$ accepts is called a certificate or proof for $x$. Verifiers can formally be defined as follows (see Definition 2.3); in particular, $q : \mathbb{N} \to \mathbb{N}$ is required to be a strictly monotonic, integer-coefficient polynomial such that $(\forall x, y \in \Sigma^*)[V(x,y) = 1 \implies |y| = q(|x|)]$.

Inverting standard verification schemes can now informally be described as follows: Let $(V,q)$ be a standard verifier. Given a set $C$ of certificates, does there exist a string $x$ such that $C$ is exactly the set of certificates for $x$ (relative to $(V,q)$)? It is quite natural to choose a succinct representation of certificates, namely, in the form of a circuit. This leads to the following definition (see Definition 2.5) of the inverse problem, which basically asks if a set of strings specified by a circuit is such that some string has precisely those strings as its certificate set.

We show that inversion for all superlinear standard verification schemes is coNP-hard. In fact we show even more, namely, that inverting any standard verification scheme $(V,q)$ where $q$ grows faster than all outright linear functions $n + k$, $k \in \mathbb{N}$, is coNP-hard (see Theorem 3.2). So coNP-hardness in fact holds for all $\mathrm{Invs}_{V,q}$ where $(V,q)$ is a standard verification scheme and $q$ is a polynomial of degree either greater than one or of degree one with a degree-one coefficient $a_1 > 1$.

The proof of our main result is based on a proof technique that can informally be described as an "in-circuit diagonalization" against possible certificate sets. In particular, our in-circuit diagonalization technique uses a circuit to diagonalize against certificate sets that are potentially accepted by the very same circuit. The need to diagonalize in such an unusual way arises from the fact that when reducing $\overline{\mathrm{SAT}}$ to $\mathrm{Invs}_{V,q}$ (as we will do in the proof of Theorem 3.2) one has to map boolean formulas to circuits such that the following holds: If the formula is satisfiable then, for all $x$, the set of strings accepted by the circuit is not equal to the set of certificates for $x$ (relative to $(V,q)$); and if the formula is not satisfiable then there exists a string $x$ such that the set of strings accepted by the circuit is exactly the set of certificates for $x$ (relative to $(V,q)$).

Relatedly, $\Sigma^p_2$ is clearly an upper bound for the complexity of inverting standard verification schemes, and we prove that this upper bound is optimal by constructing a standard verifier such that its inversion problem is $\Sigma^p_2$-complete (see Theorem 3.7). Our actual construction in fact ensures that there exist a P set $A$ and a standard verifier $(V,q)$ for $A$ such that $\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-complete. Our results can be extended to also hold for the one-sided variant of inversion of verification schemes, $1\text{-}\mathrm{Invs}_{V,q}$.
The difference in the definitions of $\mathrm{Invs}_{V,q}$ and $1\text{-}\mathrm{Invs}_{V,q}$ (see Definition 2.5) is that instead of requiring "$\exists x \in \Sigma^*$ such that the set of strings accepted by the circuit equals the set of certificates of $x$" as in the definition of $\mathrm{Invs}_{V,q}$, we in the definition of $1\text{-}\mathrm{Invs}_{V,q}$ require "$\exists x \in L(V,q)$ such that the set of strings accepted by the circuit equals the set of certificates of $x$." In a fascinating paper by Chen [Che03], a type of inversion of NP problems is studied that is somewhat related to the above-described one-sided-inversion problem, $1\text{-}\mathrm{Invs}_{V,q}$, and $\Sigma^p_2$ results are obtained. However, the models are different; for example, in contrast to our definition, where certificates are given in a very succinct form, i.e., implicitly in the form of a circuit, Chen studied one-sided inversions of NP problems where the certificates are explicitly given, i.e., in the form of a set or a list, and, as mentioned above, Chen's focus is on the one-sided inversion problem.

Our paper is organized as follows. After formally defining the basic concepts in Section 2, in Section 3 we state and prove our main result, namely that all superlinearly certified inverses are coNP-hard. In Section 3 we also prove a number of related theorems, in particular the optimality of the $\Sigma^p_2$ upper complexity bound for $\mathrm{Invs}_{V,q}$. In Section 4, we turn to the complexity of recognizing whether machines compute verifiers, and we establish $\Sigma^0_2$-completeness results on this.

Without defining it formally, we will make use of a nice (i.e., polynomial-time computable and polynomial-time invertible) encoding of any boolean circuit (consisting of AND, OR, and NOT gates) as a word over the alphabet $\Sigma$. As is standard, we denote the outcome (0 or 1, representing reject/false and accept/true) of a circuit $c$ on input $x$ by $c(x)$. Let FP denote the set of all (total) polynomial-time computable functions, where these functions can be of arbitrary finite arities. We will use standard complexity classes such as P, NP, coNP, DP, and $\Sigma^p_2$. We mention in passing that P, NP, and DP are the low levels of the boolean hierarchy [CGH+88, CGH+89] and that P, NP, and $\Sigma^p_2$ are the low levels of the polynomial hierarchy [MS72, Sto76]. Let REC denote the set of all recursive languages. The second level of the arithmetic hierarchy, $\Sigma^0_2$, is defined as follows.

Definition 2.2 (see [Rog67]) A language $L$ is in $\Sigma^0_2$ if and only if there exists a language $B \in \mathrm{REC}$ such that for all $x \in \Sigma^*$, $x \in L \iff (\exists y \in \Sigma^*)(\forall z \in \Sigma^*)[\langle x, y, z \rangle \in B]$, where $\langle \cdot, \cdot, \cdot \rangle$ here is a standard, nice 3-ary pairing function.

As is standard, we will use $\leq_m$ (respectively, $\leq^p_m$) to denote recursive many-one reductions (respectively, polynomial-time many-one reductions) between languages. In the following we will define the basic concepts that allow us to study inverse NP problems.

3. We say a 2-ary Turing machine $M$ computes a standard verifier if there are a polynomial $r$ and a polynomial $q$ such that (a) $M$ runs in $r$-bounded time (by which we mean that for each $x, y \in \Sigma^*$, $M(x,y)$ halts in at most $r(|x|+|y|)$ steps), and (Note: Regarding types,

The following two facts are immediate and standard.

Fact 2.4
1. For every set $A \in \mathrm{NP}$ there exists a standard verifier $(V,q)$ such that $(V,q)$ is a standard verifier for $A$.
2. If $(V,q)$ is a standard verifier for a language $L$ then $L \in \mathrm{NP}$.

We now define the inverse problem for NP languages.

Definition 2.5 Let $A \in \mathrm{NP}$ and let $(V,q)$ be a standard verifier for $A$. $\mathrm{Invs}_{V,q} = \{c \mid c$ encodes a circuit $c'$ having $q(m)$ inputs for some $m \in \mathbb{N}$ such that $(\exists x \in \Sigma^m)[\{y \mid c'(y) = 1\} = \{y \mid V(x,y) = 1\}]\}$; $1\text{-}\mathrm{Invs}_{V,q}$ is defined analogously, with the existential quantifier restricted to $x \in A$.

It is not hard to see that for standard verifiers $(V,q)$, $\mathrm{Invs}_{V,q}$ and $1\text{-}\mathrm{Invs}_{V,q}$ are always in $\Sigma^p_2$.
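For intuition about Definition 2.5 and the $\Sigma^p_2$ upper bound, here is a brute-force sketch (ours, not the paper's; it takes exponential time and is purely illustrative). The toy verifier accepts exactly one certificate per string, and the membership test mirrors the exists/forall quantifier structure directly.

```python
from itertools import product

def q(m):
    return m                      # toy scheme: certificates have length |x|

def V(x, y):
    """Toy standard verifier: the unique certificate for x is its reversal."""
    return y == x[::-1]

def in_invs(circuit, m):
    """Brute-force test of Invs_{V,q}: does some x of length m have exactly
    the circuit's accepted set (over {0,1}^{q(m)}) as its certificate set?
    The any/all pair mirrors the Sigma2p exists-x/forall-y structure."""
    ys = ["".join(b) for b in product("01", repeat=q(m))]
    xs = ("".join(b) for b in product("01", repeat=m))
    return any(all(circuit(y) == V(x, y) for y in ys) for x in xs)

print(in_invs(lambda y: y == "011", 3))          # True: take x = "110"
print(in_invs(lambda y: y in ("00", "11"), 2))   # False: no x has 2 certificates
```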
However, $\mathrm{Invs}_{V,q}$ and $1\text{-}\mathrm{Invs}_{V,q}$ seem to differ with respect to their complexity lower bounds.

Proposition 2.6 There is a set $A \in \mathrm{NP}$ such that for all standard verifiers $(V,q)$ for $A$, $1\text{-}\mathrm{Invs}_{V,q} \in \mathrm{P}$.

One proof is by simply choosing $A$ to be $\emptyset$ or any other finite set. In contrast, for every standard verifier $(V,q)$ for $\emptyset$ we have that $\mathrm{Invs}_{V,q}$ is $\leq^p_m$-complete for coNP. The claim follows from the fact that, since $(V,q)$ is a standard verifier for $\emptyset$, the set $\mathrm{Invs}_{V,q}$ is essentially the set of all appropriate-number-of-inputs circuits that for no input evaluate to 1, and this set is easily seen to be in coNP. Also, it is straightforward to reduce the coNP-complete language $\overline{\mathrm{SAT}}$ to $\mathrm{Invs}_{V,q}$.

Inverting NP Problems is coNP-complete

Before stating our main theorem we need a technical definition.

Definition 3.1 A polynomial $q$ is called miserly if and only if for all $\epsilon > 0$ there exist infinitely many $n \in \mathbb{N}$ such that $q(n) \leq (1+\epsilon)n$.

Note that for strictly monotonic polynomials $p$, $p(n) = a_k n^k + a_{k-1} n^{k-1} + \cdots + a_1 n + a_0$ with $a_k > 0$, we have that $p$ is nonmiserly if and only if either (a) $k \geq 2$ or (b) $k = 1$ and $a_1 > 1$.

Theorem 3.2 Let $A \in \mathrm{NP}$ and let $(V,q)$ be a standard verifier for $A$ such that $q$ is a nonmiserly polynomial. Then $\mathrm{Invs}_{V,q}$ is $\leq^p_m$-hard for coNP.

This immediately yields the following, where by "nonmiserly standard verifier" we mean a standard verifier whose second component is a nonmiserly polynomial.

Corollary 3.3 No nonmiserly standard verifier for an NP set has an inverse problem belonging to NP, unless NP = coNP.

Proof of Theorem 3.2: Let $A \in \mathrm{NP}$ and let $(V,q)$ be a standard verifier for $A$. Suppose that $q$ is nonmiserly. We will show that $\overline{\mathrm{SAT}} \leq^p_m \mathrm{Invs}_{V,q}$. Let $F$ be a formula and suppose that $F$ has $n$ variables. Our reduction $g$ will map $F$ to the encoding $c = g(F)$ of a circuit $c'$. The circuit $c'$ will have $q(n')$ inputs, where $n'$ is the smallest natural number such that $q(n') > n + n'$. Note that since $q$ is nonmiserly, $n'$ is linearly related to $n$ and can be found in polynomial time. On input $z \in \{0,1\}^{q(n')}$, let $x$, $\alpha$, and $r$ be the unique strings such that $z = x\alpha r$, $x \in \{0,1\}^{n'}$, $\alpha \in \{0,1\}^n$, and $r \in \{0,1\}^{q(n')-n'-n}$. The circuit $c'$ consists of three subcircuits that work as follows.

Subcircuit 1: Subcircuit 1 simulates the work of $V(x,z)$. Let $a = V(x,z)$ be the output of subcircuit 1.

Subcircuit 2: Subcircuit 2 is a polynomial-size-bounded circuit for $F$ with $\alpha$ as its input. Let $b = F(\alpha)$ be the output of subcircuit 2.

It is obvious that $c'$, and thus also $c$, can be constructed in time polynomial in $|F|$. It remains to show that for all formulas $F$, $F \in \overline{\mathrm{SAT}} \iff g(F) \in \mathrm{Invs}_{V,q}$. Suppose that $F \in \overline{\mathrm{SAT}}$, i.e., that $F$ is unsatisfiable. So we have for all inputs $z$ to the circuit $c'$ that $b = 0$. Thus, for all inputs $z$, $c'(z) = 1$ if and only if $d = 1$. By construction, $d = 1$ if and only if $V(0^{n'}, z) = 1$. It follows that $\{z \in \Sigma^{q(n')} \mid c'(z) = 1\} = \{y \in \Sigma^{q(n')} \mid V(0^{n'}, y) = 1\}$ and so (via the certificates of $0^{n'}$) $c = g(F) \in \mathrm{Invs}_{V,q}$. For the other direction of the equivalence to be shown, assume $F \notin \overline{\mathrm{SAT}}$. So there exists an $n$-bit assignment $\hat\alpha$ for $F$ such that $F(\hat\alpha) = 1$, and consequently $b = 1$ for all inputs $z$ to the circuit $c'$ such that $z = x\hat\alpha r$. □

Since by our remark preceding Theorem 3.2 any superlinear polynomial is nonmiserly, we have the following corollary.

Corollary 3.4 Let $A \in \mathrm{NP}$ and let $(V,q)$ be a standard verifier for $A$ such that $q$ is a superlinear polynomial. Then $\mathrm{Invs}_{V,q}$ is $\leq^p_m$-hard for coNP.
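The coefficient characterization of nonmiserliness noted before Theorem 3.2 is mechanical; here is a trivial sketch (ours) of that test:

```python
def is_nonmiserly(coeffs):
    """coeffs = [a_0, a_1, ..., a_k] of a strictly monotonic polynomial with
    a_k > 0. Per the remark above: nonmiserly iff the degree k >= 2, or
    k = 1 with leading coefficient a_1 > 1."""
    k = len(coeffs) - 1
    return k >= 2 or (k == 1 and coeffs[1] > 1)

print(is_nonmiserly([5, 1]))     # q(n) = n + 5: miserly (outright linear)
print(is_nonmiserly([0, 2]))     # q(n) = 2n: nonmiserly
print(is_nonmiserly([0, 0, 1]))  # q(n) = n^2: nonmiserly (superlinear)
```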
Before we can state a similar result for $1\text{-}\mathrm{Invs}_{V,q}$, we need a technical concept. Though as far as we know it is a new concept, we feel it is also a very natural one. We will call this notion P-producibility. (In choosing the nomenclature, we are motivated by the term and notion of "self-P-producible circuits" [Ko85, BB86, GW93].)

Definition 3.5 We say a set $A$ is P-producible if and only if there exists a function $h \in \mathrm{FP}$, $h : \Sigma^* \to \Sigma^*$, such that for all $x \in \Sigma^*$, $|h(x)| \geq |x|$ and $h(x) \in A$.

Our definition of P-producibility should be contrasted (especially as to what the polynomial time is in relation to: the input or the output) with the notion of tangibility introduced by Hemachandra and Rudich: A set $A$ is called tangible if and only if there exists a total function $f$ that can be computed in time polynomial in the size of its output such that for

Theorem 3.6 Let $A$ be any NP set that is P-producible. Let $(V,q)$ be a standard verifier for $A$ such that $q$ is a nonmiserly polynomial. Then $1\text{-}\mathrm{Invs}_{V,q}$ is $\leq^p_m$-hard for coNP.

Proof: Let $A$ be an NP set that is P-producible via a function $h \in \mathrm{FP}$, $h : \Sigma^* \to \Sigma^*$. Let $(V,q)$ be a standard verifier for $A$ such that $q$ is a nonmiserly polynomial. The proof proceeds quite similarly to the proof of Theorem 3.2. Let $F$ be a formula with $n$ variables. Let $n'$ be the smallest natural number such that $q(n') > n + n'$. The difference from the proof of Theorem 3.2 is that the constructed circuit $c'$ has to be modified as follows: Let $w = h(0^{n'+1})$. $c'$ will have $q(|w|)$ inputs. On input $z \in \{0,1\}^{q(|w|)}$, let $z = x\alpha r$, where $x \in \{0,1\}^{|w|}$, $\alpha \in \{0,1\}^n$, and $r \in \{0,1\}^{q(|w|)-|w|-n}$; the circuit works as follows (note the natural adjustment in Subcircuit 3).

Subcircuit 1: Subcircuit 1 simulates the work of $V(x,z)$. Let $a = V(x,z)$ be the output of subcircuit 1.

Subcircuit 2: Subcircuit 2 is a polynomial-size-bounded circuit for $F$ and uses $\alpha$ as its input. Let $b = F(\alpha)$ be the output of subcircuit 2.

The correctness of the reduction can be shown as in the proof of Theorem 3.2, where $w$ now plays the role that $0^{n'}$ played in the proof of Theorem 3.2. □

In the remainder of this section we will establish some $\Sigma^p_2$-completeness results and a result about membership in DP. As already mentioned in Section 2, $\mathrm{Invs}_{V,q} \in \Sigma^p_2$ for all standard verifiers $(V,q)$. We will now show that this upper complexity bound is optimal.

Theorem 3.7 There exists a standard verifier $(V,q)$ such that $\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-complete.

Proof: Since $\mathrm{Invs}_{V,q} \in \Sigma^p_2$ for all standard verifiers $(V,q)$, it suffices to show that there exists a standard verifier $(V,q)$ such that $\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-hard. Consider the language $\exists\forall\mathrm{3SAT}$, $\exists\forall\mathrm{3SAT} = \{F \mid F$ is a boolean formula in 3-DNF having $2n$ variables $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$ for some $n \in \mathbb{N}$, and $(\exists \alpha \in \{0,1\}^n)(\forall \beta \in \{0,1\}^n)[F(\alpha,\beta) = 1]\}$, where $F(\alpha,\beta)$ denotes the truth value of $F$ when using $\alpha$ and $\beta$ as assignments for the variables $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$, respectively. It is not hard to see that $(V,q)$ is a standard verifier. To show that $\exists\forall\mathrm{3SAT} \leq^p_m \mathrm{Invs}_{V,q}$ we will map formulas $F$ having the required syntactic properties (3-DNF, even number of variables) to the encoding $c_F$ of a circuit (having $|\mathrm{encode}(F)| + 2n + 2$ inputs) that accepts all strings of the form $\mathrm{encode}(F)01\mathrm{double}(\beta)$ for any $\beta \in \{0,1\}^n$ and rejects all other strings.
All other formulas, i.e., those formulas not in 3-DNF or having an odd number of variables, are mapped to the encoding $\hat{c}$ of a circuit that accepts exactly one string, namely 0 (this ensures that if $F$ does not have the required syntactic properties, and thus $F \notin \exists\forall\mathrm{3SAT}$, then $\hat{c} \notin \mathrm{Invs}_{V,q}$). The described reduction is clearly polynomial-time computable. It remains to show that for all formulas $F$ having the above-mentioned syntactic properties (3-DNF, even number of variables) it holds that $F \in \exists\forall\mathrm{3SAT} \iff c_F \in \mathrm{Invs}_{V,q}$. Let $F \in \exists\forall\mathrm{3SAT}$. It follows that there exists a partial assignment $\alpha \in \{0,1\}^n$ such that for all partial assignments $\beta \in \{0,1\}^n$, $F(\alpha,\beta) = 1$. Hence there exists $u = \mathrm{encode}(F)01\mathrm{double}(\alpha)$ such that for all $v = \mathrm{encode}(F)01\mathrm{double}(\beta)$, $V(u,v) = 1$. By construction of $c_F$ we thus have $c_F \in \mathrm{Invs}_{V,q}$. For the other implication assume $F \notin \exists\forall\mathrm{3SAT}$. Hence for all $\alpha \in \{0,1\}^n$ there exists $\beta \in \{0,1\}^n$ such that $F(\alpha,\beta) = 0$. It follows from the definition of $V$ that for all $u = \mathrm{encode}(F)01\mathrm{double}(\alpha)$ there exists $v = \mathrm{encode}(F)01\mathrm{double}(\beta)$ such that $V(u,v) = 0$. By construction of $c_F$ we thus have $c_F \notin \mathrm{Invs}_{V,q}$. This completes the proof. □

Note that the verifier $V$ defined in the proof of Theorem 3.7 is a verifier for the language $L$ of all strings $w$ such that there exist a natural number $n$, a boolean formula $F$ in 3-DNF with $2n$ variables $x_1, x_2, \ldots, x_n, y_1, y_2, \ldots, y_n$, and a string $\alpha \in \{0,1\}^n$ such that $w = \mathrm{encode}(F)01\mathrm{double}(\alpha)$. It is not hard to see that $L \in \mathrm{P}$, since satisfiability for 3-DNF formulas can be checked in polynomial time.

Corollary 3.8 There exist a language $L \in \mathrm{P}$ and a standard verifier $(V,q)$ for $L$ such that $\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-complete.

In fact, looking carefully at the construction in the proof of Theorem 3.7, we see that the just-given proof also establishes the following one-sided result.

Corollary (to the proof) 3.9 There exist a language $L \in \mathrm{P}$ and a standard verifier $(V,q)$ for $L$ such that $1\text{-}\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-complete.

So even simple sets can have very hard inverse problems (Corollaries 3.8 and 3.9). Nonetheless, all (NP) sets have at least one standard verifier whose one-sided inverse problem is not too hard, namely, it belongs to DP (note: if DP = $\Sigma^p_2$ then PH collapses to DP).

Theorem 3.10 Every set $A \in \mathrm{NP}$ has a standard verifier $(V,q)$ such that $1\text{-}\mathrm{Invs}_{V,q} \in \mathrm{DP}$.

Proof: Let $A \in \mathrm{NP}$ and let $(R,p)$ be a standard verifier for $A$. Let $q(n) = n + p(n)$ and define a verifier $V$ as follows: $V$ accepts on input $(a,b)$ if and only if there exists a string $b'$ such that $b = ab'$ and $R(a,b') = 1$. It is not hard to see that $(V,q)$ is a standard verifier for $A$. By definition we have the one-sided inverse set of $(V,q)$, and this can be rewritten, keeping in mind the particular $V$ we have defined, as follows. $1\text{-}\mathrm{Invs}_{V,q} = \{c \mid c$ encodes a circuit $c'$ having $q(m)$ inputs for some $m \in \mathbb{N}$ such that there exists an $x \in A$ with $|x| = m$ and $\{y \mid c'(y) = 1\} = \{xb' \mid R(x,b') = 1\}\}$. This rewritten version (keeping in mind that the quantification over $m$ is not a "real" quantifier) makes it clear that $1\text{-}\mathrm{Invs}_{V,q} \in \mathrm{DP}$, as it is of the form $A \cap B \cap C \cap D$, with $A, B, C \in \mathrm{coNP}$ and $D \in \mathrm{NP}$, and so is of the form of the difference of two NP sets, namely $D - \overline{A \cap B \cap C}$. □

The Complexity of Recognizing Verifiers

In this section we show that deciding whether a given machine computes a standard verifier is complete for the second level of the arithmetic hierarchy, $\Sigma^0_2$. Before doing so, we introduce the notion of a "general verifier," in which the "hit the length exactly" restriction on the certificate size is changed to just a one-sided bound, and we prove a $\Sigma^0_2$-completeness result for that.
We do so primarily since the proof for that case is clearer, and so it helps introduce the related but more involved $\Sigma^0_2$-completeness proof for the case of standard verifiers.

Theorem 4.2 The index set $I_{\mathrm{ver,gen}} = \{i \in \mathbb{N} \mid M_i$ computes a general verifier$\}$ is $\leq_m$-complete for $\Sigma^0_2$.

Proof: It is not hard to see that $I_{\mathrm{ver,gen}} \in \Sigma^0_2$, since $I_{\mathrm{ver,gen}}$ can be described as follows: $i \in I_{\mathrm{ver,gen}} \iff (\exists k \in \mathbb{N})(\forall x, y \in \Sigma^*)$[$M_i(x,y)$ halts within at most $(|x|+|y|)^k + k$ steps, and if $M_i(x,y)$ accepts within at most $(|x|+|y|)^k + k$ steps then $|y|^k + k \geq |x|$]. (To see this, note that given a machine $M_i$ as well as the polynomial $q$ and the strictly monotonic polynomial $r$ that with respect to $M_i$ fulfill part 2 of Definition 4.1, we will choose to use a $k$ so large that $(\forall n \in \mathbb{N})[n^k + k > \max(q(n), r(n))]$.) Note that the right-hand side of the above "$\iff$" shows membership in $\Sigma^0_2$.

It remains to show that $I_{\mathrm{ver,gen}}$ is $\leq_m$-hard for $\Sigma^0_2$. Since $I_{\mathrm{finite}} = \{i \in \mathbb{N} \mid L(N_i)$ is finite$\}$ (where $N_1, N_2, \ldots$ is a fixed standard enumeration of Turing machines, e.g., that of Hopcroft-Ullman [HU79]) is $\leq_m$-hard (even $\leq_m$-complete) for $\Sigma^0_2$, it suffices to show that $I_{\mathrm{finite}} \leq_m I_{\mathrm{ver,gen}}$. Given (as input to our reduction) any $i \in \mathbb{N}$, by the nice properties of the standard enumeration, we can effectively construct from $i$ a machine $E$ that is an enumerator for $L(N_i)$. We now describe a Turing machine $M$. $M$ is a 2-ary Turing machine that on input $(x,y) \in \Sigma^* \times \Sigma^*$ does the following steps:

1. Simulate $|x|+|y|$ steps of the work of $E$ and let $A$ be the set of all strings that are enumerated by $E$ within those $|x|+|y|$ steps.
2. Simulate $2(|x|+|y|)$ steps of the work of $E$ and let $B$ be the set of all strings that are enumerated by $E$ within those $2(|x|+|y|)$ steps.
3. Accept (i.e., output true) if $B - A \neq \emptyset$; otherwise reject (i.e., output false).

Clearly, $M$ is a 2-ary Turing machine. Let $j$ be an index such that $M_j = M$ (we assume our standard enumeration is expansive enough to include all the obviously 2-ary, deterministic machines created by this construction; this is a legal assumption). Since $j$ clearly depends only on $i$, we have implicitly described a mapping $f : \mathbb{N} \to \mathbb{N}$. Note that $f$ is computable. It suffices to show that for all $i \in \mathbb{N}$, $i \in I_{\mathrm{finite}} \iff f(i) \in I_{\mathrm{ver,gen}}$. Let $i \in \mathbb{N}$ and let $j = f(i)$.

Case 1: $i \in I_{\mathrm{finite}}$. So $L(N_i)$ is finite and the number of strings enumerated by $E$ is finite as well. Note that since $M_j$ by definition runs in polynomial time and since $E$ enumerates only a finite number of strings, it follows from the construction of $M_j$ that $M_j$ accepts only a finite number of inputs, and thus it holds that there exists a strictly monotonic (integer-coefficient) polynomial $p$ such that for all $x, y \in \Sigma^*$, if $M_j(x,y)$ outputs true then $p(|y|) \geq |x|$. So (remembering also the polynomial-time claim made above) $M_j$ computes a general verifier and thus $j \in I_{\mathrm{ver,gen}}$.

Case 2: $i \notin I_{\mathrm{finite}}$. In this case, $E$ enumerates an infinite number of strings, and thus for all $y \in \Sigma^*$, $M_j(x,y)$ outputs true for infinitely many $x \in \Sigma^*$. So there does not exist a (strictly monotonic) polynomial $p$ such that, for all $x, y \in \Sigma^*$, if $M_j(x,y)$ outputs true then $p(|y|) \geq |x|$. Thus, $M_j$ does not compute a general verifier, and so $j \notin I_{\mathrm{ver,gen}}$. □

Does the same classification hold for standard verifiers? Note that the "hit the length on the head"-ness of standard verifiers will be something of a technical obstacle. Nonetheless, by carefully choosing the pairs $(x,y)$ that are accepted by the constructed machine, we are able to show that deciding whether a given machine computes a standard verifier is also complete for $\Sigma^0_2$.

Theorem 4.3 The index set $I_{\mathrm{ver,std}} = \{i \in \mathbb{N} \mid M_i$ computes a standard verifier$\}$ is $\leq_m$-complete for $\Sigma^0_2$.
Proof: It is not hard to see that $I_{\mathrm{ver,std}} \in \Sigma^0_2$, since $I_{\mathrm{ver,std}}$ can be described as follows: $i \in I_{\mathrm{ver,std}} \iff (\exists k \in \mathbb{N})(\exists \ell \in \mathbb{N})(\exists a_0, a_1, a_2, \ldots, a_\ell \in \mathbb{Z})(\forall x, y \in \Sigma^*)(\forall n \in \mathbb{N})$[($M_i(x,y)$ halts within at most $(|x|+|y|)^k + k$ steps, and if $M_i(x,y)$ accepts within at most $(|x|+|y|)^k + k$ steps then $|y| = a_\ell |x|^\ell + a_{\ell-1}|x|^{\ell-1} + \cdots + a_1|x| + a_0$) and ($a_\ell n^\ell + a_{\ell-1} n^{\ell-1} + \cdots + a_1 n + a_0 < a_\ell (n+1)^\ell + a_{\ell-1}(n+1)^{\ell-1} + \cdots + a_1(n+1) + a_0$)]. Note that the right-hand side of the above "$\iff$" shows membership in $\Sigma^0_2$.

It remains to show that $I_{\mathrm{ver,std}}$ is $\leq_m$-hard for $\Sigma^0_2$. As in the proof of Theorem 4.2, it suffices to show that $I_{\mathrm{finite}} \leq_m I_{\mathrm{ver,std}}$. So, suppose that we are given any $i \in \mathbb{N}$ (and we wish to effectively compute a string $f(i)$ such that $i \in I_{\mathrm{finite}} \iff f(i) \in I_{\mathrm{ver,std}}$). By the nice properties of the standard enumeration, we can effectively construct from $i$ a machine $E$ that is an enumerator for $L(N_i)$. We now describe a Turing machine $M$. $M$ is a 2-ary Turing machine that on input $(x,y) \in \Sigma^* \times \Sigma^*$ does the following steps:

1. If $|y| \neq q_{|x|}(|x|)$, halt and reject the input (i.e., output false). If $|y| = q_{|x|}(|x|)$, continue.
2. Simulate $|x|+|y|$ steps of the work of $E$ and let $A$ be the set of all strings that are enumerated by $E$ within those $|x|+|y|$ steps.
3. Simulate $(|x|+|y|)^2$ steps of the work of $E$ and let $B$ be the set of all strings that are enumerated by $E$ within those $(|x|+|y|)^2$ steps. (The reason that we use the bound $(|x|+|y|)^2$, rather than $2(|x|+|y|)$ as we did in the proof of Theorem 4.2, will be explained later in this proof.)
4. Accept (i.e., output true) if $B - A \neq \emptyset$; otherwise reject (i.e., output false).

Clearly, $M$ is a 2-ary Turing machine. Let $j$ be an index such that $M_j = M$ (we assume our standard enumeration is expansive enough to include all the obviously 2-ary, deterministic machines created by this construction; this is a legal assumption). Since $j$ clearly depends only on $i$, we have implicitly described a mapping $f : \mathbb{N} \to \mathbb{N}$. Note that $f$ is computable. It suffices to show that for all $i \in \mathbb{N}$, $i \in I_{\mathrm{finite}} \iff f(i) \in I_{\mathrm{ver,std}}$. Let $i \in \mathbb{N}$ and let $j = f(i)$.

Case 1: $i \in I_{\mathrm{finite}}$. So $L(N_i)$ is finite and the number of strings enumerated by $E$ is finite as well. Note that since $M_j$ by definition runs in polynomial time and since $E$ enumerates only a finite number of strings, it follows from the construction of $M_j$ that $M_j$ accepts only a finite number of inputs. So it holds that there exists a strictly monotonic (integer-coefficient) polynomial $p$ such that for all $x, y \in \Sigma^*$, if $M_j(x,y)$ outputs true then $p(|x|) = |y|$. In particular, by our definition of the polynomials $q_i$ (and remembering also the polynomial-time claim made above), we have that if $\hat{n} \in \mathbb{N}$ is the largest number such that some pair $(x,y)$ with $|x| = \hat{n}$ is accepted by $M_j$, then ($|y| = q_{\hat{n}}(\hat{n})$ and) $M_j$ computes a standard verifier (with $q_{\hat{n}}$ working as the "q" of Part 3 of Definition 2.3).

Case 2: $i \notin I_{\mathrm{finite}}$. In this case, $E$ enumerates an infinite number of strings. We will argue that then $M_j$ accepts an infinite number of pairs, and thus there does not exist a polynomial $p$ such that for all pairs $(x,y) \in L(M_j)$ we have $|y| = p(|x|)$. Note that the Turing machine $M_j$ described above accepts only pairs $(x,y)$ where $|y| = q_{|x|}(|x|)$, and thus one might worry that even though $E$ enumerates an infinite set, $M_j$ only accepts finitely many pairs. Indeed, observe that if we had (as in the proof of Theorem 4.2) chosen the number of steps $E$ is simulated in steps 2 and 3 of the description of $M_j$ to be, respectively, $|x|+|y|$ and $2(|x|+|y|)$, we would have left coverage "gaps," and it might happen that even though $E$ enumerates an infinite set, $M_j$ would be "triggered" to accept pairs only a finite number of times. However, by choosing the number of steps the enumerator $E$ is simulated by $M_j$ to be $|x|+|y|$ and $(|x|+|y|)^2$ in, respectively, steps 2 and 3, it follows that if $E$ enumerates an infinite set then $M_j$ accepts infinitely many pairs. (Keeping in mind that the only interesting case is when the second argument's length, call it $\ell_2$, is related to the first argument's length, call it $\ell_1$, by the equation $\ell_2 = q_{\ell_1}(\ell_1)$, what we need to show to ensure that there are only finitely many gaps in coverage is that for all but at most a finite number of $n$'s (and focusing only on second arguments $y$ of the length relation just mentioned) the simulation-step bound in step 3 when $|x| = n$ is greater than the simulation-step bound in step 2 when $|x| = n+1$. That is, we need it to hold that, for all sufficiently large $n \in \mathbb{N}$, $(n + q_n(n))^2 \geq (n+1) + q_{n+1}(n+1)$.) So $M_j(0^n, 0^{q_n(n)})$, when $i \notin I_{\mathrm{finite}}$, outputs true for infinitely many $n \in \mathbb{N}$. Recall that by definition we have that for all $n \in \mathbb{N}$, $n! \leq q_n(n)$. So there does not exist a polynomial $p$ (whether strictly monotonic or otherwise) such that, for all $x, y \in \Sigma^*$, if $M_j(x,y)$ outputs true then $p(|x|) = |y|$. Thus, $M_j$ does not compute a standard verifier, and so $j \notin I_{\mathrm{ver,std}}$. □

Conclusions

We have shown that all superlinear inversion schemes are coNP-hard. We have also shown that some inversion schemes are $\Sigma^p_2$-complete. Note that for finite sets $A$ and any of their standard verifiers $(V,q)$ we have that $\mathrm{Invs}_{V,q}$ is coNP-complete. It is not clear whether the complexity of inverting standard verifiers for infinite NP sets is also independent of the verifier. In particular, does every infinite NP set have a standard verifier $(V,q)$ such that $\mathrm{Invs}_{V,q}$ is $\Sigma^p_2$-complete?
STIM1 at the plasma membrane as a new target in progressive chronic lymphocytic leukemia

Background: Dysregulation in calcium (Ca2+) signaling is a hallmark of chronic lymphocytic leukemia (CLL). While the role of the B cell receptor (BCR) Ca2+ pathway has been associated with disease progression, the importance of the newly described constitutive Ca2+ entry (CE) pathway is less clear. In addition, we hypothesized that these differences reflect modifications of the CE pathway and of Ca2+ actors such as Orai1, transient receptor potential canonical (TRPC) 1, and stromal interaction molecule 1 (STIM1), the latter being the focus of this study.

Methods: An extensive analysis of the Ca2+ entry (CE) pathway in CLL B cells was performed, including constitutive Ca2+ entry, basal Ca2+ levels, and store-operated Ca2+ entry (SOCE) activated following B cell receptor engagement or using thapsigargin. The molecular characterization of the calcium channels Orai1 and TRPC1 and of their partner STIM1 was performed by flow cytometry and/or Western blotting. Specific siRNAs for Orai1, TRPC1 and STIM1, plus the Orai1 channel blocker Synta66, were used. CLL B cell viability was tested in the presence of an anti-STIM1 monoclonal antibody (mAb, clone GOK), coupled or not with an anti-CD20 mAb, rituximab. The Cox regression model was used to determine the optimal threshold and to stratify patients.

Results: Seeking to explore the CE pathway, we found in untreated CLL patients that an abnormal CE pathway was (i) highly associated with the disease outcome; (ii) positively correlated with basal Ca2+ concentrations; (iii) independent from the BCR-PLCγ2-InsP3R (SOCE) Ca2+ signaling pathway; (iv) supported by Orai1 and TRPC1 channels; (v) regulated by the pool of STIM1 located in the plasma membrane (STIM1PM); and (vi) blocked when using a mAb targeting STIM1PM. Next, we further established an association between an elevated expression of STIM1PM and clinical outcome. In addition, combining an anti-STIM1 mAb with rituximab significantly reduced in vitro CLL B cell viability within the high STIM1PM CLL subgroup.

Conclusions: These data establish the critical role of a newly discovered BCR-independent Ca2+ entry in CLL evolution, provide new insights into CLL pathophysiology, and support innovative therapeutic perspectives such as targeting STIM1 located at the plasma membrane.

At first glance, CLL cases with indolent and stable disease present B cells that are ineffective at mobilizing Ca2+ after BCR cross-linking, thus resembling B cells anergized in vivo after chronic antigenic stimulation [13]. For these patients, the incapacity of B-CLL cells to mobilize Ca2+ was related to a mutated IgHV status, a reduced level of cell-surface (s)IgM, and a defective signalosome. In contrast, CLL cases with a worse clinical outcome show an elevated basal Ca2+ level that can be enhanced upon sIgM triggering. The elevated Ca2+ signaling in the CLL group with progressive disease was associated with an unmutated IgHV status and an elevated level of CD38, but was not linked to any specific cytogenetic markers [14].
However, other processes are described in order to provide alternative explanations for Ca 2+ dysregulation in B-CLL cells, such as a BCR autonomous signaling capacity due to an internal epitope present in the second framework of stereotyped IgHV that can be abrogated by using a BCR signaling inhibitor [15], an incapacity of the ER to release Ca 2+ due to an inhibitory interaction between Bcl-2 (overexpressed in B-CLL cells) and the endoplasmic InsP 3 R [16], and last but not least an incompletely characterized BCR independent Ca 2+ pathway recently described in B-CLL cells [17,18]. Ca 2+ deregulations in B-CLL cells and their correlation with disease evolution and severity are far from being fully understood. Reversing specific changes in deregulated Ca 2+ fluxes may also represent new therapeutic opportunities to answer unmet needs in CLL treatment. In this study we deciphered Ca 2+ entry deregulation in B-CLL cells and tested whether BCR-dependent or BCR-independent Ca 2+ entry would be relevant in CLL outcome. The latter was critical for disease progression, and we therefore analyzed and characterized a novel Ca 2+ signaling pathway, referred to as constitutive Ca 2+ entry (CE), which is triggered by STIM1 located at the plasma-membrane (STIM1 PM ). Interestingly, we demonstrated that blocking CE with an anti-STIM1 monoclonal antibody (mAb) presents innovative therapeutic perspectives in CLL. CLL population Clinical information was retrospectively obtained from 74 untreated patients diagnosed with CLL according to the World Health Organization (WHO) classification [19], and 13 healthy volunteers at the Brest University Hospital. Disease assessment included Binet stage determination, progression free survival (PFS), treatment free survival (TFS), CD38 expression, lymphocyte counts, lymphocyte doubling time (LDT), cytogenetic risk-status, and IgHV mutational status, which were performed as previously described [20]. Consent was obtained from all individuals and the protocol approved by the Ethical Board at the Brest University Hospital (clinicaltrials: NCT03294980; cohort OFICE; CRB Biobank collection 2008-2014), in accordance with the Declaration of Helsinki. Sample preparation and flow cytometry Peripheral blood mononuclear cells (PBMC) were isolated from whole blood by Ficoll-Hypaque density gradient centrifugation (Eurobio, Courtaboeuf, France) and B cells were further enriched using the Pan B-cell Isolation Kit (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). Cell purity was assessed by fluorescence-activated cell sorting (FACS) analysis and was over 95% for B cells (CD19+). Calcium entry recording For CE measurements, B cells were loaded with 2 μM Fura-2/AM dye (Molecular Probes, Leiden, Netherlands) and 2 μM Pluronic acid (Gibco, Waltham, MA) for 30 min at 37°C in a medium containing 135 mM NaCl, 5 mM KCl, 1 mM MgCl 2 , 10 mM HEPES and 10 mM Glucose, with the pH adjusted to 7.4 (Buffer A), supplemented with 5 mM CaCl 2 . Cells were washed and left to attach in the same buffer on 12 mm Cell-Tak (Corning, NY) precoated coverslides for 20 min, allowing the de-esterification of the dye. Fura-2 was excited alternatively at 340 and 380 nm (Polychrome V, TILL photonics), and fluorescence emission was recorded at 510 nm using a fluorescence microscope (IX71, Olympus) equipped with a dichroic mirror (415DCLP) and a 14-bit CCD camera (ExiBlue, Qimaging). 
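To make the quantification concrete, here is a minimal Python sketch of the Fura-2 ratiometric computation: the F 340nm /F 380nm ratio per time point, normalization to the basal ratio, and the CE amplitude taken as the difference between the average normalized ratio in 5 mM and in 0.5 mM external Ca 2+ , as defined in the quantification described in the next paragraph (the recording protocol itself continues below). The traces, window boundaries and function name are hypothetical stand-ins, not the authors' Metafluor analysis.

# Minimal sketch: Fura-2 ratiometric analysis of constitutive Ca2+ entry (CE).
# Assumes f340 and f380 are per-cell fluorescence traces sampled at fixed
# intervals, and that the 0.5 mM CaCl2 window is known (all hypothetical).
import numpy as np

def ce_amplitude(f340, f380, low_ca_window, high_ca_window):
    """Return CE amplitude as dF/F0 between 5 mM and 0.5 mM external Ca2+."""
    ratio = f340 / f380                      # F340/F380 excitation ratio
    f0 = ratio[high_ca_window].mean()        # basal ratio in 5 mM Ca2+
    norm = ratio / f0                        # normalize to basal (dF/F0)
    # amplitude = average normalized ratio in 5 mM minus average in 0.5 mM
    return norm[high_ca_window].mean() - norm[low_ca_window].mean()

rng = np.random.default_rng(0)
f380 = 1.0 + 0.01 * rng.standard_normal(600)   # one sample per second (toy)
f340 = 1.2 + 0.01 * rng.standard_normal(600)
f340[200:300] -= 0.1                           # simulated drop in low-Ca2+ window
low = slice(200, 300)                          # 100 s in 0.5 mM CaCl2
high = np.r_[0:200, 350:600]                   # surrounding 5 mM CaCl2 periods
print(f"CE amplitude (dF/F0): {ce_amplitude(f340, f380, low, high):.3f}")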
After the stabilization of basal fluorescence, the extracellular medium was replaced with Buffer A supplemented with 0.5 mM CaCl 2 for 100 s and again with the original 5 mM CaCl 2 -containing Buffer A after curve stabilization. The excitation/emission ratio (F 340nm /F 380nm ) was calculated for each time point and each cell with the Metafluor 6.3 Software (Universal Imaging, West Chester, USA). The amplitude of CE was calculated after normalization to the basal ratio (ΔF/F 0 ), as the difference between the average values of the basal ratio measured in 5 mM external Ca 2+ and the average ratio value in 0.5 mM Ca 2+ . For anti-IgM and Thapsigargin (TG)-induced calcium entry, B cells were loaded in Buffer A containing 1.8 mM CaCl 2 and 2 μM Fura-2/AM (Fura-2 QBT Kit, Molecular Devices) for 1 h at 37°C in Cell-Tak precoated 96-well plates, and fluorescence acquisition (excitation 340 and 380 nm; emission 510 nm) was performed on the Flexstation 3 microplate reader with SoftMax Pro 5.4.5 software (Molecular Devices, San Jose, CA). For the anti-IgM induced Ca 2+ response, the extracellular medium was replaced with Buffer A supplemented with 10 mM CaCl 2 before reading, and 10 μM of polyclonal goat anti-human IgM (Jackson Immunoresearch) were injected after 150 s. For TG-induced ER Ca 2+ release, the extracellular medium was replaced with Buffer A supplemented with 100 μM EGTA just before starting the reading protocol. A stimulation with 2 μM of TG was performed after 100 s of recording, and 1.8 mM CaCl 2 was added after 700 s in order to quantify SOCE entry. Ca 2+ entries were quantified after value normalization (ΔF/F 0 ), with the exception of basal Ca 2+ concentrations, estimated as the average of the initial F 340nm /F 380nm values. Statistical analysis Continuous data are described as mean ± standard error of the mean (SEM). Following normality and equality of variance tests, nominal values were compared to controls using Student's t test or alternatively by using a nonparametric test (Mann-Whitney rank sum test). Differences among groups were analyzed by non-parametric one-way ANOVA and Dunn's test was used for post-hoc comparisons. For categorical data Fisher's exact test was used, and for correlation analysis Pearson's coefficient r was calculated. The profile likelihood method using a Cox regression model of PFS was used in univariate analysis to determine the optimal threshold and stratify patients into two groups as previously described [22]. PFS, TFS and LDT analyses were next performed using Kaplan-Meier curves and prognosis differences between groups were assessed with a log-rank test. Receiver operating curves (ROC) were generated to determine the area under the curve (AUC) and the optimal cut-off values were chosen by using the upper left corner value (100% specificity). P values under 0.05 were considered significant. Statistical analyses and the correlation matrix were performed using GraphPad Prism 7.0a (La Jolla, CA). Constitutive Ca 2+ entry is higher in unstimulated B-CLL cells from patients with progressive disease As deregulation in Ca 2+ signaling is an important hallmark of B-CLL cells, and suspected to vary during CLL disease progression [2], Ca 2+ entry in the absence of BCR engagement, designated as CE, was evaluated in resting B-CLL cells. To this end 30 untreated CLL patients were selected and, as reported in Fig. 
1a, CE was significantly enhanced in a subset of B-CLL cells when compared to B cells from 8 healthy controls (ΔF/F 0 : 0.10 ± 0.01 in B-CLL cells versus 0.06 ± 0.01 in controls, P = 0.03). CLL patients were further dichotomized into CE+ (high levels) versus CE− (normal/low levels) using the profile likelihood method in a Cox regression model of PFS for optimal cut-off identification (cut-off = 0.083). Next, and according to this dichotomy, the Kaplan-Meier log-rank analysis revealed, for those CE+ CLL patients (n = 16), a significant difference with regards to parameters associated with disease outcome such as PFS (P = 0.001; Fig. 1b), TFS (P = 0.003; Fig. 1c) and LDT (P = 0.02; Fig. 1d). In addition, the Binet status (P = 0.0002) and lymphocytosis (P = 0.003) were associated with an elevated CE, which was not the case for the cytogenetic risk status, IgHV mutational status, and CD38 positivity (Table 1, left part). Constitutive Ca 2+ entry is independent from proximal BCR signaling and BCR co-activators One step further, to test BCR pathway dependence in CE+ B-CLL cells, the BCR capacity to mobilize Ca 2+ was tested within B-CLL cells from 16 CE+ CLL patients, 13 CE− CLL patients, and 13 healthy controls (Fig. 2a and Additional file 2: Figure S2). As previously described [2,3], Ca 2+ mobilization in response to BCR engagement was reduced in B-CLL cells when compared to controls (P = 0.002 for both CE subgroups), however no difference was observed when comparing the two CE subgroups within CLL patients. Interestingly, by conducting a bivariate analysis of PFS on both CE and IgM Ca 2+ mobilization, we further observed that CLL patients with disease progression were restricted to CE+/IgM+ (n = 11) and CE+/IgM− (n = 5) CLL patients but not to CE−/IgM+ (n = 4) and CE−/IgM− (n = 9) CLL patients (P = 0.006, Fig. 2b). To dissect heterogeneity between the 4 subgroups of patients (Additional file 2: Table S1), we next examined whether these differences resulted from differential expression of the membrane surface (s) IgM, sIgD, and co-receptors (CD19, CD21, CD38, and CD5). No differences were observed between the 4 subgroups for these markers that participate in or modulate the proximal BCR signaling. As well, no differences were reported when considering CE+ and CE− CLL patients. Accordingly, we concluded that there is independence of CE from proximal BCR signaling and BCR co-activators. Constitutive Ca 2+ entry is independent from an autonomous BCR pathway Since CE could be attributable to an antigen-independent autonomous BCR pathway [15], this Ca 2+ entry was recorded in the presence of two BCR signalosome inhibitors: Ibrutinib, a covalent inhibitor of BTK, and LY294002, a selective inhibitor of PI3Kδ. As shown in Fig. 2c/d, CE+ B-CLL cells from 3 patients were selected and CE was unaffected by the addition of the BCR signaling inhibitors. In parallel and as a positive control, the capacity of Ibrutinib and LY294002 to inhibit the Ca 2+ response following BCR activation was demonstrated (Fig. 2e/f). Such a concept was further reinforced by the analysis of basal pPLCγ2, an indicator of BCR signalosome activation, in resting B-CLL cells showing that pPLCγ2 levels were similar between the CE+ and CE− CLL subgroups (Additional file 2: Table S1). 
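The dichotomization strategy used here (profile likelihood over candidate cut-offs in a Cox model of PFS, then group comparison with a log-rank test) can be sketched in a few lines of Python with the lifelines package. This is a generic illustration on toy data with hypothetical column names, not the authors' statistical pipeline.

# Sketch: scan candidate cut-offs, fit a Cox model of PFS for each binary
# split, keep the cut-off maximizing the partial log-likelihood, then
# compare the resulting groups with a log-rank test. Toy data only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def best_cutoff(df, marker="CE", time="pfs_months", event="progressed"):
    lls = {}
    for cut in np.quantile(df[marker], np.linspace(0.1, 0.9, 17)):
        tmp = df[[time, event]].copy()
        tmp["high"] = (df[marker] > cut).astype(int)
        cph = CoxPHFitter().fit(tmp, duration_col=time, event_col=event)
        lls[cut] = cph.log_likelihood_   # profile likelihood over cut-offs
    return max(lls, key=lls.get)

rng = np.random.default_rng(1)
df = pd.DataFrame({"CE": rng.uniform(0.02, 0.20, 30),
                   "pfs_months": rng.exponential(40, 30),
                   "progressed": rng.integers(0, 2, 30)})
cut = best_cutoff(df)
hi, lo = df[df.CE > cut], df[df.CE <= cut]
res = logrank_test(hi.pfs_months, lo.pfs_months, hi.progressed, lo.progressed)
print(f"cut-off = {cut:.3f}, log-rank P = {res.p_value:.3f}")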
Constitutive Ca 2+ entry is correlated with basal Ca 2+ levels and independent from SOCE Next, 29 B-CLL samples (10 CE−/IgM−, 4 CE−/IgM+, 4 CE+/IgM− and 11 CE+/IgM+) were selected and a correlation matrix was performed for all in order to compare CE with (i) the basal intracellular Ca 2+ level estimated by the initial F340/380 ratio; (ii) the anti-IgM Ca 2+ response; (iii) the ER Ca 2+ release by thapsigargin (TG), an inhibitor of the ER Ca 2+ ATPase pumps that artificially and maximally depletes Ca 2+ stores in the absence of extracellular Ca 2+ ; and (iv) the TG SOCE response observed after Ca 2+ refilling. The results from the correlation matrix effectively highlighted two groups of Ca 2+ responses in B-CLL cells (Additional file 2: Figure S3A/B). First, an association based on the correlation observed between CE and the basal Ca 2+ level (r = 0.591; P = 0.001), but not between CE and the anti-IgM or TG SOCE responses (data not shown). Second, a proximal BCR-InsP3R signaling pathway, as the anti-IgM Ca 2+ response was correlated with both TG ER Ca 2+ release and TG SOCE (P = 3 × 10 − 4 and 1 × 10 − 7 , respectively) but not with basal Ca 2+ levels and CE. Constitutive Ca 2+ entry is regulated by STIM1 and supported by Orai1 and TRPC1 channels Based on our previous work showing a role for Orai1, TRPC1 channels and STIM1 [17,23] in CE, and to better characterize the autonomous Ca 2+ channel influx in CE+ B-CLL cells, three strategies were developed using (1) the Orai1 channel blocker, Synta66 (S66); (2) specific siRNA for Orai1, TRPC1 and STIM1 to modulate CE amplitude; and (3) a quantitative analysis of Orai1, TRPC1 and STIM1 expression by Western blot. First, specific blockade of Orai1 channels with S66 at 2.5 μM significantly reduced CE (P = 0.03), the anti-IgM Ca 2+ response (P = 0.01), and TG SOCE (P = 0.05) but not TG ER Ca 2+ release in CE+/IgM+ B-CLL cells compared to control conditions (Fig. 3a/b and data not shown). Second, another way in which to further test our hypothesis was to reduce the expression of Orai1, TRPC1 and/or STIM1 by transfecting specific siRNA into B-CLL cells (1 CE+/IgM− and 2 CE+/IgM+). In contrast to the negative siRNA control, a reduction was seen at the protein level when using specific siRNAs for STIM1, Orai1, and TRPC1 (FACS representations are depicted in Fig. 3c). As a result, CE was reduced in the presence of siRNA to Orai1, TRPC1 and STIM1 (P < 0.05 for all) (Fig. 3d). These results suggest that Orai1 together with TRPC1 both contribute to CE regulated by STIM1. Third, Western blot (WB) was used to analyze the expression of Orai1, STIM1 and TRPC1 isoforms in B-CLL cells from 19 patients (11 CE+ and 8 CE−). When comparing CE+ and CE− patients (Fig. 4), the two different isoforms of Orai1 were increased (P = 0.04), and, although not significant, there is a trend for higher TRPC1 expression in CE+ B-CLL cells compared to CE− B-CLL cells. STIM1 analysis by WB reveals higher expression of both the 75 kDa non-glycosylated isoform and the 85 kDa glycosylated isoform, which were overexpressed in CE+ B-CLL cells (P = 0.03). The pool of STIM1 located in the plasma membrane (STIM1 PM ) controls CE Since glycosylation is required for STIM1 localization at the plasma membrane [24,25], and given that the pool of STIM1 located in the plasma membrane (STIM1 PM ) regulates store-independent Ca 2+ influx [26], this raises the possibility that STIM1 PM controls CE and contributes to its enhancement in CE+ B-CLL cells. To address this issue (Fig. 
5a), B-CLL cells from 28 patients (11 CE+ and 17 CE−) were tested by FACS for STIM1 expression using a STIM1 mAb following permeabilization of the cells (total-STIM1 expression determination) or not (STIM1 PM quantification). In agreement with the WB results, FACS analysis revealed that both total-STIM1 and STIM1 PM were increased in CE+ B-CLL cells (P = 0.01 and < 10 − 4 , respectively), and their levels correlated with CE amplitude (P = 0.01 for both, Fig. 5b). A ROC analysis was performed in order to establish the cut-off for positivity (Fig. 5b left). We next sought to determine STIM1 PM involvement in CE regulation, and this was tested by exploring the capacity of the anti-STIM1 mAb (GOK, 10 μg/mL) to inhibit CE. In contrast to the IgG2a isotype control mAb that had no effect on CE (Fig. 5c), the anti-STIM1 mAb inhibited CE (P = 0.03), while no effects were reported on the anti-IgM Ca 2+ response, TG ER Ca 2+ release and TG SOCE responses (Fig. 5d and Additional file 2: Figure S4B). This is in agreement with the observed correlation between STIM1 PM levels and basal Ca 2+ but not with the TG ER Ca 2+ release and IgM/TG SOCE results (Additional file 2: Figure S4A). Altogether this reinforces our hypothesis that CE and basal Ca 2+ are regulated by STIM1 PM and supported by Orai1 and TRPC1 channels in a unique and alternative influx pathway distinct from SOCE and downstream of the BCR-InsP3R pathway. STIM1 PM as a valuable therapeutic target As CE determination is difficult to manage in routine practice, we further compared the patients' characteristics according to their plasma membrane STIM1 status in 74 untreated CLL patients that included those tested for Ca 2+ signaling. As depicted in the Kaplan-Meier curves (Fig. 6a), the CLL STIM1 PM high subgroup had shorter PFS and TFS (P = 0.0007 and P = 0.02, respectively). Characteristics of STIM1 PM high and low patients are presented in Table 1 (right part), showing that lymphocytosis (P = 0.05), but not the other parameters tested, was increased in the CLL STIM1 PM high subgroup. Finally, and as the initial descriptions of STIM1 PM were related to the control of cell survival [27,28], we next decided to test the neutralizing capacity of the anti-STIM1 mAb clone GOK to control B-CLL cell survival (STIM1 PM high n = 9; and STIM1 PM low n = 8) when used alone or in combination with RTX, an anti-CD20 mAb (Fig. 6b). Used alone, GOK and RTX did not reduce in vitro B-CLL cell survival as compared to the controls, but in contrast the RTX + GOK combination significantly reduced cell viability in the STIM1 PM high subgroup (50.4 ± 6.4% with IgG2a versus 23.0 ± 4.7% with RTX + GOK, P = 0.03), an effect which was not significant in the STIM1 PM low subgroup (33.5 ± 6.5% with IgG2a versus 20.3 ± 4.7% with RTX + GOK). Discussion The overall data add new support to the critical role played by Ca 2+ signaling in CLL outcome, and describe for the first time a novel STIM1 PM -dependent and constitutively active Ca 2+ entry, independent from BCR signaling, showing that this constitutively active CE can be modulated and targeted by an anti-STIM1 mAb. We found that both CE and STIM1 PM are clinically relevant in CLL and their determinations present important prognostic value. Several reports have demonstrated altered Ca 2+ signaling in CLL B cells, with the paradox that Ca 2+ mobilization is altered in "anergic" CLL B cells from non-progressive patients, while a response is reported in CLL B cells from patients with disease progression, as observed in our study [2,3]. 
Moreover, and based on the strong correlation observed between CE and basal cytosolic Ca 2+ concentrations in this study, we were able to extend the observation performed by Muggen and colleagues, who have described elevated basal Ca 2+ concentrations in B-CLL cells in contrast to normal B cells [14]. Our study also supports that CE and the elevated level of basal Ca 2+ reported in B-CLL cells are, in fact, independent from the BCR-PLCγ2-InsP 3 R pathway and are instead related to an enhanced CE that is independent from store depletion. In contrast, Duhren-Von Minden and colleagues have associated the elevated basal Ca 2+ signaling downstream of Syk phosphorylation in CLL B cells to an antigen-independent recognition of the BCR framework domains (FR2 or FR3), or alternatively to an occupation of the BCR with repetitive motifs [15]. Importantly, blocking the BCR pathway with the BTK inhibitor ibrutinib or with the PI3K inhibitor LY294002 did not alter CE or the basal Ca 2+ level (data not shown), which is in agreement with the Muggen report, which failed to associate the basal Ca 2+ level in CLL B cells with the FR2/3 amino-acid sequence. Based on the report of Le Roy and colleagues, who detected pSyk at a basal level in IgM+ responder patients, it could be proposed that blocking pSyk controls both CE and the IgM response in CE+/IgM+ responder patients, a hypothesis that needs to be tested, as well as the capacity of Syk to phosphorylate STIM1 [2,3]. STIM1 was initially identified as a plasma membrane protein [25], and more recently STIM1 PM was associated with the regulation of a store independent Ca 2+ entry pathway activated by arachidonic acid [26] and with SOCE in platelets [29]. Similarly, and although STIM1 is predominantly located in the ER in normal B cells, we found that CE+ B-CLL cells express a substantial amount of STIM1 PM and Orai1 as well as an enhanced expression of TRPC1. This is important because STIM1 PM can interact with Orai1 or TRPC1, two Ca 2+ channels activated in CE+ B-CLL cells as demonstrated by using specific siRNAs and in agreement with the Chen KT et al. report [30]. STIM1 deregulation in B-CLL cells needs further exploration as it may be related to defective transcriptional control by DNA methylation and/or microRNAs [31,32], and/or to post-translational modifications such as glycosylation and/or phosphorylation known to affect STIM1 localization and properties [24,33], as these processes are altered during CLL evolution [34]. The clinical success of RTX in monotherapy is limited in CLL and, in order to improve its efficacy, RTX is associated with chemotherapy (RFC) or with BCR inhibitors (Ibrutinib, Idelalisib, venetoclax); however, relapses and side-effects remain important, suggesting a need to develop new therapeutic options and in particular to combine RTX with new drugs targeting a non-BCR survival pathway [35,36]. Consistent with the notion that CE is important for disease outcome and STIM1 PM for CE, we demonstrated that pre-incubating cells with antibodies targeting STIM1 PM reverses the B-CLL cell capacity for CE and in turn impairs cell survival when associated with RTX. Therefore, we propose to use an anti-STIM1 mAb targeting STIM1 PM and CE as a new innovative therapeutic option for CLL. An additive/synergistic effect of RTX or BCR inhibitors with CE inhibitors, such as an anti-STIM1 mAb, should be addressed in future studies. 
Relevant limitations of our study include the following: (i) a small sample size used to analyze Ca 2+ entry in CLL B cells; (ii) the use of samples from a cross-sectional and monocentric center; and (iii) a bias due to the selection of untreated patients. However, and to reduce these limitations, a large panel of approaches (e.g. Ca 2+ signaling, siRNAs, specific inhibitors, FACS, WB) has been used in order to demonstrate that STIM1, and in particular STIM1 PM , controls CE in CLL B cells from patients with progressive disease. The selection of untreated patients for this study also represents an advantage, as drug exposure may affect the analysis of Ca 2+ entry, as observed in vitro with ibrutinib. Future studies are however mandatory in order to study whether variations in Ca 2+ entry and in Ca 2+ actors occur following treatment introduction and in those patients who relapse. (Displaced figure legend, panels c-e: Receiver operating curves (ROC) were generated to determine the area under the curve (AUC) and the optimal cut-off value to discriminate STIM1 high from STIM1 low patients; the effects of the anti-STIM1 mAb clone GOK on CE in CLL samples (n = 6); no effect of the anti-STIM1 mAb on the anti-IgM Ca 2+ response in CLL samples (n = 10). The r 2 coefficient and P values are indicated when significant.) Conclusion In CLL the involvement of Ca 2+ signaling deregulation in cancer cell progression is well established, but the mechanisms controlling Ca 2+ entry are poorly understood. In the present work, an extensive analysis of the Ca 2+ entry in CLL cells was performed, revealing, in patients with progressive disease, the implication of a constitutive and BCR-independent Ca 2+ entry pathway. Next, it was further observed that a pool of STIM1 present in the plasma membrane characterizes tumor progression and controls constitutive Ca 2+ entry. Finally, the capacity of an anti-STIM1 mAb to block constitutive Ca 2+ entry and to reduce in vitro CLL cell viability, when associated with Rituximab, was reported within the high STIM1 PM CLL subgroup. This supports the idea that targeting STIM1 PM and therefore constitutive Ca 2+ entry represents a new first-in-class therapeutic pathway in leukemia treatment. The potential use of mAbs targeting STIM1 PM in cancer therapy, alone or in synergy with existing drugs, needs to be further evaluated. Additional files Additional file 1: Figure S1. Two pathways control Ca 2+ signaling in B cells from patients with chronic lymphocytic leukemia. In the BCR-induced store operated Ca 2+ entry pathway, B cell receptor (BCR) interaction with the antigen results in the formation of the signalosome consisting of an active complex composed of the tyrosine kinases Lyn and Syk, B-cell linker protein (BLNK), Bruton-tyrosine-kinase (BTK), phospholipase C gamma 2 (PLCγ2), and phosphatidylinositol-4,5-bisphosphate 3-kinase δ (PI3Kδ) that phosphorylates CD19. Signalosome activation cleaves the membrane phospholipid phosphatidyl inositol 4,5-bisphosphate (InsP2) into diacylglycerol (DAG) and inositol 1,4,5-triphosphate (InsP3), which subsequently, through binding to the endoplasmic reticulum (ER) IP3 receptor (InsP 3 R), mobilizes initially Ca 2+ from stores and secondarily extracellular Ca 2+ through the interaction between the multimerized reticular stromal interaction molecule 1 (STIM1 ER ) and the plasma-membrane Orai1 channel. In the constitutive Ca 2+ …
Fig. 6 In the whole CLL cohort (n = 74), an elevated level of STIM1 at the plasma membrane (STIM1 PM ) is relevant for CLL clinical outcome and influences in vitro cell survival. a Kaplan-Meier plots showing progression free survival and treatment free survival for STIM1 PM dichotomized into high and low levels. b An increase in the density of STIM1 PM improves the efficacy of rituximab (RTX) in the STIM1 PM high CLL subgroup (n = 9) when used in combination with the anti-STIM1 mAb (both 10 μg/mL, 48 h), an effect which was not observed in the STIM1 PM low CLL subgroup (n = 8). P values are indicated when significant
2019-04-24T04:39:31.859Z
2019-04-23T00:00:00.000
{ "year": 2019, "sha1": "8dff225d798806d2e7cee7ff61aad931e9366c55", "oa_license": "CCBY", "oa_url": "https://jitc.bmj.com/content/jitc/7/1/111.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8dff225d798806d2e7cee7ff61aad931e9366c55", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
1530336
pes2o/s2orc
v3-fos-license
The Semantic Linker- A New Fragment Combining Method This paper presents the Semantic Linker, the fallback component used by the DELPHI natural language component of the BBN spoken language system HARC. The Semantic Linker is invoked when DELPHI's regular chart-based unification grammar parser is unable to parse an input; it attempts to come up with a semantic interpretation by combining the fragmentary sub-parses left over in the chart using a domain-independent method incorporating a general search algorithm driven by empirically determined probabilities and parameter weights. It was used in the DARPA November 92 ATIS evaluation, where it reduced DELPHI's Weighted Error on the NL test by 30% (from 32% to 22%). INTRODUCTION An important problem for natural language interfaces, as well as for other NL applications such as message processing systems, is coping with input which cannot be handled by the system's grammar. A system which depends on its input being grammatical (or on lying within the coverage of its grammar) simply will not be robust and useful. Some sort of "fallback" component is therefore necessary as a complement to regular parsing. This paper presents the Semantic Linker, the fallback component used by the DELPHI natural language component of the BBN spoken language system HARC. The Semantic Linker is invoked when DELPHI's regular chart-based unification grammar parser is unable to parse an input; it attempts to come up with a semantic interpretation by combining the fragmentary sub-parses left over in the chart. It was used in the DARPA November 92 ATIS evaluation, where it reduced DELPHI's Weighted Error on the NL test by 30% (from 32% to 22%). The Semantic Linker represents an important departure from previous proposals, both our own [1] and others [2], in that it casts fragment combination as a general search problem, rather than as a problem of task model template matching (as in [4]) or as an extension to the existing parsing algorithm (as in [3]). Rather than reconstruct a parse tree, the goal of the search is to combine all the fragments into the most minimal and plausible connected graph, in which the links are not syntactic descendancy, but logical binary relations from the domain, such as "AIRLINE-OF", "ORIG-OF" etc. States in the search space are partial connections of the fragments: in other words, a set of links. There are two types of "move" to reach a new state from an existing one. One adds a new link between fragments, and the other "hallucinates" an object to bridge two fragments that could not otherwise be linked (corresponding roughly to a notion of ellipsis). A success terminal state is one in which all the fragments have been linked. States have features associated with their constituent links and a system of weights on the features determines a score that is used to guide the search. The advantages of this formulation are its domain-independence, flexibility, extensibility, and ability to make use of statistical data. In particular: • No assumption need be made about constraining task models. In the next sections we turn to a more detailed description of data structures and algorithms. We first give some necessary background on semantic interpretation in the DELPHI system, and on the generation and interpretation of fragmentary sub-parses in it. Next, we show how this framework is used to generate all possible connections between pairs of different fragment objects, and how probabilities and other features are assigned to these connections. 
We then show how we efficiently search the space of combinations of such links in order to find the minimal and plausible set of connections, and how such link combinations are turned into final interpretations. Finally, we give quantitative results, and discuss our future plans. OF FRAGMENTS The central notion in DELPHI's syntactic-semantic interface is the "grammatical relation". Grammatical relations include the familiar deep-structure complement relations of subject, direct-object etc., as well as various other adjunct relations, such as PP-COMP in the rule below: The special grammatical relation "HEAD" denotes the head of the phrase. All other grammatical relations are said to "bind" a constituent they label -their "argument" -to this head to make a new object of the same category as the head. Here, a PP argument is bound to an NP head to make a new NP. Binding operates on the semantic interpretation and subcategorization information of the head and on the semantic interpretation of the argument to produce the semantic interpretation of the new phrase. In principle, the relationship between inputs and output is completely arbitrary. In practice, however, it most often consists of an addition of a pair (RELATION, ARG-INTERP) to what are termed the "bindings" of the head input. For example, in the case of "flight on Delta" the pair added would be … In everything that follows, we will make this simplifying assumption. We can then speak of a translation R ~ r from a grammatical relation to a semantic relation. For the present example, this translation would be: PP-COMP(ON) -> FLIGHT-AIRLINE-OF where the grammatical relation PP-COMP is further subdivided by the preposition "ON" (and the requirements on semantic type are implicit from the relation FLIGHT-AIRLINE-OF). We will term such a translation a "realization rule" because it shows how the semantic relation FLIGHT-AIRLINE-OF can be syntactically realized in terms of an on-PP. The set of all such realization rules (large in number for a non-trivial domain) is stored in a knowledge base separate from the parser and interpreter code. The interpretation of any parse tree can now be represented as an isomorphic semantic tree, in which the nodes are the semantic interpretation objects of open-class lexical items and the links are the semantic relations between them. Such a structure can obviously also be represented as a set of n semantic objects and n-1 triples consisting of a semantic relation and head and argument semantic objects. For example, "Delta flies a 747 to Denver" would be represented in graph form as: … where a PP such as "to Denver" is represented as its NP object tagged by the preposition. When a complete parse of an utterance cannot be performed, we are left with a set of fragmentary analyses in the chart which correspond to constituent analyses of portions of the input string. The Fragment Generator (essentially the same as was reported on in [1]) extracts the most probable fragment sub-parses associated with the longest sub-strings of the input, using probabilities associated with the producing grammar rules (as in [5]). The semantic interpretations of the parse-fragments are treated in the same way as those of a complete parse: as a set of objects and triples. As a simple example, suppose we have the three fragments "to Boston", "Denver" and "Delta flights on Monday". 
Then the three corresponding sub-graphs are: … The problem of connecting the N fragments is then reduced to finding a set of relation-links which will connect a pair of objects in N-1 different fragments. COMPUTING THE LINKS AND THEIR PROBABILITIES The Semantic Linker first computes the link database, which is the set of all possible links between all pairs of objects in all pairs of different fragments. These links are computed using the same set of realization rules that drive the parser and semantic interpreter, and depend on the semantic types of the two objects and on the preposition tag (if any) of the second object. For the set of fragments in our example the link database is: 1a. … where the links are grouped together in an ordered list according to the fragment-pairs they connect. Since there are three fragments there are three pairs. Links have a set of features which are established when they are computed. The most important is the relational probability of the link, P(r | C1, C2), where r is the semantic relation of the link and C1 and C2 are the semantic classes of the two argument positions, where C2 may be tagged by a preposition. This is the probability that a pair of objects of type C1 and C2 are linked by the relation r in an interpretation (as opposed to by some different relation or by no relation at all). A corpus of interpretations generated by hand could be used to determine these probabilities, but in our work we have chosen to work with a set of sentences that can be correctly parsed by the regular DELPHI parser. Since the semantic interpretations of these parses are just sets of triples the probabilities can be determined by counting. Approximately 3000 interpretations are currently used for our work in ATIS. From this corpus, we can determine that the link 1a has a high (.89) probability of connecting a FLIGHT and CITY:TO object when these are present, whereas the link 3a has a near zero probability, since the relation NEARBY-CITY-OF occurs very infrequently between two cities. We have found it convenient to use the log of these probabilities, scaled up and rounded to the lowest negative integer, as the actual value of the link probability feature. Additionally, maximum and minimum values of this number are imposed, so that even a highly likely link has a small negative score (-1), and a highly unlikely link has a finitely negative one (-70). Links can have other features depending on assumptions made in computing them. For example, a link can be computed by ignoring the prepositional tag of the second object, in which case the link is given the feature "IGNORES-PREP". An example would be 1b above, which ignores the preposition "to". A link can also be computed by assuming a prepositional tag that is not present, giving the link the feature "ASSUMES-PREP", as in 3a, where the preposition "near" is assumed. As we shall see in the next section, these features are also assigned negative integers as penalties, balancing out any higher relational probability the link may have gained from the assumptions made by it. SEARCHING THE SPACE OF COMBINATIONS The problem of finding a connection between the N fragments is simply the problem of picking at most one link from each of the link-groups in the link database, subject to the constraints that all N fragments must be linked and that no links can be redundant. We can formalize these constraints as follows. 
Let LINKED be defined as holding between two fragments if there is a link between them (in either direction), and let TC(LINKED) be the transitive closure of this relation. Then the first constraint is equivalent to the requirement that TC(LINKED) hold between all different fragments F1 and F2. To formalize the non-redundancy constraint, let LINKED-L mean "linked except by link L". Then the non-redundancy constraint holds if there is no link L such that TC(LINKED) is the same as TC(LINKED-L). The problem as cast implies a search space in which each state is simply the set of links chosen so far, and a transition between states is the addition of a new link. We will find it convenient, however, to include all of the following components in a state: (1) the suffix of the link-database list; (2) chosen-links; (3) combinational features; (4) state score; (5) fragments-linked. The suffix of the link-database list consists of just the link-groups still available to be chosen. The combinational features are those arising from the combination of particular links, rather than from individual links themselves. The state score is the judgement of how plausible the state is, based on its features and those of its links. We want to find the most plausible success state, where a success state is one which satisfies the constraints above, as recorded on the fragments-linked slot. Pre-success states reside on the state queue. The state queue initially consists of just the single state START. START has a pointer to the complete link-group list, an empty set of combinational features and links chosen, and a score of zero. Search proceeds by selecting a state from the queue, and calling the function EXPAND-STATE on it to produce zero or more new states, adding these to the state queue and repeating until suitable success states are found or the queue becomes empty. Although this formulation allows the state space to be searched in any order, our implementation normally uses a best-first order choice. This simply means that at each selection cycle, the best pre-success states are chosen for expansion. The function EXPAND-STATE works by taking the first link-group from the link-group list suffix whose fragments are not already indirectly connected by the state and generating a new state for every link L in the link-group. The links-chosen of these new states are the links-chosen of the parent state plus L, and the link-group suffix is the remainder of the parent's link-group suffix. EXPAND-STATE also generates a single new state whose link-group list suffix is the remainder but whose links-chosen are just those of the parent. This state represents the choice not to directly connect the two fragments of the link-group, and is given the feature "SKIP". The score of a state is determined by summing the weighted values of its features and the features, including the log-probabilities, of its chosen links. Since the weights and log-probabilities are always negative numbers, the score of a state always decreases monotonically from the score of its parent, even in the case of a SKIP state. At this point in our example, the state S1 has the best score, since its probability score is good (-2) and it has no "blemish" features, unlike the state S2, whose link 1b has the IGNORES-PREP feature. The SKIP state S3 is also not as good as S1, because the weight assigned to SKIP (-7) is selected so as to only be better than a link whose probability is lower than .50. 
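Before the worked example continues, here is a compact Python sketch of the machinery just described: clamped, scaled log-probability link weights, a SKIP weight of -7, EXPAND-STATE-style transitions, and a best-first queue in which the first fully connected state popped is optimal because scores decrease monotonically. The data structures are simplified assumptions (links as tuples, no redundancy check, no hallucination), not the original BBN implementation.

# Compact sketch of the Linker's best-first search. A state is
# (negated score, tiebreak, links chosen, remaining link-groups); all
# weights are negative integers, so scores decrease monotonically and
# the first connected success state popped is the best one.
import heapq
import math

def link_weight(prob, penalties=0):
    """Clamped scaled log-probability plus feature penalties (all negative)."""
    return max(-70, min(-1, round(10 * math.log10(max(prob, 1e-7))))) + penalties

SKIP = -7  # chosen so SKIP beats any link whose probability is below ~0.50

def connected(links, n_frags):
    """Union-find test of TC(LINKED): do the chosen links span all fragments?"""
    parent = list(range(n_frags))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    for f1, f2, _rel, _w in links:
        parent[find(f1)] = find(f2)
    return len({find(i) for i in range(n_frags)}) == 1

def best_first_link(link_groups, n_frags):
    # each group: list of candidate links (frag1, frag2, relation, weight)
    queue = [(0, 0, [], link_groups)]
    tick = 0
    while queue:
        neg, _, links, rest = heapq.heappop(queue)
        if not rest:
            if connected(links, n_frags):
                return -neg, links       # best success state found first
            continue                     # (a real Linker would hallucinate here)
        group, rest = rest[0], rest[1:]
        for link in group + [None]:      # None is the SKIP transition
            tick += 1
            new = links if link is None else links + [link]
            w = SKIP if link is None else link[3]
            heapq.heappush(queue, (neg - w, tick, new, rest))
    return None

groups = [
    [(0, 1, "DEST-OF", link_weight(0.89)), (0, 1, "ORIG-OF", link_weight(0.30, -5))],
    [(0, 2, "ORIG-OF", link_weight(0.60))],
]
print(best_first_link(groups, 3))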
Thus, the state S1 is selected for expansion, resulting in the states S1-1, S1-2 and S1-3. The feature "CLASH", which results when a link with single-valued R (R a b) is combined with a link (R a b'), is assigned to S1-1, because it assigns the link 2a on top of 1a. The state S1-2 assigns the link 2b, which does not involve a clash. Both S1-1 and S1-2 are success states, and are therefore not expanded further. Search then returns to the SKIP state S3. Its children all have lower scores than the success state S1-2, however, and given the guarantee that score decreases monotonically, any eventual success states resulting from them can never be as good as S1-2. They are therefore pruned from the search. The same happens with the descendants of other expansion candidates. The queue then becomes empty, and the best success state S1-2 is chosen as the result of fragment combination. Hallucination Suppose that instead of the example we have an utterance that does not include the word "flights": Boston to Denver on Monday Delta This utterance generates the fragments "Boston", "to Denver", "on Monday" and "Delta". Clearly, no complete set of links can be generated which would fully connect this set, without an object of semantic class FLIGHT or FARE to act as a "hub" between them. To handle these situations, the Semantic Linker has a second type of state transition in which it is able to "hallucinate" an object of one of a pre-determined set of classes, and add link-groups between that hallucinated object and the fragment structures already present. In the ATIS domain, only objects of the classes FLIGHT, FARE, and GROUND-TRANSPORTATION may be hallucinated. The hallucination operation is implemented by the function EXTEND-STATE. It is invoked when the function EXPAND-STATE returns the empty set (as will happen when the input state's link-group list is empty) and returns states with the new link-groups added on, one for each of the allowed hallucination classes. These states are assigned a feature noting the hallucination, sub-categorized by the semantic class of the hallucinated object. Different penalty weights are associated with each such sub-categorized feature, based on the differences in probability of occurrence of the classes in corpora. In ATIS, FLIGHT hallucinations are penalized least of all, FARE hallucinations more, and GROUND-TRANSPORTATION hallucinations most of all. A state descended from one extended by hallucination cannot be extended again, and if it runs out of link-groups before connecting all fragments it is declared "dead" and removed from the queue. Handling Corrections and Other Features Several other combinational features influence the actions of the Semantic Linker with respect to such matters as handling speaker corrections and judging appropriate topology for the graph being built. Speaker corrections are an important type of disfluency: Tell me the flights to Denver uhh to Boston This will produce the fragments "Tell me the flights to Denver" and "to Boston". Since a flight can have only one DEST-OF the fragment "to Boston" cannot be connected as is. One strategy might be to ignore the "to" preposition and attempt to link "Boston" as an ORIG-OF with the IGNORE-PREP feature. This clearly would not produce the correct interpretation, however. The Linker provides an alternative when the clashing value is to the right of the existing value in the string. 
In this case, the link receives the combinational feature REPLACEMENT, which is not penalized strongly. If the relational probability of the DEST-OF link is good, it will defeat its IGNORE-PREP rival, as it should. Related to correction is the operation of merging, in which two nodes of a common semantic type are merged into one, and the appropriate adjustments made in the link-database and links-chosen for the state. This is appropriate for certain semantic classes where it is unlikely that separate descriptions (unless they are combined in a conjunction) will appear in an interpretation for the utterance: Show me flights to Boston flights to Boston at 3 pm Another feature influences the topology of the graph the Linker constructs. Nothing in the algorithm so far requires that the graph structure of connections ultimately produced remain a tree, even though the input fragment interpretations themselves are trees. It is perfectly possible, in other words, for there to be two links (R a b) and (R' a' b) in which the same node is shared by two different parents. Since we are not trying to produce a syntactic structure, but a semantic one in which the direction of relations is often irrelevant, we do not forbid this. It is discouraged, however, since it sometimes indicates an inappropriate interpretation. The combinational feature MULTI-ROLE is assigned to a state with such a combination of links, and is penalized. Finally, we point out that the log-probability perspective is useful for assigning penalties to features. If one has a link L1 that has a high relational probability but also has a penalty feature, and another link L2 with a lower relational probability but which does not have the penalty, one can decide how far apart in probability they would have to be for the two alternatives to balance -that is, to be equally plausible. The difference in log-probabilities is the appropriate value of the penalty feature. AFTER COMBINATION After the combination phase is complete, we have zero or more success states from which to generate the utterance interpretation. If there are zero success states, an interpretation may still be generated through the mechanisms of "scavenging" and "back-off". The Linker will find no success states either because it has searched the state-space exhaustively and not found one, or because pre-set bounds on the size of the space have been exceeded, or because the scores of all extensible frontier states have fallen below a pre-established pruning score for plausibility. In this case, the state-space which has been built up by the previous search is treated as an ordinary tree which the Linker scans recursively to find the optimum partial connection set, both in terms of fragment-percentage covered and in state score. This technique is termed "scavenging". In some instances there may not even be partial connection states in the space. In this case, the system looks for the longest fragment to "back off" to as the interpretation. In the formal evaluation of the DELPHI system conducted under DARPA auspices [6], both scavenging and back-off were aborted in cases where there were obviously important fragments that could not be included in the interpretation. This was done because of the significant penalty attached to a wrong answer in this evaluation. If there is more than one success state, the Linker picks the subset of them with the highest score. 
If there are more than a certain pre-set number of these (currently 2), the Linker concludes that none of them are likely to be valid and aborts processing. Once a suitable set of objects and triples has been produced, whether through combination, scavenging or back-off, the Linker must still decide which of the objects are to be displayed -the "topic" of the utterance. The topic-choice module for the Semantic Linker is fairly similar to the topic-choice module of the Frame Combiner reported on in [1], and so we do not go into much detail on it here. Basically, there are a number of heuristics, including whether the determiner of a nominal object is WH, whether the sort of the nominal is a "priority" domain (in ATIS, GROUND-TRANSPORTATION is such a domain), and whether the nominal occurs only as the second argument of the triples in which it occurs (making it an unconstrained nominal). The important new feature of the Semantic Linker's topic choice module is its ability to make use of links between a nominal object and a verb like "show" as evidence for topic choice. RESULTS AND DISCUSSION Results from the November 1992 DARPA evaluation [6] show that the Semantic Linker reduced DELPHI's Weighted Error rate on the NL-only portion of the test by 30% (from 32% to 22%). This was achieved mostly by dramatically lowering the No Answer rate (from 21% to 8%). It should be noted that these results were achieved with an earlier version of the Semantic Linker than that reported here. In particular, this earlier version did not make use of empirically determined probabilities, but rather used a more ad hoc system of heuristically determined weights and features. Nevertheless, these preliminary results give us some confidence in our approach. Several areas of future work are seen. One is the use of automatic training methods to determine feature weights. A corpus pairing sentences and sets of connecting links could be used in supervised training to adjust initial values of these weights up or down. Another area, one in which we are already engaged, is using the Semantic Linker in ellipsis processing by treating the preceding utterance as a fragment-structure into which to link the present, elliptical one. A third area of future work is the use of relational probabilities and search in the generation of fragments themselves. Currently, the fragment generator component is entirely separate from the rest of the Linker, which makes it difficult for combination search to recover from fragment generation errors. Instead of trying to combine fragments, the Linker could seek to combine the semantic objects internal to them, in a process where inter-object links found by the fragment generator would have a strong but not insurmountable advantage. A last area of future work is to more fully integrate the Semantic Linker into the regular parsing mechanism itself, and to investigate ways in which parsing can be viewed as similar to the linking process.
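As a small numeric illustration of the penalty-calibration idea discussed in the combination section above (the penalty that balances two alternative links equals their scaled log-probability difference), under the same scaling assumed in the earlier search sketch:

# Penalty calibration from the log-probability view: if a link with
# probability p1 plus a penalty should tie with a clean link of
# probability p2, the penalty is the scaled log-probability difference.
import math

def balancing_penalty(p1, p2, scale=10):
    """Penalty making score(p1) + penalty == score(p2), for p1 > p2."""
    return round(scale * (math.log10(p2) - math.log10(p1)))

# e.g., an IGNORES-PREP link at probability 0.80 should only win over a
# clean rival when the rival's probability falls below 0.40:
print(balancing_penalty(0.80, 0.40))   # -> -3 (scaled log10 of 0.4/0.8)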
2014-07-01T00:00:00.000Z
1993-03-21T00:00:00.000
{ "year": 1993, "sha1": "935a5b6405ae34a650b478ce31c19fa0f84007e2", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1075679&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "935a5b6405ae34a650b478ce31c19fa0f84007e2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256908548
pes2o/s2orc
v3-fos-license
Biological analysis of cancer specific microRNAs on function modeling in osteosarcoma Osteosarcoma (OS) is the most common bone tumor, characterized by a high risk of amputation and malignant morbidity among teenagers and adolescents. However, the relevant pathogenic/biological mechanisms underlying OS-genesis remain ambiguous. The aim of this study was to elucidate the functional relationships in microRNA-mRNA networks and to identify potential molecular markers via a computational method. The gene expression profile (GSE70415) was recruited from Gene Expression Omnibus. 3856 differentially expressed genes and 250 significantly expressed microRNAs were identified by using GCBI. The results of GO and KEGG pathway associated proteomics analysis indicated that extracellular matrix organization, small molecule metabolic process, cell adhesion (GO IDs: 0030198, 0044281, 0007155) and pathways in cancer, the PI3K-Akt signaling pathway, and metabolic pathways (pathway IDs: 5200, 4151, 1100) were significantly enriched. In addition, CKMT2, miR-93-5p and miR-29b-3p were found to be positively/negatively correlated with TP53, EGFR, and MMP member mediated OS development, including angiogenesis, migration and invasion. Further visualization of the collective effect of 1181 microRNA-mRNA pairs and protein-protein interactions was realized by applying cytoscape. In summary, our work provided a better understanding of the non-coding regulatory mechanisms of transcriptomics and unraveled essential molecular biomarkers in osteosarcoma. Osteosarcoma (OS) is the most frequent primary bone malignancy, characterized by a high potential for lung metastasis, and has been the third most common cancer-associated threat to adolescents 1 . It occurs most often at the extremities of long bones, where osteoblasts transform into mature bone tissue. However, the putative molecular mechanisms underlying OS carcinogenesis have not been deciphered completely and remain a challenge. Hitherto, cumulative evidence [2][3][4][5][6][7] has demonstrated that a variety of factors including microRNAs (miRNAs), a group of non-coding RNAs, are involved in OS development. The first study on miRNA expression in OS, published by Gao et al. 8 , identified 182 differentially expressed miRNAs (DEmiRNAs), accelerating the revelation that miRNAs may have an obscure but critical impact on OS pathogenesis. Recent studies 9-12 also suggested that miR-1, -409-3p, -379, -665 and -489-3p function as sequence-specific tumor suppressors mediating primary OS proliferation, cell death and even distant metastasis. Alternatively, the development of high throughput testing technology (microarray, next-generation sequencing) has made it convenient to acquire large-scale genetic data. The bioinformatics approach, uniting biology, mathematics, and computer science, has further widely facilitated molecular mechanism explanation and the discovery of tumor-correlated diagnostic markers. RNA-sequencing 13 has found that large numbers of genes become differentially expressed along the course of malignant bone transformation. By comparing mRNA expression profiles between OS tissues and cell lines and xenografts, Kuijjer M L et al. 14 initially achieved histological subtype classification (osteoblastic, chondroblastic, fibroblastic) at the transcriptome level. In parallel, epigenetic events and the RUNX2 interactome were identified to be constitutively activated in OS 15 . 
Nevertheless, the targeting networks of miRNAs to mRNAs underlying osteosarcomagenesis have not yet been systematically interpreted. MiRNAs are essential components in biological homeostasis and the current paper … Results DEGs and DEmiRNAs between hMSC and OS cell lines. The sample set GSE70415, which consists of miRNA (GSE70367) and mRNA (GSE70414) expression profiles of five human OS cell lines (MG63, Saos, HOS, NY, Hu09) and a corresponding control (hMSC), was obtained from Gene Expression Omnibus (GEO). Following a standard protocol 16 of sample qualification and normalization, raw expression values were summarized and analyzed in a consecutive workflow (seen in Fig. S1) based on GCBI. In total, 3856 (P < 0.01) significant DEGs were identified, of which 1705 were over-represented and 2151 showed attenuated expression (Fig. 1a). Periostin (POSTN), a canonical osteoblast marker, not only exhibited the most significant decline in the whole collection, but a recent study has also verified the hypothesis that its aberrant stimulation is involved in bevacizumab-induced resistance in cases of glioma treated with anti-VEGF-A therapy 17 . Meanwhile, 250 (P < 0.01) DEmiRNAs were picked out from the microRNA repertoire, comprising 161 up-regulated items and 89 down-regulated miRNA episodes (Fig. 1b). As a limitation, miR-182-5p and miR-708-5p, which showed the largest deviation (absolute fold change |FC| > 100) within the current data set, could both not be tracked among the 81 small sequences in the curated Osteosarcoma Database 5 . In the course of clarifying microRNA-engaged epigenetic reprogramming, a potential connection between both of them and vorinostat, an approved histone deacetylase inhibitor, was further validated in 143B and MG63 cells (data have not been published). In addition, the statistically significant DEGs partially overlapped (about 7%) with the accumulated information after matching to the 911 trustworthy entries within the Osteosarcoma Database (seen in Fig. S2). The full tables of DEGs and DEmiRNAs were included in Tables S2 and S3. Functional enrichment of DEGs and DEmiRNAs between hMSC and OS cell lines. As known, tumorigenesis is characterized by a number of biological disorders and instances of cellular event dysregulation, such as angiogenesis, cell adhesion, and signaling transduction. Thus, it is absolutely necessary to unravel the discrepant biological processes and pathways recruited during neoplasia. In the enrichment modules, 395 GO records and 142 KEGG pathways (full tables can be seen in Tables S4 and S5) were verified by employing Fisher's exact test and FDR correction 18 . Moreover, we annotated the top-ranked 20 GO terms and KEGG pathways respectively without distinguishing biological process (BP), cell component (CC) and molecular function (MF) (Fig. 2a and b). It is obvious that the top three enriched biological processes contained extracellular matrix organization, small molecule metabolic process, and cell adhesion (GO IDs: 0030198, 0044281, 0007155). Meanwhile, pathways in cancer, the PI3K-Akt signaling pathway, and metabolic pathways (pathway IDs: 5200, 4151, 1100) were the three most significantly concentrated pathways through which oncogene silencing was switched on or off. Both GO and KEGG pathway enrichment analyses showed a peak distribution of DEGs in metabolic dysfunction. To some extent, this was consistent with the previous consensus 19 that tumor events, such as proliferation, metastasis and angiogenesis, could be partially attributed to the hypermetabolic activity of the neoplasm. 
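A minimal sketch of the over-representation testing described above (Fisher's exact test per GO/KEGG term on a 2x2 table of DEG versus background membership, with Benjamini-Hochberg FDR control) in Python, assuming hypothetical gene identifiers and term sets; GCBI's actual pipeline may differ:

# Sketch: per-term Fisher exact test with BH-FDR correction.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def enrich(deg, background, term_sets, alpha=0.05):
    deg = set(deg) & set(background)
    pvals, terms = [], []
    for term, genes in term_sets.items():
        genes = set(genes) & set(background)
        a = len(deg & genes)                 # DEGs in the term
        b = len(deg) - a                     # DEGs outside the term
        c = len(genes) - a                   # non-DEG background in the term
        d = len(background) - a - b - c      # the remaining background
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        pvals.append(p); terms.append(term)
    reject, fdr, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [(t, p, q) for t, p, q, r in zip(terms, pvals, fdr, reject) if r]

# toy universe of 1000 genes, 100 DEGs, two hypothetical terms
background = [f"g{i}" for i in range(1000)]
deg = background[:100]
term_sets = {"GO:0030198": background[:40] + background[500:520],
             "GO:0007155": background[300:360]}
print(enrich(deg, background, term_sets))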
Besides, the MAPK signaling pathway, pathways in cancer, and the cell cycle (pathway IDs: 4010, 5200, 4110) acted as leading initiators mediating follow-up aberrant pathway cascades, as assessed by the determination coefficient (Fig. 3a). Intensive pathways featuring more than 10 contribution degrees were formatted into Table 1. Gene interplay and co-expression networks. To further explore and clarify how messages or communication flow from member to member scattered across the crossing pathways, visualization and cluster analysis of hub genes were accomplished using cytoscape 3.4.0. We picked out 698 overlapping genes derived from the GO and KEGG pathway analyses and applied them to gene-signal (shown in Fig. S4) and co-expression network construction (Fig. 3b). As the co-expression graphic illustrated, correlative genes positively or negatively interacted with their neighbors in a non-directional nested manner. According to the MCODE 20 analyzer, 19 subordinated nodes intimately clustered to creatine kinase, mitochondrial 2 (CKMT2), also known as SMTCK, which is indispensable for maintaining rational energy metabolism. Thus, our colleagues later tested the hypothesis that CKMT2 might act as a key regulatory factor participating in osteosarcomagenesis (data have not been published). Target prediction and miRNA-target interaction. MiRNAs, a group of well-known endogenous non-coding RNAs, usually act as transcription regulators during gene expression through binding to the 3′-untranslated region (3′-UTR) of target mRNAs. It is explicit that the diversity of miRNAs resulting from the length or alignment of the seed region complicates regulatory models. Thus, a further understanding of the network association between miRNAs and mRNAs is extremely needed. By utilizing GCBI, which integrates the TargetScan 21 and miRanda 22 databases, we mined out 250 DEmiRNAs with an up-to-down ratio of 161/89 (shown in Fig. 1b). Abiding by the base-pairing principle, 29227 genes were found deposited in the target pools (TargetScan and miRanda). Conversely, 388 were substantially involved in GO enrichment (seen in Fig. S3) and 608 were mingled with DEmiRNAs regardless of exact binding pair bases. To delineate the miRNA-mRNA axis vividly, we postulated index degrees which changed by no less than 10 to be of significance for transcription function in our research and deeply screened the interaction networks of the 40 selected DEmiRNAs (shown in Table S1). Illustration of the connective networks of miRNAs and corresponding targets was realized using cytoscape 3.4.0. In summary, 238 downstream genes were blocked and 181 targets were found to be in an activated status (Fig. 4a and b). The results showed that both the lower-expressed miR-29b-3p and the over-presented miR-93-5p were hub miRNAs possessing the most significant impact on gene transcription and even protein function. Protein-protein interaction in OS cell lines. To study the protein-protein interactive associations of DEGs mediated by DEmiRs, we screened 35 typical DEmiRs (FC ≥ 10 compared to control) and integrated the protein-protein interaction (PPI) network of the target mRNAs under their regulation by means of STRING 10.0. Six isolated nodes (hsa-miR-941, -127-3p, -487b-3p, -34a-3p, -493-3p, -654-3p) without a microRNA-mRNA joint, as well as molecules absent from functional (GO or KEGG) participation, were eliminated. The 43 genes that emerged were then employed to construct the PPI network by using cytoscape 3.4.0 (Fig. 5a). 
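The network construction step can be illustrated with a short networkx sketch: a bipartite miRNA-mRNA graph built from predicted regulator-target pairs, hub ranking by degree as a simple stand-in for the contribution-degree screen, and export of an edge list for cytoscape. The pairs below are illustrative only, not the study's predicted set.

# Sketch: bipartite miRNA-mRNA graph, hub ranking, edge-list export.
import networkx as nx

pairs = [("miR-29b-3p", "MMP2", "down"), ("miR-29b-3p", "COL1A1", "down"),
         ("miR-93-5p", "TP53", "up"),    ("miR-93-5p", "EGFR", "up"),
         ("miR-93-5p", "MMP2", "up")]

G = nx.Graph()
for mirna, target, direction in pairs:
    G.add_node(mirna, kind="miRNA")
    G.add_node(target, kind="mRNA")
    G.add_edge(mirna, target, regulation=direction)

# rank hubs by degree (miR-93-5p and MMP2 come out on top here)
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print("hub nodes by degree:", hubs[:3])
# export an edge list that a tool such as cytoscape can import
nx.write_edgelist(G, "mirna_mrna_edges.txt", data=["regulation"])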
Within the resulting model, nodes both verified and as-yet unverified, such as FOXO1, BMP, and members of the COL and ITG families, were predicted to interact with members essential for pathway perturbation, among them classical factors such as TP53, EGFR and MMP2.

Figure 2. Representative GO and KEGG pathway enrichment analysis of osteosarcoma. Significantly changed GO terms (a, left) and KEGG pathways (b, right) of predicted DEGs are illustrated. The left y-axis is titled −log10 P and the right y-axis shows the number of DEGs, while the x-axis shows the GO/KEGG category. A larger −log10 P indicates a smaller P-value. −log10 P: negative logarithm of the P value.

Figure 3. Co-expression network analysis of osteosarcoma. The significantly coefficient KEGG pathway network (a, left) is visualized with increasing index degrees (circles from cyan to red). Co-expressed DEGs were integrated into networks using bioinformatics methodology (b, right). Positive/negative relationships among common genes (rectangles, blue) and tightly clustered elements (purple and green) are displayed in different colors (red and black).

Discussion

In this study, we provide the first systematic miRNA-mRNA functional model based on expression profiles of the OS transcriptome. Unlike previous research focusing on individual elements, we analyzed a large number of molecules and integrated them into a functional network using a bioinformatic approach. This research not only advances the understanding of small non-coding RNA disorder hidden in oncogenesis and even chemoresistance, but is also valuable for clinical early screening and the development of targeted therapy 23, although the underlying disturbance of genetic or microenvironmental origin remains a challenge. By microarray analysis, we identified 3856 mRNAs and 250 miRNAs that significantly diverged in OS cells. POSTN, mainly involved in osteoblast adhesion and differentiation 24,25, was found to be remarkably decreased in the OS subgroups compared to the normal set. Intriguingly, however, expression of POSTN has been reported to remain at a high level in OS compared to osteochondroma, and high POSTN content correlates strongly with tumor angiogenesis and poor prognosis in OS as well as in high-grade glioma in vivo 17,26,27. Probable reasons for this discrepancy are inconsistencies in sample type (cell lines versus specimens) and detection approach (RNA microarray versus immunohistochemistry). Subsequently, the functional enrichment analysis demonstrated that metabolic pathways played an important role, and a large number of cancer-associated pathways were distinguished, including PI3K-Akt and MAPK signaling. There is reason to believe that chemoresistance is related to metabolic abnormality, as miR-221, -101, -22 and -155 28-31 have already been shown to participate in cisplatin- and doxorubicin-derived chemoresistance, consistent with our investigation of SAHA in relation to miR-182-5p and -708-5p in OS cells. In addition, activation of the PI3K-Akt pathway suppresses cell longevity through phosphorylation of FOXO members, and the balance of its activity with the MAPK and NF-κB pathways is intimately associated with tumor survival 1. Stimulation of MAPK signaling, in turn, has been linked to elevated EGFR phosphorylation and MMP-9 levels mediated by reduced miR-143 in OS 32. Beyond the miRNA-pathway relationships verified so far 5,7,33, the newly discovered miRNAs notably expand the spectrum of OS-related miRNAs.
Furthermore, modeling of the miRNA-mRNA networks was achieved using a well-established tool to visualize the intricate node connections (Fig. 4a and b). Although not the most altered, miR-29b-3p and miR-93-5p were the two core upstream elements targeting transcription; miR-29b-3p-induced OS suppression has been affirmed to involve tumor-specific subcellular localization 34, whereas the most significantly changed miR-182-5p and miR-708-5p displayed relatively moderate or even low contribution degrees. It appears that the efficiency of miRNAs is not simply determined by the magnitude of their variation but depends on the critical GO terms and pathways involved. In summary, 1181 linkages were established in the current study, a striking advance for non-coding-unit-mediated OS carcinogenesis. The bioinformatics approach combining GCBI and Cytoscape offers a pipeline, from data processing to pattern display, that facilitates multidimensional molecular interaction and model analysis based on prior data and improved multidisciplinary algorithms. A limitation is that the present paper explains only the relationship between microRNAs and coding targets; further work should incorporate other non-coding factors, including long non-coding RNAs and the competitive mechanisms mediated by circular RNAs. There is no doubt that combinational strategies identifying the group effect of non-coding RNAs, mRNAs, proteins and even small inhibitors and drugs would be potent approaches and might bring a breakthrough.

Methods

Differentially expressed genes and miRNA-mRNA analysis.

To identify DEGs and DEmiRNAs between OS and hMSC cell lines, the web-based online tool GCBI (www.gcbi.com.cn/gclib/html/index) was used. Entry qualification and calibration were achieved with the standard Median Polish algorithm 16. Only probe signals with P-values < 0.01, false discovery rate (FDR) < 0.01 and absolute fold change (FC) > 2 were considered statistically differential. Gene-signal and co-expression networks were further constructed based on contribution degrees according to the GCBI protocol (http://college.gcbi.com.cn/helpme).

Enrichment analysis and network construction.

For visualization, Cytoscape 3.4.0 36 (http://www.cytoscape.org/), an open-source platform, was used to portray the relationships among target molecules. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis 18 for DEGs was performed with GCBI, as mentioned above. The DEGs and DEmiRNAs given previously were selected to construct the networks. Molecular Complex Detection (MCODE) 20, based on vertex weighting by local neighborhood density and outward traversal from a locally dense seed to isolate dense regions, was employed to find molecular complexes.
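The DEG cutoffs given above (P < 0.01, FDR < 0.01, |FC| > 2) reduce to a row filter over a per-probe statistics table. A minimal pandas sketch; the column names ('p', 'fdr', 'fc' for signed fold change) are hypothetical:

```python
import pandas as pd

def call_degs(stats: pd.DataFrame) -> pd.DataFrame:
    """Apply the paper's cutoffs: P < 0.01, FDR < 0.01 and |FC| > 2,
    with 'fc' as a signed fold change (negative = down-regulated)."""
    mask = (stats["p"] < 0.01) & (stats["fdr"] < 0.01) & (stats["fc"].abs() > 2)
    degs = stats[mask].copy()
    degs["direction"] = degs["fc"].apply(lambda fc: "up" if fc > 0 else "down")
    return degs
```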
The prevalence and major determinants of non-compliance with anti-seizure medication among children

Background: A wide range of adherence to the use of anti-seizure medications has been reported among children with the disease, and accordingly, various factors affecting the degree of adherence have been reported. In our society, however, there is no clear picture of drug adherence and its related factors among children with seizures. We evaluated the frequency of adherence to anti-seizure medication as well as the related factors.

Methods: This cross-sectional study was conducted on 120 children with epilepsy referred to Ali Asghar Hospital in Tehran, Iran, during 2019 and 2020. Along with demographic characteristics, adherence to antiepileptic medications was assessed by the Modified Morisky Scale (MMS).

Results: The overall frequency of adherence to anti-seizure medications among children was about 41.7%. Among all baseline characteristics, much higher adherence was revealed in patients with educated parents. The rate of drug adherence in children with a history of perinatal morbidities was much lower than in other patients. The type of seizure could also affect the rate of drug adherence, as the highest and the lowest adherence were found for focal impaired awareness seizures (57.1%) and atonic seizures (11.1%), indicating a significant difference (P = 0.022). The most common causes of non-adherence to treatment were expressing inability to treat the patient (23.0%), parents' forgetfulness in giving medicine to the child (18.3%), and not taking medication when traveling or leaving home (16.7%).

Conclusion: A lower level of parental education, the type of seizure, and the presence of underlying perinatal morbidity in the child can predict non-compliance with anticonvulsant medication regimens among affected children.

Introduction

Epilepsy is a disease characterized by an enduring predisposition to generate epileptic seizures and by the neurobiological, cognitive, psychological, and social consequences of this condition. Seizures are caused by abnormal cortical neuronal hyperexcitability and may occur with abnormal or normal-appearing brain scans. 1,2 In addition to physical injuries, epilepsy also harms the individual, social, and economic aspects of the patient's life. Proper control of epilepsy with the use of anti-seizure medications prevents many side effects. 3 The incidence of epileptic seizures is higher in the early years of life, and in 42% of cases seizures occur before the age of 22. 4 In Iran, the prevalence of epilepsy is estimated at 1.3% of the total population. 5 Treatment of epilepsy is based on drug therapy, and in recent years the number of anti-seizure medications approved by the Food and Drug Administration (FDA) has increased dramatically. However, seizures remain uncontrolled with medication in 30% to 35% of patients. 6 According to the results of studies, the most common cause of recurrence of epileptic seizures is non-compliance with the medication regimen or arbitrary discontinuation of these drugs. 7 Various definitions of adherence to the drug regimen have been presented; in a comprehensive definition, "adherence to the medication regimen is the use of prescribed drugs at the correct time and amount, and continuing to use them during the prescribed period".
Evidence suggests that non-adherence to the medication regimen is a major problem in patients with chronic diseases such as hypertension (HTN), asthma and other chronic respiratory diseases, diabetes, and epilepsy. 8 Non-adherence not only reduces the effects of treatment but also increases the financial burden associated with chronic diseases. 9,10 Epilepsy is also a chronic disease, and non-adherence to the medication regimen is one of the problems in its treatment. It has been determined that 30% to 60% of patients with epilepsy do not adhere to the medication regimen. 6 Studies show a significant relationship between adherence to the drug regimen and both control of epileptic seizures and mortality. In one study, the mortality of patients with epilepsy who did not adhere to the medication regimen was three times higher than that of patients who did. 11 It should be noted that the degree of adherence varies across conditions: in those who have just started treatment, and in patients with acute illness for whom long-term drug use does not seem necessary, the rate of adherence is high. 5 Adherence then decreases over time, such that 21% of patients stop their medication after three months and 88% after one year. Poor adherence to the medication regimen is a serious problem, because almost half of patients with chronic diseases do not take their prescribed medications. 7 In one study, one-third of adolescents with epilepsy showed poor adherence to treatment. 11 In the social dimension, people with epilepsy may experience social isolation and limitations, or may feel unable to work and be rejected by others. All of these factors reduce their level of psychosocial performance, self-efficacy, and quality of life (QOL). 12 Our study was designed to determine the factors affecting drug adherence in patients with epilepsy in order to identify and eliminate them and to help improve the treatment process and QOL of children and adolescents with epilepsy.

Materials and Methods

This cross-sectional study was conducted on 120 children with epilepsy referred to Ali Asghar Hospital in Tehran, Iran, during 2019 and 2020. Samples were selected by convenience sampling based on the inclusion criteria: a definite diagnosis of the disease by a neurologist, age 1 to 15 years, absence of physical and mental disability, literacy, and use of at least one anticonvulsant drug for at least 6 months. The data collection tool consisted of two questionnaires. 1) A questionnaire containing demographic information, completed by the participants, which was prepared after studying texts and articles related to drug adherence and was then given to experts to establish its content validity. In addition to demographic characteristics such as age, sex, education, marital status and occupation, possible factors related to drug adherence, the number of seizures in the past 3 months, and drug side effects were also examined. 2) The Modified Morisky Medication Adherence Scale (MMAS), also known as the Modified Morisky Scale, designed by Morisky et al. in 1986 to assess drug adherence in different disorders. 13 This tool includes eight 5-point questions based on a Likert scale, with each item scored from 0 (never), 1 (rarely), 2 (occasionally), 3 (often) to 4 (always).
The four domains of this questionnaire are: 1) forgetfulness in taking the drug, 2) carelessness when taking the drug, 3) stopping the drug if there are no seizures, and 4) stopping the drug due to its side effects. Each question is scored from 0 to 4, giving a total score range of 0 to 16, where a higher score indicates lower adherence. Content validity and face validity methods were used to assess the data collection tools. After a review of several books and articles, the epilepsy self-management behaviors questionnaire and the demographic profile form were made available to 10 faculty members of the School of Nursing and Midwifery of Iran University of Medical Sciences, Tehran, 5 neurologists, and 5 patients with epilepsy, who judged them in terms of content and face validity; the necessary changes were made according to their comments and suggestions. After collecting patients' background information with the demographic questionnaire, the MMAS questionnaire was provided to patients and their parents, and information on the degree of adherence to medication regimens was collected. For statistical analysis, results were presented as mean ± standard deviation (SD) for quantitative variables and summarized as frequency (percentage) for categorical variables. Continuous variables were compared using the t-test, or the Mann-Whitney test whenever the data did not appear to be normally distributed or the assumption of equal variances was violated across the study groups. P-values ≤ 0.05 were considered statistically significant. SPSS statistical software (version 23, IBM Corporation, Armonk, NY, USA) was used.

Results

In the present study, a total of 120 children with seizures admitted to the hospital were included. The mean age of patients was 7.58 ± 4.36 years (range 1 to 15 years), and 62.5% were boys. Baseline characteristics are summarized in Table 1. Most fathers and mothers had a good level of education. In terms of the type of seizure, 29.2% had generalized tonic-clonic seizures, 17.5% focal impaired awareness seizures, 15.0% atonic seizures, 9.2% non-motor seizures, 16.7% tonic seizures, and 12.5% other types. In total, 43 cases (35.8%) had a family history of seizures: 18 in the father, 8 in the mother, 5 in a brother, 7 in a sister, and 5 in other relatives. A history of perinatal disorders was reported in 34 cases (25.3%), including 6 cases of kernicterus, 11 of asphyxia, 11 of cerebral palsy, and 6 of metabolic disorders. A previous history of anticonvulsant medication was reported in 61.7% of patients. Table 2 presents the antiepileptic medications in use, along with dosage and duration of use. In terms of adherence to prescription drugs, optimal adherence was observed in 50 of the 120 patients studied (41.7%). The most common causes of non-adherence to treatment (Table 3) were expressing inability to treat the patient (23.0%), parents' forgetfulness in giving medicine to the child (18.3%), and not taking medication when traveling or leaving home (16.7%). As shown in Table 4, drug adherence was independent of patients' sex (P = 0.069), age (P = 0.185), family history of seizure (P = 0.459), and type of drugs (P = 0.468).
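The P-values in these comparisons follow the Methods' recipe of a t-test versus a Mann-Whitney test, chosen according to distributional checks. A toy Python sketch of the scale total and the test choice; the scores below are simulated stand-ins, not the study data:

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, shapiro

def morisky_total(item_scores):
    """Modified Morisky total as described above: each item scored 0-4,
    summed to a 0-16 total; higher = poorer adherence."""
    assert all(0 <= s <= 4 for s in item_scores)
    return sum(item_scores)

# Hypothetical per-child totals for two subgroups (e.g. more- vs
# less-educated parents); real values would come from the questionnaires.
rng = np.random.default_rng(0)
a = rng.integers(0, 17, 60)
b = rng.integers(0, 17, 60)

# t-test when both samples look roughly normal, Mann-Whitney otherwise.
if shapiro(a).pvalue > 0.05 and shapiro(b).pvalue > 0.05:
    stat, p = ttest_ind(a, b)
else:
    stat, p = mannwhitneyu(a, b)
print(f"p = {p:.3f}")
```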
However, adherence to antiepileptic medications was significantly higher in patients with more educated fathers (P = 0.002) and mothers (P = 0.035), and in those without perinatal morbidities (P = 0.001). The type of seizure also affected the rate of drug adherence: the highest and lowest adherence were found for focal impaired awareness seizures (57.1%) and atonic seizures (11.1%), respectively, a significant difference (P = 0.022). The type of medication, however, did not affect the rate of adherence (P = 0.468) (Figure 1).

Discussion

Full adherence to anticonvulsant medication is critical, especially in affected children. In addition to complete parental supervision of how the medication is given, the patient's own willingness to take the medication can be considered a factor influencing implementation of the medication protocol. Various studies have reported a wide range of adherence to anti-seizure medications among children with the disease and, accordingly, various factors affecting the degree of adherence. In our society, however, there is no clear picture of drug adherence and its related factors among children with seizures. We therefore evaluated the frequency of adherence to anti-seizure medications and its related factors. In this evaluation, we first showed that the overall frequency of adherence to anti-seizure medications among children was about 41.7%, which lies in the middle of the range of adherence reported in other studies. In the study of Yang et al., 14 the frequencies of complete, partial, and poor adherence to anticonvulsant therapy were 21.3%, 51.4%, and 27.3%, respectively. In the study of Shetty et al., 15 30.9% of 320 children adhered to prescribed medications. In the study of Modi et al., 16 the adherence rate within one month of the start of treatment was 79.4%. In the study of Jacob et al., 17 adherence to anti-seizure medications was 68.9%. In a systematic review by Yang et al., 18 the rate of drug adherence was estimated at between 22.1% and 96.5%, with an overall average of 55%. In another study, 20 adherence to anticonvulsant therapy was 70.1% during the first year, falling to 56.8% in the second year. A review of different studies therefore shows a very diverse degree of adherence to anti-seizure treatment in children, which suggests the involvement of a wide range of factors, especially demographic characteristics. In a second step, to explain such a large difference in the frequency of drug adherence, we evaluated the features associated with adherence and showed, first, that among all baseline characteristics, adherence was much higher in patients with educated parents. It may be that the awareness of educated parents about the importance of medication in epileptic patients, and their sense of responsibility for following up the disease, are greater than in parents at a lower educational level. As a second finding, the rate of drug adherence in children with a history of perinatal morbidities was much lower than in other patients.
In other words, the main reason for the decrease in drug adherence among children with a history of perinatal morbidities seems to be parents' fear of worsening their child's clinical condition with anti-seizure medications, as well as fear of severe side effects in these children after taking the drugs. In contrast, other factors, including gender, age, family history of seizures, previous history of anticonvulsant drug use, and type of anticonvulsant drug, did not affect drug adherence in these patients. Comparison of the findings of this study with other studies indicates broad similarity with previous work. In the study of Yang et al., 14 patient age, type of seizure, total family income, and source of drug information were identified as factors related to adherence, which differed considerably from the findings of our study. In the study of Shetty et al., 15 there was no relationship between the degree of adherence and clinical features such as gender, duration of the seizure period, other underlying clinical problems, and seizure frequency. In the study of Modi et al., 16 adherence depended only on socioeconomic level and was not affected by gender, age, type of seizure, type of prescription drug, frequency of seizures, or duration since seizure onset, which is quite similar to our study. In the study of Jacob et al., 17 children in West Germany, where the socioeconomic level is much higher, showed better drug adherence than children in East Germany, but the degree of adherence was not related to the type of drug used. In the systematic review by Yang et al., 18 family financial support, family size, support from health care institutions, and higher socioeconomic status were identified as factors affecting compliance; this review also showed that results regarding the effect of age, frequency of seizures, type of seizure, type of drug, and number of prescribed drugs were completely contradictory. In the study by Nazziwa et al., 19 non-compliance was significantly lower among children whose parents were employed. Besides, in the study by Lee et al., 20 patients who started treatment before the age of one year, patients treated with older-generation drugs, and those who had localized seizures were much less likely to follow treatment. Overall, regarding the factors related to the lack of drug adherence among children with seizures, what can be emphasized, and seems certain, is that reduced drug adherence in families with lower social and economic levels is accompanied by a low level of awareness about the importance of this adherence. Increasing the awareness of such families can therefore lead to increased treatment adherence.

Conclusion

As a final result, the frequency of adherence to the medication regimen among children with seizures is estimated at 41.7%. A lower level of parental education, the type of seizure, and the presence of underlying perinatal morbidities in the child predict non-compliance with anticonvulsant medication regimens, while gender, age, family history of seizures, history of taking anti-seizure medications, and type of prescription drugs do not affect this drug adherence.
Statistical and machine learning approaches to predict the necessity for computed tomography in children with mild traumatic brain injury

Background: Minor head trauma in children is a common reason for emergency department visits, but the risk of traumatic brain injury (TBI) in those children is very low. Physicians should therefore weigh the indication for computed tomography (CT) to avoid unnecessary radiation exposure to children. The purpose of this study was to statistically assess the differences between control children and those with mild TBI (mTBI). In addition, we investigated the feasibility of machine learning (ML) to predict the necessity of CT scans in children with mTBI.

Methods and findings: The study enrolled 1100 children under the age of 2 years in order to assess pre-verbal children. Other inclusion and exclusion criteria followed the PECARN study. Data such as demographics, injury details, medical history, and neurological assessment were used for statistical evaluation and creation of the ML algorithm. The numbers of children with clinically important TBI (ciTBI), mTBI on CT, and controls were 28, 30, and 1042, respectively. Statistical significance between the control group and clinically significant TBI requiring hospitalization (csTBI: ciTBI + mTBI on CT) was demonstrated for all non-parametric predictors except severity of the injury mechanism. The comparison between the three groups also showed significance for all predictors (P < 0.05). This study showed that a supervised ML model for predicting the need for a CT scan can be generated with 95% accuracy. It also revealed the significance of each predictor in the decision tree, especially the "days of life."

Conclusions: These results confirm the role and importance of each of the predictors mentioned in the PECARN study and show that ML can discriminate between children with csTBI and the control group.

These observations raise concerns that many of the CTs performed for this indication unnecessarily expose children to radiation, which is harmful in the long run, leading to an increased risk of secondary malignancies [20-22]. In particular, in many children, history, physical examination, and observation over a period of time are sufficient to rule out significant intracranial injury [23-25]. It is important for physicians in the emergency department to decide whether or not to perform CT for children with head trauma. Clinical decision rules such as PECARN provide an excellent algorithm to identify children with clinically important traumatic brain injury (ciTBI) and have prevented many unnecessary head CT scans in children [26-28]. Artificial intelligence (AI) uses computer systems to simulate cognitive abilities to achieve goals. Machine learning (ML) classification is one of the domains of AI that enables an algorithm or classifier to learn patterns in large, complex datasets and produce useful predictive outputs. The number of published ML studies in neurosurgery is increasing [29-33]. Some of them have focused on the application of ML algorithms to support clinical decision-making in neurosurgery [30]. However, no studies have yet been published on the use of ML to predict the necessity of CT in children with mTBI. The purpose of this study was to clarify two issues regarding mTBI and the requirement for a CT scan. First, we statistically assessed the differences in the PECARN predictors between the control children and the children with mTBI.
Second, we evaluated the feasibility of ML to predict the necessity of CT scans in children with mTBI.

The definition of ciTBI included death from TBI, neurosurgical intervention, intubation for more than 24 h, or hospital admission of 2 nights or more. The definition of mTBI on CT included intracranial hemorrhage or contusion, cerebral edema, traumatic infarction, diffuse axonal injury, shearing injury, sigmoid sinus thrombosis, midline shift of intracranial contents or signs of brain herniation, diastasis of the skull, pneumocephalus, and skull fracture depressed by at least the width of the table of the skull. We defined clinically significant TBI (csTBI) as comprising ciTBI and mTBI on CT, because both require at least hospital admission for observation or further treatment. CT scans were obtained at the clinician's discretion with helical CT scanners, with radiographic slices separated by 5 mm or less. Before the application of the PECARN criteria, criteria for performing CT scans in our hospital were based on physician judgment and caregiver preference, although children with impaired consciousness, a history of LOC, or a history of seizures were of course considered. CT scans were interpreted by site board-certified neurosurgeons.

Selection of predictors

Risk predictors were based on those of the PECARN study [26], including gender, severity of the injury mechanism, history of loss of consciousness (LOC), LOC duration, history of vomiting, number of vomiting episodes, acting abnormally per caregivers, Glasgow Coma Scale (GCS), altered mental status, signs of basilar skull fracture, palpable fracture, and scalp hematoma. Age was recorded in days in this study. Injury mechanisms were divided a priori into three categories [26]: severe, moderate, and mild. These predictors, except for gender and days of life, were categorized as shown in Table 1.

Data analysis

For the two-group comparison of control and csTBI, an unpaired t-test and the Mann-Whitney U test were used to determine significance for parametric and non-parametric data, respectively. We also performed a three-group comparison among control, mTBI on CT, and ciTBI. For parametric and non-parametric data, unpaired (between-groups) one-factor analysis of variance with multiple comparisons, and multiple comparisons by Ryan's method using the Mann-Whitney U test, were applied, respectively. All hypothesis tests were conducted against a 2-sided alternative. P values were considered statistically significant when less than 0.05.

Machine learning

Our primary analysis sought to assess the predictive accuracy of a local, big-data-driven machine learning approach based on the previously published clinical decision rules and traditional analytic techniques for classification. A decision tree was selected as the machine-learning-based model. This study used Python version 3.7 with accompanying packages such as scikit-learn. To predict csTBI based on the predictors, we applied supervised ML (sML) using a program written in Python. The decision tree method was used for classification of the children. The performance of the algorithm was assessed by calculating accuracy and precision. The data were divided into two sets, a training set and a test set; the training set accounted for 80% of the total data. The performance of the predictive models was evaluated using receiver operating characteristic (ROC) curves, specifically the area under the curve (AUC).
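A minimal scikit-learn sketch of the pipeline just described: an 80/20 split, a sweep over tree depths analogous to Fig 1a, and a depth-limited decision tree evaluated by AUC on held-out data. The feature columns and csTBI prevalence below are synthetic stand-ins, not the study dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
# Stand-in data: 1100 children with a few categorized predictors plus days
# of life; the real feature matrix would come from the chart-review data.
X = np.column_stack([rng.integers(0, 4, 1100),      # e.g. scalp hematoma class
                     rng.integers(0, 2, 1100),      # e.g. palpable skull fracture
                     rng.integers(0, 730, 1100)])   # days of life
y = rng.binomial(1, 0.05, 1100)                     # ~5% csTBI prevalence

# 80/20 split as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Depth sweep: fit one tree per candidate depth and track test AUC
for depth in (2, 3, 10):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    auc = roc_auc_score(y_test, tree.predict_proba(X_test)[:, 1])
    print(f"max_depth={depth:2d}  test AUC={auc:.3f}")

# Final model at the depth chosen by the sweep (3 in the paper)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```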
To investigate the risk of mTBI (csTBI) at specific days of life, the outcome of mTBI (csTBI) was plotted against days of life. This study complies with the standards of the Declaration of Helsinki and current ethical guidelines. The study was approved by the institutional ethics board and the IRB. Verbal consent was obtained from the caregivers for use of the data.

Results

Table 2 shows the demographic characteristics of the control, mTBI on CT, ciTBI, and csTBI groups. The female ratio and days of life in the control group were significantly higher than in the mTBI on CT, ciTBI, and csTBI groups, respectively (Tables 3, 4). The ratio of CT obtained across all children was 26.0%; the rates in each group were 21.9%, 100%, and 100% in control, mTBI on CT, and ciTBI, respectively.

Group comparison

In the two-group comparison between control and csTBI, statistical significance was observed for all non-parametric predictors except severity of the injury mechanism (Table 3). Table 4 also shows the results of the three-group comparisons for all parametric and non-parametric predictors. Based on these results, the predictors were divided into four classes (Table 5).

Prediction with machine learning

Supervised ML with a decision tree was applied to classify the children into two classes: control children who did not need a CT scan and children with csTBI who needed a CT scan. Fig 1a shows the relationship between the maximum depth (max depth) of the tree and the area under the curve (AUC), revealing that the test data reached a peak AUC at a depth of three, followed by a decreasing AUC. We therefore created an ML algorithm with this constraint and achieved an accuracy of 0.95 (Fig 1b). Fig 1c shows the relationship between the false positive rate (fpr) and the true positive rate (tpr) for max depths of 2, 3, and 10. At a max depth of 3, the accuracy on the training and test data was 0.961 and 0.955, respectively (Table 6). A comparison of the actual and predicted data showed that accuracy, precision, and F1 score were all 0.95. The AUC was 0.85 at max depth 3.

Discussion

This study addressed two issues regarding the need for CT scans in children with minor head trauma. First, the statistical evaluation of the predictors presented in the PECARN study [26] showed significant differences between control and csTBI, mTBI on CT, and ciTBI, respectively. Second, the study showed that sML can predict the necessity of a head CT with high accuracy in children with mTBI. This study also elucidated the importance of each predictor, especially days of life. Table 2 shows the demographic characteristics of children in the control, mTBI on CT, ciTBI, and csTBI groups. In the two-group comparison between control and csTBI, there were statistical differences in days of life, although gender showed no difference at P = 0.05 (Table 3). In the three-group comparison, the control group had significantly more days of life than the mTBI on CT and ciTBI groups (Table 4), while there was no difference in days of life between mTBI on CT and ciTBI, or in gender. The CT acquisition rate in this study was 26% of all children, lower than the 35% reported in the PECARN study [26].
Meanwhile, CT acquisition in children with mTBI on CT and with ciTBI was 100% in both groups. These findings were better than expected [27,34-36].

Comparison of the non-parametric predictors

Comparison of the non-parametric predictors between the two groups showed that all predictors except severity of the injury mechanism differed significantly between control and csTBI (Table 3). This study thus confirmed that most of the predictors in the PECARN study are important for identifying children with csTBI. Moreover, the non-parametric predictors could be subdivided into four classes to discriminate between the three groups of children: control, mTBI on CT, and ciTBI (Tables 4, 5). Gender and severity of the injury mechanism were classified as class I, neither of which showed significance in comparisons between any two of the three groups (Table 5). Class II included days of life, history of vomiting, frequency of vomiting, and palpable skull fracture, which were found to be predictors distinguishing children with mTBI on CT and with ciTBI from control children. Conversely, the class II predictors could not discriminate between children with mTBI on CT and those with ciTBI. In addition, history of LOC, LOC duration, and scalp hematoma were classified as class III and showed significance between control and ciTBI and between mTBI on CT and ciTBI, but not between control and mTBI on CT. Taken together, the class II predictors can identify children with csTBI but cannot indicate the severity of the head injury, whereas class III predictors may identify the more severe types of traumatic brain injury. All of the class IV predictors, relating to consciousness, were significant in all of the two-group comparisons among the three groups. In other words, the results suggest that predictors related to consciousness are important when considering the need for CT scans in children with head trauma. The PECARN study showed that six predictors were important: altered mental status, scalp hematoma, LOC, mechanism of injury, palpable skull fracture, and acting normally per parent. In particular, altered mental status and palpable skull fracture were associated with a higher risk of ciTBI. The suggested CT algorithm for children younger than 2 years indicated that a GCS of 14, altered mental status, or a palpable skull fracture were the first predictors for identifying children who require a CT scan [26]. These were classified as class II and III in this study, suggesting that our results are compatible with those of the PECARN study. In the second branch of the PECARN algorithm, scalp hematomas other than frontal, a history of LOC longer than 5 seconds, a severe injury mechanism, and acting abnormally per parent were predictors used to exclude children for whom CT is not recommended. These predictors were classified as class III and IV, except for the severe injury mechanism. This suggests that children with minor head trauma requiring CT scans may be identified by a combination of class II and IV or class III and IV predictors [37-39]. To the best of our knowledge, this is the first indication of the role each predictor fulfills. The injury mechanism has previously been identified as an independent predictor of TBI [24,26,27,34,40]. Mechanisms associated with an increased risk of TBI in children after blunt injury include high-speed motor vehicle accidents, bicycle-related injury, impact from a high-speed projectile, and falls from a height or down stairs [27,34,41]. Nigrovic et al.
concluded that children with an isolated severe injury mechanism are at low risk of ciTBI, and that many do not require emergent neuroimaging [42].

Prediction of the necessity of a head CT with sML

With sML using the decision tree method, children with csTBI could be successfully identified from controls with a prediction accuracy of 95% (Fig 1b). Fig 1d illustrates the importance of the predictors in the decision tree, revealing that days of life was the most important, followed by palpable skull fracture and scalp hematoma. On the other hand, GCS and signs of basilar skull fracture showed less importance in this decision tree. Because decision trees are powerful and popular prediction methods, this study applied sML with the decision tree method. The final decision tree is well suited for operational use because it can explain precisely why a particular prediction was made. Decision tree algorithms are known to overfit the training set; it is therefore critical to report the performance of the training and test sets separately, along with information on parameter tuning of the algorithm, such as grid search [43]. The prediction accuracy and AUC were maximized at a maximum depth of 3 when creating the sML algorithm for two-class classification in this study (Fig 1a and 1c); under these conditions, the training and test sets achieved high accuracies of 96.1% and 95.5%, respectively. Accuracy, precision, and F1 score were each 0.95, which also indicates the effectiveness of the algorithm. We also attempted to use sML to distinguish children with mTBI on CT or ciTBI from controls. Fig 2a shows that a decision tree could be created with sML, with a prediction accuracy of 95% at a max depth of 7. The area under the ROC curve for mTBI on CT was 0.85 (Fig 2c), while those for control and ciTBI were moderately high. In the analysis of the contribution of each predictor to the decision tree, days of life was the most significant for identifying the children in each class (Figs 1d and 2c). Furthermore, days of life appeared with different cutoff values in many branches (Figs 1b and 2a). These findings suggest that days of life may be the most important factor in deciding whether to obtain CT scans for head trauma in children younger than 2 years of age, and that days of life could be used instead of age in general clinical decision rules. Days of life was employed in this study because we believe timing is an important characteristic when small children are the subject of clinical research. For example, a child who is 364 days old counts as 0 years old and a child who is 365 days old as 1 year old, yet it is natural to assume there is no significant difference between them in terms of development and growth. In addition, Figs 1d and 2c reveal the importance of predictors such as scalp hematoma, palpable skull fracture, and altered mental status. These predictors were also key factors for identifying children requiring a CT scan in the PECARN algorithm. In the PECARN study, the prediction rule comprising normal mental status, no scalp hematoma except frontal, no LOC or LOC for less than 5 seconds, a non-severe injury mechanism, no palpable skull fracture, and acting normally per caregivers had a negative predictive value of 100% and a sensitivity of 100% [26].
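The per-predictor contributions shown in Fig 1d correspond to scikit-learn's impurity-based importances; a sketch continuing the earlier listing, where the feature names are the hypothetical stand-in columns, not the study's full predictor set:

```python
# Per-predictor contributions analogous to Fig 1d, from the fitted tree
# (clf) in the earlier sketch; names match the stand-in feature columns.
feature_names = ["scalp_hematoma", "palpable_fracture", "days_of_life"]
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>20s}: {imp:.3f}")
```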
To the best of our knowledge, this is the first analysis to demonstrate the feasibility of sML for identifying children with csTBI from controls, and the significance of each predictor, especially days of life. However, Fig 3 did not show a characteristic relationship between the risk of mTBI and days of life. This study indicates that sML can be used to predict the necessity of a head CT in childhood mTBI. Although AI-based systems are powerful technologies [44-51], they should not replace the clinical judgment of physicians and medical teams [29-33]. The ideal role of these systems is as a data-driven input to the surgical decision-making process, designed to solve focused problems such as predicting the risk of mTBI, as in this study.

Limitations of this study

Regarding demographic characteristics, statistical differences were found between the control group and children with csTBI in the two- and three-group comparisons, particularly concerning days of life. This may affect the interpretation of the results; future studies may need better demographic controls. CT scans were not performed on all children because we could not ethically justify exposing all children to radiation. As with other decision support tools, these methods provide information to physicians and do not replace their decision-making [52]. In this study, the decision tree method was applied to create the sML algorithm; further studies using Random Forest, CatBoost, LightGBM, and similar methods may be required for more precise analysis [53,54]. Also, only the parameters identified in the PECARN study were included; other parameters should be added to the feature set to obtain further performance benefits. Since the purpose of this study was to determine the feasibility of sML for the problem of CT scanning in children with minor head trauma, strict procedures such as under-sampling and bagging were not applied to resolve class imbalance. This issue should be addressed in future studies.

Conclusion

This study clarified two issues regarding the need for CT scans in children with minor head trauma. First, the evaluation of the PECARN predictors showed significant differences between control and csTBI, mTBI on CT, and ciTBI, respectively. Second, the study showed that ML can predict the necessity of a head CT with high accuracy in children with mTBI, and it elucidated the importance of each predictor, especially days of life. These results are substantial for ER physicians, who must balance radiation exposure against the risk of missing serious head trauma when deciding whether a child with minor head trauma needs a CT scan.
Inhibition of diphenol oxidase activity of strawberry (Fragaria sp) using L-cysteine and L-glycine

L-glycine and L-cysteine exhibit strong inhibition of partially purified diphenol oxidase from strawberry. The concentration of L-glycine inhibiting diphenol oxidase activity by 50% (IC50) was 0.5 and 0.4 mM at pH 6.7 and 8, respectively. The inhibition of partially purified diphenol oxidase activity is pH- and inhibitor-dependent. Kinetic studies indicate that L-glycine is an uncompetitive inhibitor, while L-cysteine is a competitive and a noncompetitive inhibitor of diphenol oxidase activity. Vmax and Km for catechol oxidation at pH 6.7 in the presence of L-glycine (1.4 M) were 0.09 ΔA min-1 and 10 mM. Vmax for catechol oxidation at pH 8 in the absence of L-glycine was 0.09 ΔA min-1, with a Km of 3.5 mM. Kinetic parameters indicated the highest catalytic efficiency (units mg-1 prot mM-1) with catechol and L-glycine at pH 8 (4), followed by L-cysteine at pH 8 (1.7), L-glycine at pH 6.7 (1.4), and L-cysteine at pH 6.7 (0.25).

INTRODUCTION

Many vegetables and fruits become discolored during storage or processing, an action mediated by the enzyme polyphenol oxidase [Broothaerts et al., 2000]. Diphenol oxidase (tyrosinase, EC 1.14.18.1) is a copper-containing enzyme that is widespread in plants, synthesized early in tissue development and stored in chloroplasts [Van Gelder et al., 1997]. The enzyme is widely distributed in a multitude of organisms, from bacteria to mammals [Robb, 1984]. Enzymatic browning is the main function of polyphenol oxidase in fruits and vegetables, and it is often undesirable and responsible for unpleasant sensory qualities and reduced nutrient quality [Sanchez-Amat et al., 1997]. When cell membrane integrity is disrupted, phenolic substrates encounter the enzyme and are converted to o-quinones in a two-step process: hydroxylation of monophenols to diphenols (monophenolase activity), followed by oxidation of diphenols to o-quinones (diphenol oxidase activity) [Espin et al., 1998]. Diphenol oxidase has been implicated in pigment formation, oxygen scavenging [Trebst et al., 1995], defense mechanisms against plant pathogens [Mohammadi et al., 2002], and defense against herbivorous insects [Constabel et al., 2000]. Phenolic compounds serve as precursors in the formation of physical polyphenolic barriers, limiting pathogen translocation. The quinones formed by diphenol oxidases can bind plant proteins, reducing protein digestibility and their nutritive value to herbivores [Ryan, 2000]. On the other hand, the oxidation of phenolic substrates by diphenol oxidase is thought to be the major cause of the brown coloration of many fruits and vegetables during ripening, handling, storage, and processing.

Enzyme Extraction and Ion Exchange Chromatography

500 grams of strawberry were homogenized in 250 mL of 0.1 M phosphate buffer (pH 6.8) containing 10 mM ascorbic acid and 0.5% polyvinylpyrrolidone with the aid of a magnetic stirrer for 1 h. The crude extract was centrifuged at 30000 g for 20 min at 4 °C. Solid ammonium sulphate ((NH4)2SO4) was added to the supernatant to reach 30% and then 80% saturation. After 1 h, the precipitated proteins at each stage were separated by centrifugation at 30000 g for 30 min. The precipitate was redissolved in a small volume of distilled water and dialyzed at 4 °C against distilled water for 24 h, with 4 changes of water during dialysis.
The dialysate was applied to a column (2.5 cm x 30 cm) filled with DEAE-cellulose, equilibrated with 10 mM phosphate buffer, pH 6.8. To remove non-adsorbed fractions, the column was washed with 200 mL of the same buffer at a flow rate of 0.5 mL/min. Then a linear gradient of phosphate buffer concentration from 20 to 180 mM was applied. Fractions of 5 mL were collected, in which the protein level and the diphenol oxidase activity towards catechol as substrate were monitored. The fractions showing diphenol oxidase activity were combined and used as the enzyme source in the following experiments. Protein concentration was measured by the Lowry method [Lowry et al., 1951].

Diphenol oxidase assay

Enzymatic activity was determined by measuring the increase in absorbance at 420 nm for catechol with a spectrophotometer (6305 JENWAY). The sample cuvette contained 3 mL of the substrate catechol at constant concentration, in the presence of different concentrations of L-glycine or L-cysteine, prepared in phosphate buffer. Assays were started by adding 200 µL of extract to the sample cuvette, and changes in absorbance at 420 nm were recorded. The reference cuvette contained just 3 mL of substrate solution. Polyphenol oxidase activity was determined by measuring the amount of quinone produced, using an extinction coefficient of 2450 M-1 cm-1 for catechol. Enzyme activity was calculated from the linear portion of the curve. One unit of diphenol oxidase activity was defined as the amount of enzyme that produces 1 micromole of quinone per minute. Assays were carried out at room temperature, and results are the averages of at least three assays.

Inhibition of diphenol oxidase activity by L-glycine concentration and pH

Inhibition of diphenol oxidase activity was tested in a disposable cuvette containing 3 mL of the standard reaction mixture. The concentration of L-glycine was 0, 0.2, 0.4, 0.6, 0.9, 1.2, 1.4, 1.8, or 2 M in a phosphate-buffered reaction mixture at pH 6.7 or 8, and diphenol oxidase activity for the oxidation of catechol at a final concentration of 15 and 45 mM was determined at pH 6.7 and 8, respectively. The corresponding concentrations of L-cysteine were 0, 0.2, 0.4, 0.8, 1, 2, 3, 5, 8, and 16 M in a phosphate-buffered reaction mixture at pH 6.7 or 8.

Inhibition of diphenol oxidase activity by L-cysteine concentration and pH

Inhibition of diphenol oxidase activity was tested in a disposable cuvette containing 3 mL of the standard reaction mixture. The final concentration of L-cysteine was 0.35 or 1.2 M in a phosphate-buffered reaction mixture at pH 6.7, and 0.33 or 1.2 M at pH 8, and diphenol oxidase activity for the oxidation of catechol at a final concentration of 15 and 45 mM was determined at pH 6.7 and 8, respectively. The reaction mixture and activity assay were the same as in the standard reaction. The inhibition kinetics of L-glycine and L-cysteine on diphenol oxidase activity were determined from Lineweaver-Burk plots [Marangoni, 2002].

Preincubation of L-glycine with diphenol oxidase or catechol

Preincubation of L-glycine with diphenol oxidase was performed by mixing a series of L-glycine solutions (0.4, 0.7, and 1.2 mM), prepared in 0.1 M phosphate buffer (pH 8), with the diphenol oxidase extract in a cuvette held at 25 °C for 0, 1, 2, 4, or 5 min. The reaction was initiated by adding 45 mM catechol to the L-glycine and diphenol oxidase mixture after the tested incubation time.
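Throughout these experiments, rates are read as ΔA min-1 at 420 nm; converting such a slope into the enzyme units defined in the assay section is a one-line application of the Beer-Lambert law. A small Python sketch, assuming a 1-cm path length and a 3.2-mL reaction volume (3 mL substrate plus 200 µL extract):

```python
EPSILON = 2450.0   # M^-1 cm^-1, extinction coefficient used in the paper
PATH_CM = 1.0      # cuvette path length, assumed 1 cm
VOL_L = 3.2e-3     # reaction volume: 3 mL substrate + 200 uL extract

def diphenol_oxidase_units(dA_per_min: float) -> float:
    """Convert a linear absorbance slope at 420 nm into enzyme units
    (1 unit = 1 umol quinone per minute), via the Beer-Lambert law."""
    conc_M_per_min = dA_per_min / (EPSILON * PATH_CM)   # mol L^-1 min^-1
    return conc_M_per_min * VOL_L * 1e6                 # umol min^-1

# e.g. a slope of 0.25 dA/min corresponds to ~0.33 units
print(diphenol_oxidase_units(0.25))
```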
For the preincubation study between L-glycine and catechol, 1 M L-glycine and 15 mM catechol were mixed and held at 25 °C for 5 min. The reaction was initiated by adding diphenol oxidase to the mixture, and the diphenol oxidase activity was determined following the same procedure as described above.

Preincubation of L-cysteine with diphenol oxidase or catechol

Preincubation of L-cysteine with diphenol oxidase was performed by mixing a series of L-cysteine solutions (0.4, 0.7, and 1 M), prepared in 0.1 M phosphate buffer (pH 6.7), with the diphenol oxidase extract in a cuvette held at 25 °C for 0, 1, 2, 4, or 5 min. The reaction was initiated by adding 15 mM catechol to the L-cysteine and diphenol oxidase mixture after the tested incubation time. For the preincubation study between L-cysteine and catechol, 5 M L-cysteine and 45 mM catechol were mixed and held at 25 °C for 5 min. The reaction was initiated by adding diphenol oxidase to the mixture, and the diphenol oxidase activity was determined following the same procedure as described above.

Effect of L-glycine and L-cysteine on diphenol oxidase activity in strawberry extract

L-glycine and L-cysteine inhibited the diphenol oxidase activity detectable with catechol as substrate. The concentration of L-glycine inhibiting diphenol oxidase activity by 50% (IC50) was 0.5 and 0.4 mM at pH 6.7 and 8, respectively (Figure 1). The IC50 for L-cysteine was 0.4 at pH 6.7 and 0.5 at pH 8 (Table 1).

Inhibition kinetics of L-glycine on diphenol oxidase activity at pH 6.7 and 8

Inhibition of diphenol oxidase by L-glycine was determined in the presence of different concentrations of L-glycine for three fixed concentrations of catechol at pH 6.7 and pH 8 (Figure 2). Lineweaver-Burk plots used to analyze the inhibition kinetics show that the extrapolated lines of 1/V versus 1/[catechol] are parallel and do not intersect near or on the y- or x-axis, indicating that L-glycine is an uncompetitive inhibitor. As an uncompetitive inhibitor, L-glycine is thought to bind the diphenol oxidase-catechol complex and not the free enzyme. The effect of L-glycine is to decrease both Vmax and Km. A lower Km corresponds to a higher affinity, so the presence of L-glycine as an uncompetitive inhibitor increases the apparent affinity of the enzyme for catechol. A 5-min preincubation of diphenol oxidase with 0.5 M L-glycine resulted in a 35% loss of diphenol oxidase activity compared to the control. Interestingly, preincubation of L-glycine with catechol for 5 min resulted in no additional loss of activity compared to that without incubation (Figure 3). This finding suggests that L-glycine inhibits diphenol oxidase activity by acting directly on the diphenol oxidase-substrate complex rather than on the free enzyme.

Inhibition kinetics of L-cysteine on diphenol oxidase activity at pH 8

Inhibition of diphenol oxidase by L-cysteine was determined in the presence of different concentrations of L-cysteine for three fixed concentrations of catechol at pH 8.0. Lineweaver-Burk plots show that the extrapolated lines of 1/V versus 1/[catechol] intersect on the y-axis, indicating that L-cysteine is a competitive inhibitor at this pH. As a competitive inhibitor, L-cysteine can bind at the active site of the enzyme to form a diphenol oxidase-L-cysteine complex. L-cysteine blocks the active site, and catechol as substrate cannot bind until the inhibitor dissociates.
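The graphical diagnostics invoked in these kinetics sections follow from the reciprocal (Lineweaver-Burk) forms of the inhibited Michaelis-Menten equation. In standard notation (textbook enzymology, not derived in the paper), with [I] the inhibitor concentration and K_i (or K_i') the inhibition constant:

```latex
% Competitive inhibition (lines meet on the 1/v axis):
\frac{1}{v} = \frac{K_m}{V_{\max}}\Big(1+\frac{[I]}{K_i}\Big)\frac{1}{[S]} + \frac{1}{V_{\max}}
% Uncompetitive inhibition (parallel lines; apparent K_m and V_max both decrease):
\frac{1}{v} = \frac{K_m}{V_{\max}}\,\frac{1}{[S]} + \frac{1}{V_{\max}}\Big(1+\frac{[I]}{K_i'}\Big)
% Noncompetitive inhibition (lines meet on the 1/[S] axis at -1/K_m):
\frac{1}{v} = \frac{K_m}{V_{\max}}\Big(1+\frac{[I]}{K_i}\Big)\frac{1}{[S]} + \frac{1}{V_{\max}}\Big(1+\frac{[I]}{K_i}\Big)
```

The uncompetitive form explains the L-glycine observation above (both apparent Vmax and Km fall), while the competitive and noncompetitive forms match the y-axis and x-axis intersection patterns reported for L-cysteine at pH 8 and pH 6.7, respectively.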
Since L-cysteine and catechol compete for the same site, raising the catechol concentration can eventually overcome the L-cysteine and Vmax can be achieved, but L-cysteine raises Km, indicating that the affinity of diphenol oxidase for catechol is lower in the presence of L-cysteine. To further investigate whether the inhibition of diphenol oxidase activity by L-cysteine is attributable to the inhibitor's effect on the enzyme, the substrate, or both, preincubation of L-cysteine with diphenol oxidase or catechol was carried out before the inhibition reaction was started.

Figure 3. Effects of preincubation of glycine with diphenol oxidase or catechol on the inhibition of strawberry diphenol oxidase activity at pH 6.7. Diphenol oxidase activity for the oxidation of catechol was determined in a standard reaction mixture buffered with 0.1 M phosphate buffer, after 5-min preincubation of either diphenol oxidase or catechol (15 mM final concentration) with 0.5 M glycine. Activities are expressed as percent activity relative to that determined without glycine or preincubation: no glycine or preincubation (A); 0.5 M glycine, no preincubation (B); preincubation of glycine with catechol (C); preincubation of glycine with diphenol oxidase (D). The vertical bars represent the standard errors of three replicates.

A 5-min preincubation of diphenol oxidase with 1 M L-cysteine resulted in a 60% loss of diphenol oxidase activity compared to the control without inhibitor (figure not shown). Interestingly, preincubation of cysteine with catechol for 5 min resulted in no additional loss of activity compared to that without incubation. This finding suggests that L-cysteine inhibits diphenol oxidase activity by acting directly on the enzyme rather than on the substrate.

Inhibition kinetics of L-cysteine on diphenol oxidase activity at pH 6.7

Inhibition of diphenol oxidase by L-cysteine was determined in the presence of different concentrations of L-cysteine for three fixed concentrations of catechol at pH 6.7 (not shown). Lineweaver-Burk plots show that the extrapolated lines of 1/V versus 1/[catechol] intersect on the x-axis, indicating that L-cysteine is a noncompetitive inhibitor at this pH. As a noncompetitive inhibitor, L-cysteine can bind at an allosteric site on the diphenol oxidase and leave the active site unblocked; catechol as substrate has an identical affinity for both the L-cysteine-diphenol oxidase complex and the free diphenol oxidase. In the presence of L-cysteine as a noncompetitive inhibitor, the Km value is unchanged (100 mM), while Vmax decreases from 0.2 to 0.09 ΔA min-1. A 5-min preincubation of diphenol oxidase with 1 M L-cysteine resulted in a 70% loss of diphenol oxidase activity compared to the control. Interestingly, preincubation of cysteine with catechol for 5 min resulted in no additional loss of activity compared to that without incubation (Figure 4). This finding again suggests that L-cysteine inhibits diphenol oxidase activity by acting directly on the enzyme rather than on the substrate.
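The Km and Vmax values reported in the next section come from fits of the Michaelis-Menten model to initial-rate data; a sketch of such a fit with scipy, where the rates are synthetic stand-ins for a catechol dilution series, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax*[S]/(Km + [S])."""
    return vmax * s / (km + s)

# Synthetic rates (dA/min at 420 nm) for a catechol series, with 3% noise
s = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # catechol, mM
v = michaelis_menten(s, 0.25, 25.0) * np.random.default_rng(1).normal(1, 0.03, s.size)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(0.2, 20.0))
print(f"Vmax ~ {vmax:.3f} dA/min, Km ~ {km:.1f} mM")
```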
Kinetic parameters of diphenol oxidase activity in strawberry extract in the presence of inhibitors

The Michaelis-Menten constant (Km) and maximum rate (Vmax) values for diphenol oxidase activity in strawberry extract were determined by performing activity assays at pH 6.7 and pH 8 on extract aliquots, using various concentrations of catechol as substrate and various concentrations of L-glycine or L-cysteine as inhibitors. The rate of catechol oxidation to its corresponding o-quinone was measured by monitoring the absorbance increase at 420 nm in a 3-ml reaction mixture containing 0.75 mg extract protein. The maximum rate (Vmax) for catechol oxidation at pH 6.7 in the absence of L-glycine was 0.25 ΔA min⁻¹, with a Km of 25 mM; the catalytic efficiency calculated per milligram protein in the extract was 1.8 units mg⁻¹ prot mM⁻¹ (Table 2). In the presence of 1.4 M L-glycine at pH 6.7, Vmax and Km were 0.09 ΔA min⁻¹ and 10 mM, and the catalytic efficiency decreased to 1.3 units mg⁻¹ prot mM⁻¹. At pH 8 in the absence of L-glycine, Vmax was 0.09 ΔA min⁻¹ with a Km of 3.5 mM; in the presence of 1.25 M L-glycine, Vmax decreased to 0.07 ΔA min⁻¹ and Km decreased to 3.1 mM, and the catalytic efficiency at pH 8 decreased from 4.6 to 3.8 units mg⁻¹ prot mM⁻¹. The data in Table 2 show that the catalytic efficiency for catechol oxidation decreased in the presence of L-glycine and L-cysteine at both pH 6.7 and pH 8.

CONCLUSION

This study demonstrates that L-glycine and L-cysteine exhibit strong inhibition of strawberry diphenol oxidase activity. The inhibition of diphenol oxidase activity is pH- and inhibitor-dependent. Kinetic studies via Lineweaver-Burk plots indicate that L-glycine is an uncompetitive inhibitor, while L-cysteine is a competitive (pH 8) and noncompetitive (pH 6.7) inhibitor of the partially purified diphenol oxidase. As reported for other plants [Ho K-K., 1999; Escribano et al., 2002], multiple isoforms of diphenol oxidase were detected in saffron, so we can conclude that diphenol oxidase in strawberry (Crataegus spp.) may have two isoforms, given its different kinetic properties at pH 6.7 and pH 8.
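The Km and Vmax determination described above can also be reproduced by direct nonlinear fitting of the Michaelis-Menten equation to initial-rate data, which avoids the weighting distortions of double-reciprocal linearization. The sketch below uses synthetic rate values generated to be consistent with the reported pH 6.7 parameters (Vmax = 0.25 ΔA min⁻¹, Km = 25 mM); the protein-normalized efficiency printed at the end is shown only for illustration, and its unit convention may differ from the "units" used in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(S, Vmax, Km):
    # Michaelis-Menten initial-rate law
    return Vmax * S / (Km + S)

# Hypothetical initial rates (dA420/min) at several catechol
# concentrations; replace with rates measured in the assay.
S = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # mM
v = np.array([0.042, 0.071, 0.125, 0.167, 0.200, 0.222])

(Vmax, Km), _ = curve_fit(mm, S, v, p0=[0.25, 25.0])
protein_mg = 0.75  # mg protein in the 3-ml reaction mixture
print(f"Vmax = {Vmax:.3f} dA/min, Km = {Km:.1f} mM")
print(f"Vmax/Km per mg protein = {Vmax / Km / protein_mg:.4f} dA/min/mM/mg")
```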
2019-03-28T13:43:16.623Z
2015-03-25T00:00:00.000
{ "year": 2015, "sha1": "d0cadc9297c525567699acade9d5e90a2f3fd14f", "oa_license": "CCBY", "oa_url": "https://www.academicoa.com/ILCPA.48.194.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "93432aeec796c15c4253c9405c2801b3e4ae7c84", "s2fieldsofstudy": [ "Chemistry", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
41704547
pes2o/s2orc
v3-fos-license
Optimization of Tuberosity Healing in Prosthetic Reconstruction of Proximal Humerus Fractures

Introduction

Achieving successful clinical outcomes after hemiarthroplasty for 4-part proximal humerus fractures remains a sobering challenge for even the experienced shoulder reconstruction surgeon or traumatologist. Despite what appears to be secure tuberosity fixation at the time of wound closure, serial postoperative radiographs often reveal progressive displacement and/or resorption of the greater tuberosity. [1-3] This results in a situation akin to a posterior-superior rotator cuff tear, where most patients cannot generate sufficient cuff strength to stabilize the humeral head against the superior pull of the deltoid. Secondary mechanical consequences, including shoulder weakness, superior instability and trapezial substitution, can compromise outcomes both in terms of shoulder function and pain in such circumstances. [4, 5] Stiffness and cuff dysfunction frequently render the functional results only fair, and many patients must accept a limited-goals end result. [6-8]

It is well established that restoration of shoulder function after hemiarthroplasty for fracture depends on successful tuberosity healing in combination with proper reconstruction of the head-tuberosity and head-shaft relationships. [1, 7-9] In the native proximal humerus, the edge of the articular cartilage of the superior head is directly adjacent to the cuff insertion and the two are nearly confluent. The dome of the head is about 5-8 mm above the supraspinatus footprint. Restoring this confluence between the cuff insertion and the prosthetic head, while maintaining appropriate tuberosity offset relative to the center of rotation, is essential for proper cuff mechanics. Restoring proper head height, medial offset, posterior offset and retrotorsion is also critical to achieving the soft tissue balance that will provide both strength and stability.
Despite the introduction of fracture-specific prostheses, translating successful anatomical reconstruction into shoulder function is not guaranteed by the theoretical solutions these newer designs propose for complex fracture treatment. Realistically, outcomes after hemiarthroplasty for fracture are a blend of appropriate prosthesis selection and use, optimal management of tuberosity fixation, respect for the biology of fracture healing, and application of an appropriate rehabilitation protocol that does not jeopardize these other aims. What follows is a discussion of principles for optimizing tuberosity reduction, fixation and healing using horizontal cable cerclage in combination with a press-fit, porous-coated, fracture-specific prosthesis. This technique can be applied in the setting of hemiarthroplasty or reverse shoulder arthroplasty for fracture.

Why does failure of tuberosity healing occur?

As with fractures in other bones, successful union of the tuberosities after humeral hemiarthroplasty requires an optimal biological and mechanical environment for bone healing. Failure occurs for several potential reasons, alone or in combination. Firstly, aggressive mobilization techniques during exposure may devascularize and further destabilize the tuberosities by stripping periosteal attachments. These periosteal attachments are critical to the blood supply of the greater tuberosity when the posterior circumflex humeral artery has been severed by the fracture pattern, which is generally the case when fracture severity warrants prosthetic reconstruction. Secondly, violation of the rotator interval capsule during exposure and head retrieval disrupts the remaining bridge of tissue that links the tuberosities. This further destabilizes the tuberosities by dissociating the transverse force couple that counteracts their individual deforming forces. Thirdly, thermal damage from cement may further damage the endosteal blood supply of the humerus, and cement blocks the marrow cavity and the areas where the fracture fragments might otherwise interdigitate. Fourthly, conventional suture fixation constructs often fail to achieve sufficiently rigid fixation to permit healing; poor bone quality and fracture comminution increase the likelihood of suture loosening, which occurs early in the postoperative period. Finally, prosthesis designs that do not provide an adequate template for recreation of the cortical shell of the proximal humerus, and those that do not allow direct fixation of the tuberosities to the body of the prosthesis, will invite a degree of micromotion that is not compatible with fracture union.
Features of the EPOCA prosthesis

For fracture hemiarthroplasty, the author prefers the EPOCA Shoulder System (Synthes, Westchester, PA). The EPOCA shoulder prosthesis has several features that make it an ideal choice for use in reconstructing proximal humerus fractures. The design of the humeral prosthesis is based on extensive anatomical studies with the goal of restoring the normal structural relationships between the head, tuberosities and shaft. [10] The rationale behind the design of the EPOCA system is that aspects of the proximal humeral anatomy that are highly variable across the population should be adjustable, while those aspects with minimal variation should be standardized. Features with high variation include head radius, size of the humeral medullary canal, medial offset and tuberosity offset. Features with low variation include neck-shaft angle and the ratio of head height to radius. To this end, the system offers 5 stem sizes (6-14 mm in 2 mm increments) and 10 head diameters (40-58 mm in 2 mm increments). There are also standard (115 mm) and long (215 mm) stem lengths. Independent adjustment of medial and posterior offset can be achieved by a dual eccentricity (Eccenter) that allows the head to be placed in an infinite number of X-Y positions within a 6 mm orbit relative to the humeral component. This ensures precise reconstruction of the proximal humeral anatomy and center of rotation.

The EPOCA stem comes in both a press-fit and a cemented option (Figure 1). The former has porous coating on the proximal half, the roughness of which may help promote tuberosity adherence and security. The tapered wedge geometry has a prominent calcar design that helps the stem self-center, self-rotate and self-lock as it is inserted. Thus, even in a fracture situation, a press-fit stem can be used and achieve excellent stem stability without the need for cement fixation.

The proximal body of the stem has both a medial and a lateral hole through which cables can be passed for tuberosity cerclage (Figure 1). This construct improves rotational stability of the cerclage fixation compared to cables or sutures passed around the calcar section of the prosthesis. In the latter case, the fixation is not directly linked to the stem, so the tuberosities can still move independently of the prosthesis when the arm is rotated about the axis of the humerus. By passing fixation through the stem of the prosthesis, the tuberosities are compressed directly to the stem so that the construct rotates as a single unit during arm rotation. The improved stability of this fixation obviates the need for multiple other sutures, specifically vertical sutures between the shaft and the bone-tendon junction that tend to result in the common mistake of tuberosity over-reduction.
Preoperative planning

When the decision to operate has been made, the surgeon needs to consider a variety of factors in deciding the best method of treatment for the given fracture pattern. Aspects of the patient's medical and social history are important to consider. The following patient factors may bear on the decision to attempt fixation versus prosthetic replacement: age, hand dominance, physical demands, expectations, compliance, smoking history, and medical comorbidity.

It is essential when assessing these fractures to have a thorough understanding of the fracture "personality," and this usually demands a CT scan with coronal and sagittal reconstructions that allow for 3-D rendering. Such imaging can be invaluable in determining the location and degree of comminution, the integrity of the articular surface, the exact relationship between the head, tuberosity and shaft, as well as prognostic indicators of head vascularity. In the latter case, the length of the medial metaphyseal extension and the displacement of the medial periosteal hinge are most predictive of head perfusion (Figure 2). This collective information can help the surgeon determine whether fixation is both warranted and feasible. Feasibility depends on factors such as bone quality and comminution, fracture complexity, availability of the necessary implants and surgeon skill. If stable, anatomical fixation is not possible, then prosthetic replacement is warranted. While reverse shoulder arthroplasty has become increasingly popular in this setting, there remains a role for hemiarthroplasty in younger and more physically demanding patients. Of note, the technique described herein can be used for secure tuberosity fixation during reverse arthroplasty for fracture, where outcome can also be improved by successful tuberosity healing allowing restoration of active external rotation.

Surgical technique

The patient is positioned as for a shoulder arthroplasty such that the scapula is supported but the arm can be brought over the side of the bed to expose the humeral shaft. The fracture is exposed through a standard delto-pectoral approach, taking the cephalic vein laterally with the deltoid. The anterior deltoid is elevated off the coracoacromial (CA) ligament and a sharp angled lever is placed behind the ligament. This helps "roll" the deltoid laterally to expose the proximal humerus.

The clavipectoral fascia is excised en bloc from the CA ligament proximally to the pectoralis major tendon distally, and from the conjoint tendon medially to the deltoid laterally. Once this layer has been removed, the humeroscapular motion interface is accessible and adhesions in this interval can be freed using blunt dissection. One must avoid overzealous dissection to prevent stripping of any residual periosteal attachment of the tuberosities to the shaft. A curved ring retractor can then be placed beneath the deltoid and a right-angle retractor beneath the conjoint tendon.
The biceps tendon is then identified and followed proximally. It should be sutured to the pectoralis major tendon to preserve native tension and then tenotomized at the superior aspect of the bicipital groove. Because the bicipital groove and a portion of the anterior greater tuberosity usually remain attached to the lesser tuberosity fragment, it is critical to preserve the rotator interval capsule (Figure 3A). Thus, it should not be routinely divided above the transverse humeral ligament as many conventional techniques recommend (Figure 3B). Preservation of the rotator interval will help stabilize the tuberosity repair by leaving a soft tissue bridge between the anterior and posterior fragments. This helps neutralize the individual deforming forces that lead to loosening and failure of fixation. In a majority of cases there is a longitudinal split in the supraspinatus tendon where the anterior bundle remains attached to the lesser tuberosity fragment. Maintenance of this attachment is critical to maximize the potential for cuff function postoperatively. Exposure of the humeral head and glenoid can be achieved by extending the longitudinal cuff split medially. This can be repaired side-to-side at the conclusion of the case and does not jeopardize the cuff insertion to the bone. Heavy braided suture is placed through the bone-tendon interface of each of the subscapularis (SC), supraspinatus (SS) and infraspinatus (IS) tendons. It is essential when placing the posterior sutures that excessive traction is not applied, so that soft tissue attachments between the tuberosity and shaft are maintained. Overly aggressive tuberosity mobilization injures the periosteal blood supply and reduces the likelihood of eventual healing. As much as possible, the greater tuberosity should be left in situ posteriorly.
The humeral head can then be retrieved from the joint through the split in the SS tendon. The head can then be "keyed in" to the shaft to determine the location of the medial metaphyseal extension. The length of this extension is then measured, and this length represents the distance above the calcar at which the prosthetic head should sit to restore proper head height (Figure 4). This is a simple, reliable and accurate method of determining head height that can be cross-referenced with other accepted methods at the surgeon's discretion. The humeral head is then sized against the prosthetic head trials. One should typically downsize if the native head is in between trial head sizes so as not to overstuff the joint. Cancellous autograft is then harvested from the humeral head for supplemental bone grafting of the tuberosities to aid in restoration of tuberosity offset.

Prior to stem implantation, it is important to place the cerclage cables through the greater tuberosity in an inside-out fashion (Figure 5). At a level approximately 5 mm below the bone-tendon junction, a 2 mm drill bit is used to make the medial and lateral holes through the tuberosity bone. Again, care should be taken to leave the tuberosity in situ when these holes are drilled to protect soft-tissue attachments. A Synthes 1 mm needled, beadless cable is then passed through each of the holes. The cable crimp must be taken off before the cable is placed and the crimp saved on the back table so that it is not inadvertently lost. The needle can be bounced off the curved deltoid retractor and the cable retrieved on the dorsal tuberosity surface. The needle is removed, and the cables are then tagged with a hemostat and parked posteriorly for later tuberosity repair.

A distally angled Fukuda retractor is next placed behind the glenoid to inspect the joint. The root of the biceps should be excised and the glenoid articular surface checked for concomitant fracture. The labrum should be preserved to aid in stability and load distribution. Aggressive capsular releases, as would be performed during shoulder arthroplasty for degenerative disease, are not necessary in fracture reconstruction, and the temptation to perform a circumferential subscapularis release should be avoided. This would only jeopardize the anterior circumflex humeral artery, which provides vascularity to the anterior tuberosity fragment, and disrupt the important rotator interval "bridge."
The humeral shaft is then exposed by placing the arm in extension, adduction and external rotation. Two blunt Hohmann retractors, posteriorly and medially, are used to lever the shaft anteriorly. If necessary, the medullary canal is opened with the cylindrical starter rasp. Further reaming is not necessary, as the EPOCA system uses impaction broaches to prepare the canal. Starting with the smallest broach, proper stem rotation is determined by orienting the laser-etched center line of the broach with a point 8 mm posterior to the deepest point of the bicipital groove. This point has been shown to correspond to the equatorial plane of the humeral head (Figure 6A & B). [10] The broach is seated to the level that restores the head height according to the pre-determined metaphyseal extension length. Proper retrotorsion of the humeral stem can be confirmed by inserting the 6 mm rod into the broach and measuring roughly 25 degrees relative to the forearm axis with the goniometer. Progressively larger broaches are introduced until distal (diaphyseal) canal fill is achieved. A curved curette can be used to remove cancellous bone along the medial humeral metaphyseal region to help fully seat the desired broach if necessary. The pronounced calcar design allows the broach to self-center, self-align and self-lock in the proper height and orientation, obviating the need for cumbersome jigs to position the trial stem. Once the stem size has been determined, the trial stem is impacted to the proper height using light progressive taps with the mallet to prevent fracture of the shaft by the wedge-shaped stem.

In a majority of cases, an optimal fit can be achieved allowing the use of a press-fit stem. In the occasional case, one stem size is over-recessed relative to the calcar and the next size is too big for the diaphysis. In these cases, the surgeon has two choices. The first is to attempt impaction grafting of the smaller stem to the proper height using autograft from the humeral head and the smaller impaction broach. With the diaphyseal portion of the broach inserted only slightly into the canal, small croutons of bone graft can be placed circumferentially around the canal opening and progressively impacted into the metaphysis. This process can be repeated until a snug fit is achieved with the broach. In patients with severely osteoporotic bone, a stable press-fit may not be possible without undue risk of humeral shaft fracture. The second option is to cement the final prosthesis in a conventional manner. In such a case, the final chosen stem will be one size smaller than the broach and trial stem to allow for a circumferential cement mantle.

The Eccenter is then placed on the trial stem, followed by the trial humeral head. The 2.5 mm hex driver is used to dial the Eccenter with respect to the stem, while the head can be manually rotated on the Eccenter. The combined dual eccentricity of this design allows the head to be placed in an infinite number of antero-posterior (AP) and medio-lateral (ML) offset positions within a 6 mm orbit (Figure 7A). More importantly, it allows independent adjustment of the medial and posterior offset to more accurately restore the patient's native anatomy and center of rotation. Optimal medial offset is achieved by recreating the medial calcar line without step-off (Figure 7B). In the AP plane, slight posterior offset is desirable to accommodate the larger greater tuberosity and restore the native posterior offset of the humeral head relative to the humeral medullary canal. Once the head position has been chosen, the
head and Eccenter can be locked using the 2.0 mm hex driver. The trial prosthesis is then reduced into the joint to confirm a congruent, stable fit with the glenoid. After the offset number of the head is recorded, the head is removed and the offset letter of the Eccenter is then recorded so that the construct can be replicated with the final components. The final component is then assembled using the press and inserted as a monoblock. The diaphyseal portion of the stem is placed into the medullary canal. Prior to fully seating the component, the cables are passed through the medial and lateral holes from posterior to anterior (Figure 8A). The 3 mm retrotorsion bar is then used to cross-check proper rotation, and the component is then fully press-fit to the pre-determined height.

The prosthesis is then reduced into the glenoid. Two holes are then drilled into the lesser tuberosity fragment using the 2.0 mm drill. These holes should be placed slightly below the bone-tendon junction and correspond to the positions of the cables exiting the stem. A 14-gauge angiocath can then be inserted from outside to inside through these holes as a transit to shuttle the cables through the bone fragment. Prior to final tuberosity reduction, bone graft from the humeral head is packed around the stem to fill any voids and augment the often fragile cortical sleeve of the tuberosity fragments. A #2 non-absorbable suture is next used to reapproximate the longitudinal SS split. This aids in fine-tuning the tuberosity reduction.

Care must be taken not to over-reduce the tuberosities, especially distally. Rather than being pulled down and fixed to the humeral shaft with vertical sutures, the tuberosities should be pushed up to restore the native position of the superior rotator cuff insertion relative to the edge of the prosthetic head. Once this position has been optimized, the tuberosities can be securely fixed with horizontal cable cerclage (Figure 8B).
Both ends of each cable are threaded through their respective crimps, which are positioned over the bicipital groove. The cables are then spaced superiorly and inferiorly on the tuberosities. The superior cable must be placed below the bone-tendon junction so that it does not subluxate over the humeral head. To tension the beadless cable, the crimp must be stabilized on one side by either a hemostat or the accessory locking portion of the Synthes tensioner. The tensioner is then placed on the opposite side and tensioned until a firm embrace is achieved (roughly 20-30 kg). Overtensioning should be avoided to prevent deforming or crushing the fragile bone and to avoid devascularization. After crimping and cutting the cables to length, the biceps tendon can be used to cover the crimps by a soft tissue tenodesis to the cuff. Further tuberosity fixation is not necessary and usually only promotes over-reduction and devascularization. A single vertical suture, however, can be passed from the shaft around the superior cable to prevent it from slipping over the head. After copious irrigation, the wound is closed in layers over a drain, followed by a sterile compressive dressing and sling. Postoperative radiographs are obtained in the recovery room to confirm an optimal reconstruction (Figure 9).

Postoperative protocol

Active use of the arm is avoided for 6 weeks to allow tuberosity healing, but passive motion exercises must be started early to maximize postoperative function. Although some advocate no passive motion for several weeks, stiffness remains a significant problem that limits the final outcome of these procedures. Because dense adhesions form in the subacromial space and humeroscapular motion interface, nonoperative and operative treatment of postsurgical adhesive capsulitis in the presence of a prosthesis is a substantial challenge that is often marginal in its success. Codman's exercises and positional exercises, such as gentle table slides or resting the arm in an abducted position, can be started as soon as patients are comfortable. Patients are instructed to steadily increase their passive range on a self-directed basis. Formal physical therapy is often avoided in the early stages to prevent overly aggressive applied stress that might jeopardize tuberosity fixation. Serial x-rays and clinical status are checked at approximately 2, 4 and 8 weeks postoperatively. Active-assisted range of motion can be added around 6 weeks, assuming stable tuberosity fixation. Progressive active range of motion and active use can be started at 8 weeks based on radiographic evidence of tuberosity healing and patient compliance.
Discussion

Prosthetic replacement for the fractured proximal humerus follows the same biological and mechanical principles that have evolved from experience in fracture fixation in other areas. Surgeons should approach this case with the same tenets and goals as any fracture case and not abandon these principles because a prosthesis is being inserted. Preservation of soft tissue and periosteal attachments is critical to maintaining blood supply to the fracture fragments. Preservation of the endosteal blood supply and avoidance of suture strangulation are also important. Finally, fixation must be sufficiently rigid to reduce micromotion to a level that permits fracture healing. The use of horizontal cable cerclage for tuberosity fixation using the above-described technique, in combination with a press-fit, porous humeral stem, addresses each of these critical elements to optimize the chance for successful healing in these difficult cases.

Nils et al performed a meta-analysis of fracture hemiarthroplasty outcomes. Although the quality of existing reports was deemed insufficient to make formal recommendations about the role of hemiarthroplasty in the fracture setting, the authors did note that "tuberosity healing has influenced functional outcome in all series mentioning this parameter." [7] Boileau et al followed 66 patients after hemiarthroplasty for fracture and found tuberosity malposition and migration in 50% of cases, leading to unsatisfactory results including superior migration, stiffness, weakness and pain. [1] Greiner and associates found that tuberosity malposition correlates with the development of fatty infiltration of the cuff muscles, and this occurrence was significantly associated with poorer clinical outcomes in patients after hemiarthroplasty for fracture. [11] Huffman and colleagues studied the biomechanics of tuberosity malposition in 4-part fractures and determined that inferior placement (tuberosity over-reduction) has a significant negative impact on the mechanical advantage of the deltoid during shoulder abduction. [5] Taken together, these reports demonstrate that complications related to failure of tuberosity reduction and fixation are frequent, have a negative impact on normal shoulder kinematics, and result in inferior outcomes for pain and function. This fact has remained true despite advances in the development of fracture-specific prostheses, improved suture material and purportedly improved suture constructs. Borowsky and colleagues recently reported on failure modes of suture repair and found that tuberosity migration occurs early and in many cases was over 1 centimeter. [2] Given the frequency of clinical reports of tuberosity migration, it seems clear that currently accepted methods of suture repair fail to achieve a biological and mechanical environment suitable for bone healing, particularly in osteoporotic bone. Cable cerclage, on the other hand, has 4.8 times the circular embracing strength of conventional suture material and does not succumb to creep as suture material is proven to do. [12] Cables also have a proven track record in fracture fixation in long bones, such as periprosthetic fractures, and in fixation of trochanteric osteotomy in revision hip arthroplasty. Thus, their application to tuberosity fixation has a solid mechanical and clinical foundation. [13] Krause et al retrospectively compared cable fixation to nonabsorbable suture fixation and found that consistently better radiographic and functional results were achieved when cables were used with the
Epoca stem. [12]

Figure 10. Histogram of Constant Scores in a consecutive series of 56 patients s/p fracture hemiarthroplasty with cable cerclage.

The technique described above has been refined through Prof. Ralph Hertel's extensive use of this prosthesis in the fracture setting. Between 1997 and 2002, 60 patients were followed prospectively following humeral hemiarthroplasty for fracture (R. Hertel, unpublished data). The mean age was 68 years (range 39-88 years) and there were 26 males and 34 females. Four patients were lost to follow-up, leaving 56 patients available for review with an average follow-up interval of 40 months (range 12-92 months). Successful tuberosity healing was achieved in 49 patients, with displacement or resorption in 7 patients. Five patients underwent an additional operation to refix the tuberosities. A total of 31 patients achieved active forward elevation above 120 degrees. The histogram in Figure 10 demonstrates the range of Constant Scores in this series of patients. These results, while not as favorable as those achieved in arthroplasty for degenerative joint disease, do demonstrate that relatively robust shoulder function can be restored by hemiarthroplasty given tuberosity healing and successful patient rehabilitation. Stiffness remains a problem, with neither an optimal preventative strategy nor a reliably effective treatment.

Technique for the reverse prosthesis

There is growing interest in fracture reconstruction using a reverse prosthesis, which may afford better active elevation in cases where tuberosity healing is unpredictable and potentially unsuccessful. Even in these cases, however, the surgeon should attempt to achieve stable tuberosity fixation to improve the possibility of rotational movement, which aids in positioning the hand in space. Specifically, if active external rotation can be achieved through reattachment of the greater tuberosity, patients may achieve greatly improved functional outcomes with a reverse arthroplasty.

Similar to primary shoulder arthroplasty, fracture-specific systems are now available for reverse arthroplasty for fracture. As with primary systems, however, their design does not guarantee successful tuberosity fixation, and the principles outlined above still apply to reconstruction with a reverse prosthesis. In addition to the importance of sound technique that preserves the optimal biological conditions for fracture healing, tuberosity fixation with horizontal cable cerclage can also be used to achieve a stable reconstruction with a reverse prosthesis. In such cases, the technical steps are essentially identical to those outlined for primary fracture hemiarthroplasty. Figure 11 demonstrates cable cerclage of the tuberosities in a fracture reconstruction using a reverse prosthesis.
As the indication of reverse shoulder arthroplasty for fracture and fracture sequelae has gained more traction and as experience with this technique has grown, clinical studies are now available reporting the outcomes of this procedure, including comparative studies with conventional hemiarthroplasty. Boyle and colleagues compared 313 fracture hemiarthroplasty patients to 55 fracture reverse patients and found that Oxford Shoulder Scores at 5 years postoperatively were superior in the reverse group. [14] Young et al, however, were unable to demonstrate any gains in range of motion, American Shoulder and Elbow Surgeons Score or Oxford Shoulder Score in patients who underwent a reverse reconstruction compared to those who underwent hemiarthroplasty for fracture. [15] Cazeneuve et al reported on 35 patients who underwent reverse reconstruction for fracture or fracture-dislocation. [16] Complications including neurological injury, infection, instability and progressive scapular notching led to a complication rate of 24%, and stiffness was noted to be a functionally limiting problem. Bufquin et al also reported stiffness, with mean active elevation of only 97 degrees and mean active external rotation of only 30 degrees. [17] Tuberosity migration also occurred in 53% of cases. Lenarz and colleagues reported on 30 patients status post reverse arthroplasty for fracture, with mean achieved active elevation of 139 degrees and mean active external rotation of 27 degrees, with a 10% complication rate. [18] Collectively, these early outcomes are somewhat sobering relative to the anticipated advantages that reverse shoulder replacement might achieve in fracture cases. They prove the complexity of these cases and the challenges they present to the shoulder reconstruction surgeon. As design modifications continue to improve reverse systems and as experience with reverse arthroplasty in the fracture setting increases, surgeons can hopefully look forward to future advancements in our ability to provide improved functional restoration in these difficult cases. Nevertheless, strict adherence to surgical techniques that preserve the biology of fracture healing, maximize stability of fragment fixation and permit early rehabilitation to encourage recovery of function remains critical regardless of the theoretical merits of any specific system in terms of biomechanics and design.

Figure 1. The EPOCA stem comes in press-fit, porous coated (A) and smooth, cemented (B) options. Both have a tapered wedge geometry with a prominent calcar design that promotes metaphyseal fill (C). Medial and lateral holes in the proximal stem allow cerclage directly through the prosthesis rather than around its medial calcar portion. These features permit use of the press-fit stem in the setting of fracture due to the rotational stability afforded by the stem geometry.

Figure 2. Head perfusion is best assessed by the length of the medial metaphyseal extension (A) and the displacement of the medial periosteal hinge (B). If the metaphyseal extension is less than 5 mm and/or the displacement of the medial hinge is more than 5-10 mm, the head is likely ischemic.
Figure 3. In a typical 4-part fracture, the bicipital groove and the anterior-most portion of the greater tuberosity remain attached to the anterior, lesser tuberosity fragment (A). The anterior bundle of the supraspinatus tendon remains attached to this anterior fragment, separated by a longitudinal split in the tendon at the level of the fracture plane. The rotator interval capsule remains intact and should not be violated as is current convention (B).

Figure 4. The length of the medial metaphyseal extension can be used as an accurate and reproducible method of determining the height at which the prosthesis should be seated to recreate the proximal humeral anatomy. The native head can be keyed onto the shaft to determine this height (A). The prosthesis height and medial offset should be set to reproduce the native shoulder anatomy (B).

Figure 5. The Synthes beadless, needled cable should be used (A); cables should be placed prior to instrumentation of the humeral shaft, with care taken not to disrupt periosteal attachments between the tuberosities and the humeral shaft (B).

Figure 6. The equatorial plane of the humeral head bisects the edge of the articular cartilage adjacent to the rotator cuff at a point approximately 8 mm posterior to the deepest point of the bicipital groove (A); this is also true in the metaphyseal region and can be used to orient the trial stem into the proper retrotorsion. When the laser etch on the back of the stem is 8 mm posterior to the bicipital groove on the proximal aspect of the humeral shaft, the retrotorsion should measure roughly 25-30 degrees relative to the forearm axis (B).

Figure 7. The Eccenter in combination with the humeral head provides a dual offset that allows independent adjustment of the medial and posterior offset for optimal head positioning (A); the medial offset should be adjusted to recreate the normal calcar line relative to the humeral shaft (B).

Figure 8. The cables should be passed through the prosthesis from back to front before final prosthesis seating. After holes are drilled into the lesser tuberosity fragment, these cables are then passed from inside to outside through the lesser fragment (A). Once the joint is reduced and the tuberosities are situated to recreate the proper head-tuberosity relationship, the cables can be tightened and crimped to effect a horizontal cerclage directly to the prosthesis (B).

Figure 9. Postoperative AP film showing stable tuberosity reduction with anatomical reconstruction of the calcar line, head height and tuberosity height and offset.

Figure 11. AP view showing horizontal cable cerclage tuberosity fixation in reverse arthroplasty for fracture.
2017-08-15T11:32:29.585Z
2013-02-20T00:00:00.000
{ "year": 2013, "sha1": "925264545e9ea6e9c324e07d69f8ab86b87caa3c", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/42817", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "9d1c8a1b150bbcc724a325820efd0b5f20600210", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
98988330
pes2o/s2orc
v3-fos-license
Temperature induced complementary switching in titanium oxide resistive random access memory

On the way towards high memory density and computer performance, a considerable improvement in energy efficiency represents the foremost aspiration of future information technology. A complementary resistive switch consists of two antiserial resistive switching memory (RRAM) elements and allows for the construction of large passive crossbar arrays by solving the sneak-path problem, in combination with a drastic reduction of the power consumption. Here we present a titanium oxide based complementary RRAM (CRRAM) device with a Pt top and a TiN bottom electrode. A subsequent post-metal annealing at 400 °C induces CRRAM behavior. A forming voltage of 4.3 V is required for this device to initiate the switching process. The same device also exhibits bipolar switching at lower compliance current, Ic < 50 µA. The CRRAM device has high reliability. Formation of an intermediate titanium oxynitride layer is confirmed by cross-sectional HRTEM analysis. The origin of the complementary switching mechanism is discussed on the basis of AES, HR...

INTRODUCTION

The feature size (F) of nonvolatile memory is scaling down toward the nanometer regime, driven by the demand for faster, smaller, and denser nano-electronic systems. As cell size continues to shrink, it becomes ever more difficult to sustain a sufficient number of electrons in charge-storage-based memories. Among the several emerging memories, resistive random access memory (RRAM), based on the resistive switching (RS) effect taking place in metal-insulator-metal (MIM) cells, has attracted renewed interest as a promising next-generation nonvolatile memory owing to its simple constituents, high-speed operation, nondestructive readout, low operation voltage, long retention time, and high scalability. [1-7] Binary transition metal oxides, such as SiO2, HfO2, TiO2, NiO, ZnO, Ta2O5, etc., [1-18] have been intensively investigated as active layers for RRAM applications, for the significant advantage that their crystal structure and stoichiometry are more easily controlled than those of perovskite oxides consisting of more than three components. The two-terminal RRAM structure allows its integration in crossbar arrays, by accessing each memory cell through the selection of a word-line and a bit-line. 1,4 The small device size of 4F2 and the availability of 3-D architecture solutions in a crossbar array 10 make RRAM a promising competitor to flash NAND devices. On the contrary, the select device, used to avert the sneak-path current of unselected cells in the low resistance state (LRS), is one of the main challenges for achieving high-density RRAM crossbar arrays. 4,11 To resolve this concern, several approaches, such as threshold switches, 12 oxide diodes, 4 and self-rectifying RRAM, 13 have been proposed. Recently, the complementary resistive switch has attracted renewed interest for coding the logic bit in two different reset (high resistance) states to resolve the sneak-path issue without using any select devices. 14,15 A recent report 19 suggests that complementary resistive switching can be achieved by fabricating a back-to-back RRAM cell configuration; however, this approach requires a complicated and time-consuming process flow. In this work, we report annealing-induced complementary switching (CS) in a TiN/TiOxNy/TiO2/Pt (TTTP) structure having TiN as the bottom electrode.
The complementary switching performance and mechanism in the TTTP structures are associated with charged oxygen vacancies. At the nanoscale, the forming voltage (Vf) and the switching characteristics are significantly controlled by the quantity of oxygen vacancies. The physical and resistive switching properties of TiN/TiOxNy/TiO2-x/Pt structures are investigated, and the complementary switching mechanism is finally discussed using analytical ion-migration models.

RESULTS AND DISCUSSION

A 17 nm thin TiO2 film was grown by rf magnetron sputtering on a Si/SiO2/Ti/TiN substrate. Pt (20 nm) top electrodes (diameter: 150 µm) were deposited by e-beam evaporation to form the Si/SiO2/Ti/TiN/TiO2/Pt structure for memory device characterization. Subsequently, the devices were annealed (PMA) at 400 °C to 550 °C for 1-2 min in oxygen ambient for oxidation. A control sample with as-deposited TiO2 of the same thickness was also prepared under the same conditions for comparison. To probe the thickness of the layers, cross-sectional high resolution transmission electron microscopy (HRTEM) observations were performed using a JEOL JEM-2010F. Auger electron spectroscopy (AES) (VG Scientific Microlab 310F) was used to study the composition of the stacked structures at different depths. Electrical complementary switching characteristics of the TTTP structure were measured using an Agilent B1500A analyzer. Voltage bias was applied to the Pt top electrode, while the TiN bottom electrode was grounded during all electrical measurements.

A forming process (measured at 0 to +6 V, Icc = 1 mA) is necessary to activate the initial device; a positive forming voltage (Vf) of ∼4.3 V is essential to initiate the complementary switching process. Figure 1(a) shows the typical forming curve of the TiO2-based TTTP CRRAM device after annealing at 400 °C. The current abruptly increases from ∼690 pA to the set compliance of 1 mA (Icc) during the forming process, and the device switches from the pristine resistance state (PRS) to the low resistance state (LRS). To reset the electroformed device, a negative voltage of -3 V, without any compliance, is applied. A reset voltage (VReset) of ∼-1.8 V is required to return the device to the high resistance state (HRS), as shown in Figure 1(a). After the initial (first) reset, the device is able to operate at lower currents. A current compliance of 0.1 µA is applied during the set process, while no compliance is applied during reset (LRS to HRS).

Interestingly, complementary switching can be observed along with bipolar switching in the 400 °C annealed device, but only when the compliance current is increased to 50 µA during voltage sweeping, as shown in Figure 2. Almost identical CRRAM characteristics are also detected at higher compliance currents (>50 µA), as shown in the inset of Figure 2(b). Due to the Icc limitation, most of the available oxygen vacancies do not contribute to migration and remain at the cathode. Here Icc controls the amount of positively charged oxygen vacancies that are produced for migration from the bottom cathode toward the top anode. 2,3,20 For RRAM based on binary oxides sandwiched between inert electrodes, the reversible switching is mainly attributed to oxygen vacancies or oxygen ions.
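The picture of conduction modulated by field-driven vacancy migration can be illustrated with the well-known linear ion-drift memristor model (Strukov et al.), in which a normalized doped-region fraction tracks vacancy motion and sets the device resistance between Ron and Roff. The sketch below is a generic toy illustration of that idea, not the authors' device model; all parameter values are assumed for demonstration only.

```python
import numpy as np

# Linear ion-drift memristor model: the oxygen-vacancy-rich fraction
# x = w/D sets the resistance, and the vacancy front drifts with current.
Ron, Roff = 1e3, 1e5        # ohm, assumed limiting resistances
D, mu = 10e-9, 1e-14        # film thickness (m), vacancy mobility (m^2/V/s)
k = mu * Ron / D**2         # drift coefficient: dx/dt = k * i(t)

t = np.linspace(0.0, 2.0, 20000)
v = np.sin(2 * np.pi * t)   # 1 Hz, 1 V sinusoidal sweep
x = 0.1                     # initial doped fraction (assumed)
dt = t[1] - t[0]
i_trace = np.empty_like(t)
for n, vn in enumerate(v):
    R = Ron * x + Roff * (1.0 - x)
    i_trace[n] = vn / R
    x = np.clip(x + k * i_trace[n] * dt, 0.0, 1.0)

# The i-v trajectory forms the pinched hysteresis loop characteristic
# of vacancy-migration-driven resistive switching.
print(f"final resistance: {Ron * x + Roff * (1 - x):.3e} ohm")
```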
It is well established that the transport mechanism of TiO2-x based bipolar RRAM can be modeled with a tunneling barrier or another non-linear transport barrier. 20,21 The electronic conduction of such devices can be modulated by inducing the motion of ionized defects, such as oxygen vacancies, by applying an appropriate voltage across the device. 2,3,22 The filament formation mechanism in titanium oxides can be explained using defect chemistry. TiO2-x is a hypostoichiometric transition metal oxide (TMO). 23 Hypostoichiometry (MOx−δ, δ > 0) results from the formation of (i) oxygen vacancies or (ii) cation interstitials. 24 The formation reactions for (i) and (ii) in TiO2-x are expressed in the Kröger-Vink notation 25 as

O_O^× → V_O^(••) + 2e′ + ½O_2(g)    (1)

and

Ti_Ti^× + 2O_O^× → Ti_i^(••••) + 4e′ + O_2(g)    (2)

where a positive charge is represented by a dot (•) and a neutral species by (×), V stands for a vacancy and Ti for a titanium ion, and the subscript denotes the defect site: (i) for an interstitial site and (Ti) for a titanium lattice site.

To support this explanation, Figure 2 shows the current-voltage (I-V) characteristics of the device with Ic = 50 µA. A set transition, i.e., HRS to LRS, is observed during the positive cycle at ∼0.36 V, and the current increases to a maximum value of ∼15.8 µA at 1.16 V. After the set transition, further increase of the positive voltage causes the resistance state to change back to the high resistance state, i.e., a reset transition sets in; the complete reset occurs during the positive cycle at 1.44 V. Quite similar characteristics are also detected during the negative voltage sweep. A set transition is observed during the negative cycle at ∼-0.28 V. With the current compliance set at 50 µA, the current increases to a maximum value of 15.96 µA at -1.2 V, and the current then starts to decrease, i.e., the device starts to reset, upon further increase of the negative voltage. During the negative cycle the complete reset occurs at -1.52 V. Figure 1(a) shows the linear I-V curve, whereas Figure 2(b) shows the semi-log I-V characteristics. Alternate application of positive and negative sweeps without current compliance limitations thus permits programming the RRAM into two distinct reset states. 26 This can serve for encoding the logic bit in passive crossbar arrays, without any requirement for a select device. 14,27

The cycling measurements were repeated by dc sweeps. The endurance of the Pt/TiOx/TiOxNy/TiN structure after annealing is presented in Figures 3(a) and 3(b) for positive and negative switching cycles, respectively. The current values were measured at ±1.12 V. Figure 3 reveals that the HRS/LRS ratio is higher than 10², without any noticeable degradation or large fluctuations even after 200 switching cycles. Note that the device shows no data loss after 10³ seconds (data not shown).

In order to study the switching mechanism in detail, compositional analysis of the TTTP structure is necessary. Figure 4 shows the typical AES spectra of the annealed CRRAM device. A clear oxygen gradient is observed in the spectra. After annealing, a layer of TiOxNy with almost the same thickness as the TiO2-x (∼10 nm) is formed by intermixing between TiO2 and TiN at the bottom electrode junction. No inter-diffusion of nitrogen atoms throughout the TiO2 layer is observed after annealing, based on the measurement and analysis of the AES spectra shown in Figure 4. As seen from the figure, the oxygen atom concentration decreases after 300 seconds of etching and is almost zero after 660 seconds, due to the intermixing at the junction by the diffusion of oxygen atoms.
This attributes the formation of the interfacial TiOxNy gradient layer at the TiN/TiO2-x interface to the inter-diffusion of oxygen atoms from the TiO2 layer into the TiN bottom electrode during annealing. This oxygen gradient plays a crucial role in the complementary switching mechanism, as discussed for Figure 6. To probe the thickness and confirm the formation of the intermediate layer inferred from the AES results, cross-sectional HRTEM analysis is employed to compare the as-deposited and annealed TTTP structures. The TEM image of a typical as-deposited sample, shown in Figure 5(a), clearly shows the 17 nm TiO2 layer present between the TiN and Pt layers, with no sign of intermixing at the TiN/TiO2 interface. Figure 5(b) shows the typical cross-sectional HRTEM image of the film annealed at 400 °C. After annealing, a clear color contrast gradient is observed in Figure 5(b), indicating the formation of a 10 nm thin interfacial TiOxNy layer between the TiN and TiO2-x layers. After intermixing, this self-assembled layer persists in the film. The thickness of the remaining TiO2-x layer is found to be 10 nm. This result corroborates the results obtained from the AES spectra.

The switching mechanism of binary oxide based RRAM devices can be explained by taking into account the oxygen vacancy migration under a bias voltage and the contributions of both the TiO2-x/TiOxNy bottom and Pt/TiO2-x top interfaces. 2,3,26 The AES spectra reveal that there is an oxygen gradient inside the film, so the TiO2-x layer can be assumed to consist of two resistor regions in series: one at the TiO2-x/TiOxNy bottom interface (Rbot) and another at the Pt/TiO2-x top interface (Rtop), as marked in Figure 6(a). The changes of resistance in these two regions lead to complementary switching. The bottom interfacial TiOxNy layer, however, is believed to always remain in LRS and to act as an oxygen reservoir, which modulates the oxygen vacancy concentration to control the complementary switching at the bottom TiO2-x/TiOxNy interface. The initial state of the memory cell is HRS, when both the Rtop and Rbot interfaces are in HRS (Rtop/Rbot in HRS/HRS), as shown in Figure 6(a). During the forming process a positive bias voltage is applied to the top electrode, and a large number of oxygen vacancies are introduced into the TiO2-x layer toward the bottom electrode. These oxygen vacancies lead to the formation of an oxygen-deficient conductive channel, or filament, and allow the device to be switched to LRS (Rtop/Rbot in LRS/LRS), as shown in Figure 6(b). As described by equation (1), the oxygen gas evolution problem can be resolved by the formation of oxygen vacancies from oxygen atoms that are stored in the TiOxNy oxygen reservoir layer through an oxidation and/or a physical adsorption process. To reset the device after forming, a negative voltage of -1.5 V is applied to the top electrode, which attracts the positively charged oxygen vacancies, so that a large number of oxygen vacancies drift from the bottom interface region to the top interface region. As a result, the filament in the lower region of the TiO2-x, i.e., close to the TiO2-x/TiOxNy interface, is ruptured and the resistance state changes to HRS, while the filament in the upper region remains unaffected, i.e., still in LRS, as shown in Figure 6(c). Once the filament on one side is ruptured, there is no flow of electrons.
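This qualitative two-resistor picture can be captured in a toy state model: the cell conducts only when both interfacial regions hold a filament, and sweeping beyond the set voltage depletes vacancies from the opposite interface. The sketch below is a schematic illustration of the state sequence described for Figure 6, with threshold values taken from the reported I-V data; it is not a physical device model.

```python
# Toy state model of the two series interface regions (R_top, R_bot).
# Thresholds are the reported set/reset voltages of the two sweep polarities.
V_SET_POS, V_RESET_POS = 0.36, 1.44    # V, positive cycle
V_SET_NEG, V_RESET_NEG = -0.28, -1.52  # V, negative cycle

def sweep(state, voltages):
    """state: dict of booleans, True = filament present at that interface."""
    for v in voltages:
        if v >= V_RESET_POS:
            state["top"] = False           # vacancies depleted at top
        elif v >= V_SET_POS:
            state["bot"] = True            # bottom filament completed
        elif v <= V_RESET_NEG:
            state["bot"] = False           # vacancies depleted at bottom
        elif v <= V_SET_NEG:
            state["top"] = True            # top filament completed
        conducting = state["top"] and state["bot"]
        print(f"V={v:+.2f} V  top={state['top']}  bot={state['bot']}  "
              f"cell {'LRS' if conducting else 'HRS'}")
    return state

s = {"top": True, "bot": False}            # after the first reset (fig. 6c)
s = sweep(s, [0.2, 0.4, 1.0, 1.5])         # positive sweep: set, then reset
s = sweep(s, [-0.2, -0.4, -1.0, -1.6])     # negative sweep: set, then reset
```

Running the sketch reproduces the reported sequence: the cell sets near 0.36 V, resets above 1.44 V, sets again near -0.28 V, and returns to its initial post-reset configuration beyond -1.52 V.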
As mentioned above, complementary switching is observed after increasing the compliance current to 50 µA. When a positive voltage is applied with a 50 µA compliance current, the positively charged oxygen vacancies start forming a filament from the Pt/TiO2-x top interface. At a positive voltage of 0.36 V (i.e., VSet), the filament at the bottom interface is completely formed and both regions change to LRS, as shown in Figure 6(d). Upon further increase of the positive voltage (V > VSet), the charged oxygen vacancies are depleted at the top interface, leading to a change to HRS by rupture of the filament at the Pt/TiO2-x top interface, as shown in Figure 6(e). In the case of a negative applied voltage with the higher compliance current (50 µA), the oxygen vacancies are attracted toward the Pt top electrode and the filament starts to form by drift motion. At a set voltage of -0.28 V, the complete filament is formed across the two regions of the TiO2-x layer, both regions change to LRS, and the device is now in the set state, as shown in Figure 6(f). Upon further increase of the negative voltage, the oxygen vacancies start to deplete from the bottom TiO2-x/TiOxNy interface and the filament is ruptured, changing the state to HRS, as shown in Figure 6(g), which resets the device. From the above mechanism it is clear that the complementary switching depends on the amount of oxygen vacancies present inside the TiO2-x layer of this structure. It is also important that an appropriate amount of power is required to make the oxygen vacancies mobile: at lower compliance currents the same device acts as a bipolar switch, due to insufficient power to mobilize the oxygen vacancies. Thus, not only an appropriate amount of oxygen vacancies but also the amount of power is an important parameter for achieving complementary switching.

CONCLUSION

In summary, a novel approach to the transition from bipolar switching to complementary switching in a TiN(BE)/TiOxNy/TiO2-x/Pt(TE) structure has been demonstrated. A forming process is essential for all as-deposited and annealed devices to initiate switching. All the devices show bipolar switching below 50 µA compliance current, while the device annealed at 400 °C acts as a complementary switch above 50 µA compliance current. During CRRAM operation the device sets at 0.36 V and resets at 1.44 V during the positive cycle, and sets at -0.28 V and resets at -1.2 V during the negative cycle. The CRRAM device shows good endurance and retention. The clear formation of an oxygen gradient layer in the TiO2-x and of an interfacial 10 nm TiOxNy layer is observed from the AES and HRTEM data. Based on the AES and HRTEM observations, and with the help of schematic structures, the complementary switching mechanism is explained. This structure has the potential for use in highly dense cross-point memories without cell selection devices.
An oral presentation of dermatofibrosarcoma protuberans with literature review: A case report

Highlights • Dermatofibrosarcoma rarely presents as an intra-oral lesion. • Surgical excision in the head and neck region has inherent challenges. • Dermatofibrosarcoma is an insidious tumor that requires careful pathologic margin assessment.

Introduction Dermatofibrosarcoma protuberans (DFSP) is a rare, low-grade malignancy of the dermis. DFSP accounts for <0.1% of all malignancies and 1-2% of all soft tissue sarcomas [1,2]. The overall annual incidence is 4.2 per million [1]. The average age of presentation ranges from 20 to 50 years, but it has been known to occur at all ages. There is a higher incidence in women and blacks, with poorer survival associated with increased age, male gender, black race, and location in the limbs and the head and neck region [3]. It typically presents as a slow-growing, painless, skin-colored cutaneous plaque. Following the early slow growth, a period of rapid growth can occur, resulting in the classic protuberant nodules [4]. DFSP most often occurs on the trunk and extremities, with only 10-15% occurring in the head and neck region [5]. The common areas affected in this region are the scalp and neck. Characteristic pathologic features include finger-like projections of irregular fibroblasts interwoven in the subcutaneous or muscular tissues, and entrapment of adnexal structures. An immunohistochemical marker specific to DFSP is CD34 [6]. While the metastatic rate of DFSP is low, the tumor is locally invasive with a great potential for destruction of underlying tissue [4]. There is often significant subclinical extension of malignant cells, leading to a high rate of local recurrence [2]. The current treatment of choice is wide excision with at least 2-3 cm margins or Mohs micrographic surgery. Wide excision requires the removal of normal tissue 2-3 cm from the gross boundaries of the tumor, including skin, subcutaneous tissue, and any underlying fascia, muscle or periosteum, to best ensure adequate tumor extirpation [4,6]. Wide excision may result in sizeable defects; in the head and neck region this can lead to functionally and aesthetically unacceptable outcomes. We present a rare case of DFSP presenting from within the oral cavity and the surgical challenges that arise with tumor excision and subsequent reconstruction. In addition, we present a review of reported cases of DFSP presenting within the oral cavity in the PubMed database. The work has been completed in compliance with the SCARE criteria [7].

Methods A literature review was conducted using the US National Library of Medicine "PubMed" database. We sought to identify case reports that included the following terms: "buccal," "oral cavity," "lips," and "dermatofibrosarcoma protuberans." Of the 40 papers identified, 35 were excluded because the reports did not describe intraoral presentations of DFSP.

Case report A 50-year-old Caucasian male presented with the chief complaint of a rapidly enlarging nodule in his mouth of over 5 months' duration. He had a remote history of a tumor on the skin of his left cheek that was excised 16 years prior to this presentation; pathology analysis from the previous surgery was benign. He has a history of chewing tobacco use and no prior medical issues. As a sequela, the patient had a mobile scar on the left cheek. In addition, he reported a history of fullness in his left cheek for years, which he attributed to this scar.
The new intraoral lesion was biopsied at an outside facility and was confirmed to be DFSP. Physical examination revealed no facial deformities or palpable lymphadenopathy. Intraorally, a 3 cm irregularly shaped, bulky submucosal mass was present in the left buccal mucosa, extending anteriorly to the vestibule of the mouth (Fig. 1A). The lesion was resected with 2 cm margins in all directions, creating a full-thickness defect of the left cheek (Fig. 1B). This defect was reconstructed with a folded radial forearm free flap (Fig. 1C). The final pathology report noted positive margins in both the peripheral and the central deep subcutaneous tissues (Fig. 2A, B). Subsequent re-excision was performed with 2 cm additional peripheral margins and 1.5 cm deep margins (Fig. 1D). The resulting defect was reconstructed with a circumoral rotational flap to close the left oral commissure defect. Buccal advancement flaps were used for closure of the included intraoral defect. The remaining defect was closed using wide undermining of preplatysmal skin and a rotational skin flap from the left preauricular area. One year postoperatively, there was no evidence of recurrent disease (Fig. 1E). Our patient does not report any oral incompetence or speech issues. He is reasonably satisfied with his aesthetic result, aside from wanting hair on his reconstructed flap.

Results We reviewed the pertinent literature covering DFSP presenting in the head and neck region. Only five papers were identified dealing with DFSP presenting in the oral cavity. The average age of patients in our literature review is 56 years (range 44-72 years), with 60% of cases occurring in males. All of the articles described a similar presentation consisting of a slowly enlarging oral cavity mass. Treatment consisted of conventional excision. Only two papers reported formal surgical margins; three papers included reconstructive and closure techniques. Interestingly, the margins used in these reports were variable, ranging from 1.5 to 2 cm, but none reported recurrence. It is unknown how long these patients were followed up [Table 1].

Discussion Dermatofibrosarcoma protuberans (DFSP) is a low-grade mesenchymal cell neoplasm with low metastatic potential but a highly locally invasive nature, leading to frequent recurrence after surgical excision [8]. DFSP classically presents in the third to fifth decades of life as a slow-growing cutaneous plaque. It is most often skin-colored but can be violaceous, erythematous, or have a blue-brown discoloration. Following the initial indolent phase of slow growth, there can be a phase of rapid growth resulting in the characteristic protuberant nodules [4]. The diagnosis of DFSP is based on histology. DFSP has a characteristic storiform pattern, with spindle-shaped tumor cells arranged in a cartwheeling pattern [13]. Histologic analysis will demonstrate mesenchymal tumor cells arranged around a central area of collagen or vascular space [13]. This classic pattern is diagnostically significant, as it has been shown to be unique to DFSP. Despite this well described, invariable histologic presentation, the cell of origin is unknown [5,11,13]. The presentation of DFSP in the head and neck region is a relatively uncommon occurrence, as it accounts for only 7% of all head and neck sarcomas [11,13]. DFSP of the head and neck most commonly presents as a single nodular area of painless, firm cutaneous swelling and is rarely multinodular [4,13].
In the head and neck, the tumor presents in the third and fourth decades of life, arising as a firm skin nodule on the scalp or neck over a period of months to years. Of note, as in our case, 10-20% of patients have a history of prior trauma, surgical or burn scars, or vaccination at the site of the tumor, although a causal relationship has not been determined [8,9]. The average size of DFSP tumors in the head and neck ranges from 2 to 5 cm at the time of presentation [2,13]. Stojadinovic et al. found the median tumor size to be 2 cm, with the four largest tumors presenting as multinodular plaques. Classical DFSP is diagnosed when histological examination shows tumor cells arising in the dermis with the characteristic storiform pattern [13]. DFSP predominantly grows horizontally, typically involving only the dermis and subcutaneous tissue; however, long-standing untreated tumors can invade the deep fascia, muscle and bone. DFSP presenting primarily as an intraoral mass with no involvement of the overlying dermis is extremely rare. Meehan et al. and Nemenqani et al. describe cases of intraoral DFSP. In both cases, the patient presented with a solid, solitary intraoral nodule with no involvement of the dermal tissue of the overlying cheek, and in both instances the histologic analysis demonstrated spindle-shaped tumor cells arising from the submucosa without any involvement of the adjacent buccal mucosa [7,10]. Similarly, Gonzaga et al. reported a case of buccal mucosa DFSP that presented as a solid, yellow, solitary intraoral mass located entirely within the submucosal plane [5]. All of these cases were classified as intraoral or buccal mucosa DFSP based on the lack of involvement of the overlying dermis and the primary presentation as an intraoral mass [5,7,10]. Although the tumor cells originated in the subcutaneous tissue in our case, it similarly had no involvement of the dermis, both grossly and on microscopic analysis, and it presented as a solitary intraoral mass, which led us to classify it as an intraoral presenting DFSP. The surgical management of DFSP has evolved into two routes: conventional excision and Mohs micrographic surgery. Surgical excision with margins of 2 cm or larger has been shown to be appropriate, with a low risk of recurrence [4]. Recently, Mohs micrographic surgery has been shown to be an effective modality with lower recurrence rates than conventional excision, using smaller margins in cosmetically sensitive areas [9,10]. Imatinib has emerged as an alternative in cases where DFSP cannot be controlled locally [11]. A systematic review found that, when used as neoadjuvant therapy, imatinib in conjunction with surgery has potential for tumor removal with negative margins [12]. While DFSP has a propensity for regional invasion, it rarely metastasizes; both local metastasis to lymph nodes and distant hematogenous metastasis are rare, occurring in less than 5% of DFSP cases [2,5,13]. Factors associated with increased risk of metastatic disease include age greater than 50, increased cellularity, high mitotic index, multiple recurrences, positive microscopic margins, location in the head and neck region, and large size 14,111,213. Survival rates of over 95% are reported over the course of 5-15 years [9]. The nonspecific appearance and often asymptomatic presentation frequently result in initial misdiagnosis and incomplete excision [8].
These tumors can be mistakenly diagnosed as keloids, hypertrophic scars, or benign soft tissue tumors such as lipomas [4,13].

Conclusion DFSP is a tumor that can rarely present from within the oral cavity. If a patient presents with a slow-growing mass, biopsy and appropriate staining are crucial in determining the correct diagnosis. The literature review is consistent in that excision requires wide margins or Mohs surgery to reduce recurrence. Furthermore, close follow-up is always indicated, as there is a high rate of recurrence. Using a multi-disciplinary team approach is essential to diagnose, treat and follow up patients. In the head and neck region, total excision can pose difficult reconstruction needs due to the sensitive and aesthetic nature of facial reconstruction.

Declaration of Competing Interest There are no conflicts of interest to disclose.

Funding There are no sources of funding for this research.

Ethical approval This study is exempt from ethical approval in our institution.

Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Traumatic Asphyxia with Diaphragmatic Injury: A Case Report

Asphyxia (from the Greek α, "without", and sphyxis, "heartbeat") is a condition of deficient tissue oxygenation leading to hypoxia, with different etiologies. Traumatic asphyxia is a mechanical cause of hypoxia resulting from external compression and blunt thoracic trauma; it is also called crush asphyxia [1]. Approximately 33% of blunt trauma patients have a thoracic injury, and it is estimated that 25% of traumatic deaths are secondary to chest trauma [2]. Excessive venous pressures cause the major manifestations of asphyxia, and the characteristic signs include facial and upper chest petechiae, subconjunctival hemorrhages, cervical cyanosis, and neurological signs due to cerebral edema, as well as temporary loss of vision as a result of retinal edema. Venous hypertension in the valveless cervicofacial system results in the above findings [3]. Treatment of traumatic asphyxia aims to maintain adequate tissue oxygenation and perfusion of the affected organs, and is directed towards the associated chest injuries (hemothorax, pneumothorax, rib fractures, and pulmonary contusion) and other injuries; however, surgical intervention may be necessary. The primary concept of management is based on the systemic nature of the insult, which needs careful assessment, followed by general and specific treatments accordingly.

ABSTRACT Traumatic asphyxia, or Perthe's syndrome, is a rare clinical syndrome characterized by cervicofacial cyanosis, petechiae, subconjunctival hemorrhage, neurological symptoms, and thoracic injury. It affects both adults and children after blunt chest trauma. The diagnosis of this condition is based mainly on the specific clinical signs, which should immediately bring to mind the severity of the trauma, the various probable types of pulmonary injury, and the need for screening and careful assessment of other organs that might also be injured. In this report, we describe the case of a 39-year-old male who developed traumatic asphyxia after severe blunt chest trauma during his work at a construction site. The patient had multiple injuries to the chest, abdomen, head and neck, which were treated conservatively. An associated diaphragmatic injury was successfully treated by video-assisted thoracic surgery. This patient is one of five patients who were admitted to Saqr Hospital in the United Arab Emirates, diagnosed with traumatic asphyxia, and treated with mechanical ventilation, supportive measures, and fiberoptic bronchoscopy, for both diagnostic and therapeutic indications, in our unit in the period between July 2006 and June 2013. As traumatic asphyxia is a systemic injury, careful assessment of the patient and a search for other injuries are mandatory. Treatment usually involves supportive measures for the affected organs, but surgical intervention may sometimes prove to be an important part of the treatment. Bronchoscopy should be performed for diagnostic and therapeutic reasons because of the associated pulmonary and possible tracheobronchial injuries.
subconjunctival hemorrhages [Figure 1], bluish to black discoloration of the face, and bluish to red discoloration of the neck and upper chest [Figure 2]. There were bruises and surgical emphysema at the neck and upper chest, but no bleeding from the nose or the ears. His vital signs were as follows: blood pressure (BP) = 110/60 mmHg, heart rate (HR) = 108 beats/min, respiratory rate (RR) = 34 breaths/min, and oxygen saturation on room air was 80%. Auscultation of the chest revealed poor air entry in the lungs and scattered crepitations. Heart auscultation was normal and there were no signs of cardiac tamponade. The abdomen was soft but with tenderness in the left upper quadrant, and bowel sounds were positive.

A portable cervical spine X-ray was normal. Portable chest X-ray showed bilateral hemopneumothorax and rib fractures on the left side. Two size 32F chest tubes were inserted, one on each side, and immediately drained 800 ml of blood from the left side and 550 ml from the right side. Oxygen therapy was started by nasal cannula, but the patient's oxygen saturation did not improve. Arterial blood gases showed hypoxia (partial pressure of oxygen (PO2) = 43 mmHg) and hypercapnia (partial pressure of carbon dioxide (PCO2) = 61 mmHg) with a pH of 7.27. Thus, the patient was intubated and started on controlled mechanical ventilation at the following settings: a frequency of 18 breaths per minute, tidal volume of 800 ml, positive end-expiratory pressure of 5 cm H2O, and fraction of inspired oxygen of 80%. A nasogastric tube was also inserted and drained gastric juice. The patient was admitted to the intensive care unit (ICU).

In the ICU, the patient was further evaluated to determine the amount of blood loss and the function of other organs. An electrocardiogram and cardiac enzyme studies were assessed to exclude blunt cardiac injury (BCI). While the patient was on mechanical ventilation, the arterial blood gases improved (PO2 = 100 mmHg, PCO2 = 40 mmHg, bicarbonate = 23.9 mEq/L, oxygen saturation 99%, and pH 7.35).

The radiological and computed tomography (CT) examinations of the chest confirmed the above findings, in addition to bilateral pulmonary contusions. CT of the abdomen was normal, and brain CT showed mild brain edema. While in the ICU, the patient developed bleeding from the endotracheal tube, which was controlled using tracheobronchial irrigation with cold saline and sodium bicarbonate as well as suctioning. We used fiberoptic bronchoscopy through the endotracheal tube to clear the thick inspissated bloody secretions and to clarify the cause of the bleeding. Examination of the airway tree showed a contused anterior wall of the trachea 2.5 cm above the carina [Figure 3]. Ultrasound examination of the abdomen was normal on two follow-up assessments, and neither solid organ injury nor intra-abdominal fluid collection was detectable. The patient was evaluated by an ophthalmologist and an ENT specialist to rule out any related injuries.

The patient was weaned from the mechanical ventilator after six days, and the right-sided chest tube was removed. Repeated chest X-ray showed expanded lungs, but the left dome of the diaphragm was elevated and the costophrenic angle was blunted, with retained blood in the left thoracic cavity [Figure 4]. These findings, together with the unexplained upper abdominal pain, justified performing diagnostic left thoracoscopy, although the CT scan and ultrasound of the abdomen were normal.
The patient underwent left-sided thoracoscopy, and a tear in the diaphragm was found, with slight bulging of the fundus of the stomach through the tear but without herniation of any abdominal viscera into the thoracic cavity apart from a small piece of omentum. Video-assisted thoracic surgery (VATS) repair of the tear was performed immediately using single-lung ventilation: two ports (10 mm and 5 mm) and a 3 cm utility incision were used to access the left thoracic cavity, and the tear was closed with interrupted 3/0 Prolene sutures. Retained clotted blood was also removed from the left thoracic cavity by frequent suctioning and irrigation. The size 32F chest tube was left in place. The patient was extubated immediately after surgery and shifted to the thoracic surgery ward. On the second post-operative day, the chest tube was removed after repeated chest X-rays showed a completely expanded left lung. The post-operative period was uneventful, and the patient was discharged five days after surgery. He was followed up in the outpatient clinic for six months, and both clinical and radiological examination did not show any significant morbidity.

DISCUSSION Traumatic asphyxia is described in most of the literature as a rare condition [4,5], caused by sudden compression of the thoracoabdominal region; a Valsalva maneuver is necessary for the development of the syndrome [6]. The trauma is usually caused by road traffic accidents, compression of the body between two heavy objects, entrapment beneath vehicles where the body is compressed against the ground, or falling into a narrow space [7]. It has also been reported in relation to asthma, paroxysmal coughing, protracted vomiting, and jugular venous occlusion [8]. It was first described over 170 years ago by Ollivier in his observations on the cadavers of people trampled during crowd upheavals in Paris on Bastille Day [9]. Later, Perthe's added other characteristics such as mental dullness, hyperpyrexia, hemoptysis, tachypnea and "contusion pneumonia" to the initial description [9]. The reason why the signs of traumatic asphyxia are confined to the head and upper chest may be that the lower part of the body is protected from the elevated venous pressure by a series of valves; alternatively, increased airway pressure may compress or obliterate the inferior vena cava and protect the lower part of the body [10]. In our patient the trauma occurred at work, whereas most published cases followed road traffic accidents [4]. The characteristic physical appearance of our patient was the first thing to attract attention towards the diagnosis of traumatic asphyxia; the importance of an accurate history, paying attention to the mechanism of trauma, and a thorough systemic clinical examination is to be stressed.

There was strong evidence of the systemic nature of traumatic asphyxia, confirmed by the involvement of different organs in the clinical syndrome without direct involvement by the trauma. In some fatal cases, autopsy findings showed macro- and microscopic changes in the thyroid gland resulting in a black thyroid, which may be considered one feature of the syndrome [11].
Treatment of traumatic asphyxia is based on oxygen supplementation and intubation with mechanical ventilation, general supportive measures, and management of the other associated injuries, among which the lungs head the list of affected organs. The outcome is usually good; however, the condition can lead to significant morbidity and mortality depending on many factors, including the age of the patient, the duration of chest compression, the severity of pulmonary injury, and the associated injuries, especially those involving the cardiovascular system and the head, which may be fatal [12].

CONCLUSION Traumatic asphyxia implies more systemic injuries than just a syndrome, and it manifests as unforgettable, striking, and alarming specific physical signs. It can be missed in patients with multiple injuries or when the presentation is not classical; hence, it should always be kept in mind in such patients, especially when the injury involves blunt trauma to the chest. Management includes early recognition, general supportive measures, and organ-specific treatment. Bronchoscopy should be considered in any case of traumatic asphyxia for both diagnostic and therapeutic purposes.

Figure 1: Patient presented after being stuck between two heavy objects for about five minutes, with subconjunctival hemorrhage in the right eye and bluish to black discoloration of the face.
Figure 2: Bluish to red discoloration of the neck and upper chest, with a well-marked demarcation between the involved area of trauma and normal tissue.
Figure 3: Bronchoscopic view showing contusion of the anterior wall of the trachea 2.5 cm above the carina.
Figure 4: Chest X-ray showing elevation of the left dome of the diaphragm with clotted hemothorax on day six after admission, one day after weaning from the mechanical ventilator.
Harnessing the unique properties of MXenes for advanced rechargeable batteries

In recent years, two-dimensional MXenes have emerged as potential electrode materials for rechargeable batteries due to their unique properties, such as exceptional safety, significant interlayer spacing, environmental flexibility, large surface area, high electrical conductivity, and excellent thermal stability. This review examines recent advances in the field of MXenes and their composites (hybrid structures), which are found to be useful for the electrochemical applications of advanced rechargeable batteries. The main focus of this review is on metal-ion batteries and lithium-sulfur (Li-S) batteries. It is intended to show that the combination of recent improvements in synthesis and characterization, greater control of the interlayer distance, and new MXene composites together serves as an emerging and promising route for energy storage applications.

Introduction Since the isolation of graphene in 2004, the importance of two-dimensional (2D) graphene-like materials has been growing significantly in the scientific community owing to their outstanding features, such as excellent mechanical, electronic, and optical properties, and their exciting potential for various practical applications. Given their high surface-to-volume ratio, 2D materials (2DMs) offer high specific surface areas, enabling full utilization of all accessible sites as effective electrode materials. As a result, the exposed contact area between the electrodes and electrolytes is significantly enhanced, and the pathways for charge transport are shortened. These excellent electrochemical characteristics make them suitable candidates for various energy storage applications. A key appeal of 2DMs is that they are naturally suited to types of integration not readily possible with any three-dimensional (3D) material: forming heterostructures by stacking 2DMs together into lateral or vertical devices, doping flexibility, nanoconstrictions, and several others. Over the last decade, considerable research has been carried out to identify new 2DMs, mainly focusing on monoelemental materials and those containing two elements, such as graphene [1], silicene [2], germanene [3], phosphorene [4], transition metal dichalcogenides (TMDs) [5], and transition metal oxides (TMOs) [6]. Among the family of 2DMs beyond graphene, transition-metal carbides, carbonitrides, and nitrides, also called MXenes [7-12], have attracted much interest. Their general chemical formula is Mn+1XnTx with n = 1, 2, 3 or 4, where M is a transition metal, X represents carbon and/or nitrogen, and T is hydroxyl (OH), oxygen (O), or fluorine (F) [13-15]. The transition-metal carbide Ti3C2Tx was the first 2D MXene, successfully synthesized in 2011 by using an aqueous fluoride-containing acidic solution to etch down to a few layers, followed by exfoliation with water and other organic solutions [7]. Ti3C2Tx manifested unique structural and electronic properties in both experiments and theoretical modeling. Within a year of this ground-breaking research, universal etching of Al from many other MAX phases gave rise to the new family of 2DMs known as MXenes. To date, numerous MXenes have been successfully synthesized, including 2D vanadium carbide (Vn+1CnTx) [16] and niobium carbide (Nbn+1CnTx) [17].
MXenes have become the broadest category of 2DMs, and the family is still growing significantly due to the range of possible elemental configurations and prospective applications [18-23]. Their exciting properties and use in a number of applications have been covered in several recent reviews [24-30]. Motivated by these stimulating properties and their suitability for energy storage applications, in this review we focus on electrode applications of pristine MXene phases and their heterostructures in alkali-ion batteries and the comparatively unexplored Li-S batteries. We start by introducing the family of layered MAX phases, with the aim of a better understanding of MXenes and their synthesis with the various functional groups such as oxygen (-O), hydroxyl (-OH) and/or fluorine (-F). We simultaneously highlight the importance of the computational design of MXenes in predicting new members of the family and their heterostructures, as well as in better understanding their physical and chemical properties at the atomic scale (section 2). In sections 3 and 4, we outline recent cutting-edge work on the development of MXenes with enhanced performance for rechargeable battery electrode applications, such as lithium-ion, non-lithium-ion, and lithium-sulfur batteries. We review the effects of surface functionalization and heterostructuring in achieving high storage capacity and high-safety battery performance, focusing on physical and chemical characterization, computational modeling, and electrochemical properties. Finally, section 5 is dedicated to a summary and to highlighting the critical challenges in developing MXenes and their heterostructures for alkali-ion and Li-S battery applications.

Structure In the 1960s, hundreds of new carbides and nitrides were identified by Hans et al in Vienna [31], but these phases remained unexplored for nearly 30 years, until the 1990s. Later, Barsoum and El-Raghy synthesized Ti3SiC2, which showed remarkable electronic and thermal conductivity with a suitable combination of ceramic and metallic properties [32]. The subsequent synthesis of Ti4AlN3 suggested that these phases share a basic structure, which was later designated by the acronym 'Mn+1AXn phases' (n = 1, 2, 3, etc) or 'MAX phases'. M is a transition metal, A stands for a group IIIA or IVA element (mostly group 13 and 14 elements), and X is C and/or N [33,34]. The MAX phases constitute a significant group of currently >130 diverse compositions, the vast majority of which crystallize in a hexagonal structure with the P63/mmc space group or other derivatives [35]. Figure 1 shows the schematic crystal structures of MAX phase materials. The structure is composed of MX6 octahedra interleaved with pure Al layers. The main difference separating the three MAX phase families, viz. n = 1, 2, or 3, is the number of M layers (2, 3, or 4) between the A layers; they are designated 211, 312, and 413, respectively [36-38]. Exposure of the chemically active exterior metal layers of a MAX phase leads to surface terminations. Removing the 'A' element from the bulk MAX phase results in mono- or few-layer MXene, as described in figure 1. The schematic picture of 2D MXenes comprises different monolayer variants. M2X is the tri-sublayer structure, in which the carbon/nitrogen (X) atom layer is sandwiched between two metal (M) atom layers in a hexagonal unit cell. There are also M3X2 and M4X3 sheets, as shown in figure 1.
Besides these, double transition metal MXenes are possible. These contain a configuration of X sandwiched between two different transition metals, M and M′, and these metals can also be randomly distributed [11,39,40]. It is well understood that the outer metal atoms play a vital role in determining the surface behavior of these MXenes. Since a metal atom generally has a coordination number of six, it is natural to expect that the transition metals in MXenes favor making six chemical bonds with neighboring atoms [41,42]. Hence, for environmental stability, these MXenes are functionalized at the outer metal atoms, which further tunes their properties. It is experimentally observed that the outer layers are often terminated with F, O, and/or OH groups, and the ratios and distributions of these groups strongly depend on the synthesis method [15,43-45].

Synthesis For the synthesis of bulk MAX phases, we refer readers to the literature [18,46-50]; here we focus only on the 2D counterpart. In general, MAX phases have stronger interlayer interactions than graphite, bulk black phosphorus, and MoS2 [51-54], so the simple mechanical (scotch tape) method has difficulty producing monolayer MXenes. The most suitable way to exfoliate monolayers and obtain colloidal solutions of mono- or few-layer sheets is the delamination technique. Within a few years of the initial announcement in 2011, MXenes had already grown into an established class of 2DMs with exceptional possibilities in terms of variation of chemical composition and highly tunable properties [7,55-57]. The chemical and structural properties of MXenes are directly associated with the structural order of the bulk phases [7]. The steps to synthesize MXene from these bulk MAX phases are described below.

Due to the strong metallic M-A bond, it has not been possible to isolate the Mn+1Xn layers and obtain MXenes by mechanical shearing of MAX phases. Nevertheless, M-A bonds are chemically more reactive than the stronger M-X bonds, which makes selective etching of the A layers possible. This selective etching is the central requirement for MXene synthesis. In the process of MAX-to-MXene conversion, the etched layers are substituted by various termination groups Tz, such as hydroxyl (-OH), oxygen (-O) or fluorine (-F) [7,18]. The material recovered after etching thus consists of Mn+1XnTx multilayers held together by hydrogen and/or van der Waals (vdW) bonds. The most widely used and highly selective etchant for this purpose is hydrofluoric acid (HF): aluminium layers are etched by HF from the MAX phase to synthesize the MXene. The reaction occurring in this process is given in equation (1), where HF first etches the MAX phase, forming few-layer MXene clays and AlF3 and releasing H2. In the presence of water, one can obtain OH-functionalized MXene, and there is also the possibility of forming Ti3C2F2 during the process, as described in equations (2) and (3). The details of etching conditions for different types of MAX phases, including those beyond the Al-containing ones, can be found in the literature [11,18,38,58-64]. Overall, different MXenes can be synthesized under different etching conditions, which in turn yield different qualities. However, when the atomic weight of the metal atoms is high, longer etching times and stronger etchant solutions are required due to the stronger M-Al bonding.
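The etching reactions referenced above as equations (1)-(3) are not reproduced in the text; for the archetypal Ti3AlC2 case they are commonly written as follows (a reconstruction based on the standard HF-etching chemistry, not copied from this review):

```latex
% HF etching of Ti3AlC2 (equation (1)) and surface termination (equations (2)-(3))
\begin{align}
\mathrm{Ti_3AlC_2} + 3\,\mathrm{HF} &\rightarrow \mathrm{AlF_3} + \tfrac{3}{2}\,\mathrm{H_2} + \mathrm{Ti_3C_2} \tag{1}\\
\mathrm{Ti_3C_2} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{Ti_3C_2(OH)_2} + \mathrm{H_2} \tag{2}\\
\mathrm{Ti_3C_2} + 2\,\mathrm{HF} &\rightarrow \mathrm{Ti_3C_2F_2} + \mathrm{H_2} \tag{3}
\end{align}
```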
HF is highly corrosive; it can penetrate the skin and harm bone and muscle tissue. For this reason, an alternative to a strong HF solution should be used where possible, or the HF concentration should be minimized. The most common alternative is etching with a mixture of hydrochloric acid (HCl) and a fluoride salt. This was first reported by Ghidiu et al, who performed in situ etching of Ti3AlC2 using lithium fluoride (LiF) and HCl and achieved a similar result [48]. In that work they concluded that the presence of protons and fluoride ions is a necessary condition for the etching process and MXene synthesis. The presence of the metal halide leads to the intercalation of cations (such as Li+) and water between the MXene layers, which increases the interlayer spacing and also weakens the interlayer interaction. Other favorable alternatives include sodium fluoride (NaF), potassium fluoride (KF) and NH4F with different HCl concentrations [16,65]. Ammonium hydrogen bifluoride (NH4HF2) has also been used to etch thin epitaxial Ti3AlC2 films and powders [66]. In addition, an F-free etching method with an electrolyte mixture of NH4Cl and tetramethylammonium hydroxide (TMAOH) has been applied to Ti3AlC2 with more than 40% yield [67]. Tengfei et al reported another promising F-free hydrothermal method based on the Bayer process [68]. Recently, Cl-terminated Ti3C2Tx and Ti2CTz MXenes have also been synthesized from their parent MAX phases in the presence of a ZnCl2 Lewis acidic melt [69]. Aside from Al etching, there have been reports of Si etching from Si-based MAX phase precursors like Ti3SiC2 with the use of HF and hydrogen peroxide (H2O2) solution [70]. The nitride MXene Ti4N3Tx has also been synthesized in a molten eutectic mixture of KF, LiF, and NaF at 550 °C under an argon (Ar) environment [71]. V2NTz and Mo2NTz have been realized by ammoniation of their carbide counterparts at 600 °C [72]. Chemical etching leaves synthesis byproducts, such as AlF3 salts, which have to be washed out of the product; hence the product may need to be washed several times with water, which can functionalize the MXene with -OH [7], as can be seen in equation (2). An acid pre-wash with HCl or sulfuric acid (H2SO4) is also used to dissolve salts such as AlF3 or LiF [71]. Further, a delamination process is used to exfoliate the MXene from the resulting multilayers. Intercalation of suitable molecules can widen the interlayer space, weaken the interlayer interaction, and enable the exfoliation of multilayer MXene into single-layer nanosheets at reasonably large scale [73]. For example, dimethyl sulfoxide (DMSO), isopropylamine, tetrabutylammonium hydroxide, choline hydroxide, n-butylamine, and urea can all be used as intercalants, followed by sonication, to produce MXenes [74-76]. In the case of Ti3C2Tx, the interlayer spacing increases from 9.8 to 17.6 Å with DMSO intercalation, and in another example Nb2CTz was exfoliated via the intercalation of isopropylamine [17,73]. Readers are referred to the literature for specific methods and information about the delamination process [15,77].
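For completeness, the milder LiF/HCl route described above relies on HF generated in situ; a sketch of the commonly assumed overall reaction, which is not written out explicitly in this review:

```latex
% In-situ HF generation in the LiF/HCl ("MILD") etching route
\begin{equation*}
\mathrm{LiF} + \mathrm{HCl} \rightarrow \mathrm{HF} + \mathrm{LiCl}
\end{equation*}
```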
Theoretically predicted MXenes The advancement of state-of-the-art, high-throughput theoretical methods has enabled the prediction of thousands of new, possibly stable materials among hundreds of thousands of potential combinations, with the capability to effectively screen multivariate chemical spaces of compounds [78-81]. The growing interest in MXenes as a new class of 2DMs has fueled research to identify new phases. Due to the bottleneck in the synthesis of new 2DMs, computational efforts continue to expand, covering the prediction of materials and widening the field of their possible utilization [30]. Although computational calculations have predicted many promising 2DMs, only a few dozen of them have been experimentally realized. Still, high-throughput calculations give us the liberty and support to try new kinds of materials, which further motivates experimentalists to focus their efforts on synthesizing them. Hence, theoretical strategies are needed that make it possible to describe the electronic structure of MXenes and their heterostructures and to suggest new members of the MXene family at a fundamental level. Various high-throughput calculations on the properties and composition of MXenes have been performed lately [82]. Nathan et al recently used elemental information and data from high-throughput density functional theory (DFT) computations [83] to apply a positive-and-unlabeled machine learning approach to 2D transition metal carbides, carbonitrides, nitrides, and their layered parent MAX phases. In that work, they identified 20 different MAX phases with a high probability of experimental realization, which could be exfoliated to produce MXene sheets; examples include Hf4C3, Ta4N3, and Sc3C2. In other work, a new form of MAX phase, Cr2TiAlC2, with a Ti layer sandwiched between two outer chromium carbide layers in an M3AX2 structure, was calculated to be dynamically stable [40,84]. Another ab initio DFT study predicted and verified the stability of new, ordered, double-M MXene phases with stoichiometries like M′2M″2C3, where M′ is the exterior-layer metal and M″ is the internal-layer metal [11]. These metals can be Ti, V, Nb, Ta, Cr, or Mo. In all these cases, carbon atoms occupy the octahedral sites between the M′ and M″ layers. Besides this, all these MXenes can have several termination groups such as O, OH, or F. Several theoretical predictions of nitride MXenes with various configurations and applications have also been reported [85-88], although only a few nitride MXenes, such as V2NTx and Ti4N3Tx, have been experimentally synthesized to date [71,89]. However, a theoretical discovery does not guarantee that the corresponding MXene sheets can be synthesized, even if they are energetically and thermodynamically stable in calculations; the main challenge is to find a suitable etchant. Nonetheless, this gives the impression that various new MXenes will be synthesized in the future, opening the possibility of further expansion of the family.

Electronic properties The striking characteristics of the MAX phases emerge from their layered structural arrangements and the mixed metallic-covalent nature of the M-X bonds, combined with comparatively weaker M-A bonds [18,35,46,47,90].
These exceptional combinations make MAX phases suitable for a broad range of applications, from sensors and electrical contacts to microelectromechanical systems, protective coatings, and many more [28,39,91]. The electronic structures of the first few MXene phases were established rapidly [92-94]. Similar to the MAX phases, most pristine MXenes have a metallic electronic structure. In DFT calculations, Ti3C2 was found to be metallic, but functionalization of this MXene can effectively change its electronic structure from metallic to semiconducting [7,13]. The electrical conductivity of Ti3C2Tx has reached 3250 S m−1, which is even higher than that of graphene (2500 S m−1) [52,95]. An extensive DFT investigation was performed by Khazaei et al for various metal carbides M2C (M = Sc, Ti, V, Cr, Zr, Nb, Ta) and metal nitrides M2N (M = Ti, Cr, Zr), with the metal atoms saturated by O, OH, and F [41,96]. Theoretically, it has been suggested that the F and OH groups affect the electronic structure of MXenes to the same degree because each withdraws only one electron from the surface. O termination, in contrast, is capable of withdrawing two electrons from the surface to become stabilized [10]. Sulfur (S) terminated nitride MXenes have also been reported to show metallic behavior and good electrochemical properties [86]. Most functionalized MXenes are found to be metallic or magnetic, with some exceptions: only a handful of them, such as Sc2CT2 (T = O, F, OH), Cr2CT2 (T = OH, F), and Ti2CO2, are semiconductors, whereas Cr2NT2 (T = O, F, OH) and Cr2CT2 (T = F, OH) are magnetic [97,98]. However, all the semiconducting MXenes mentioned above have indirect band gaps, excluding Sc2C(OH)2, which has a small direct band gap [10]. Lee et al studied the strain effect on Sc2CO2, suggesting that the band gap gradually decreases with increasing tensile strain [99]; at a critical tensile strain, the indirect band gap changes to a direct one. An external electric field can also tune the gap from indirect to direct [100,101]. The electronic properties have been experimentally verified for selected MXenes such as Ti2C, Ti3C2, and Mo2C [102-105]. Furthermore, multiple-metal-atom MXenes open possibilities for a range of potential applications, owing to their lower symmetry and the ability to choose combinations of metal atoms that tailor the chemical and electronic properties. Dong et al predicted possible spintronic materials with robust ferromagnetism in Ti2MnC2Tx, independent of surface functionalization, and similarly in oxidized Hf2MnC2O2 and Hf2VC2O2 [106]. Lately, Sun et al investigated a series of surface-metal- and termination-dependent metal-insulator transitions in TiCr2N2 and TiMn2N2 with Ti as the middle layer [107]. Mo2TiC2Tx, Mo2Ti2C3Tx, and Cr2TiC2Tx have been experimentally realized and manifest electrochemical properties distinct from those of the conventional single-metal Ti-C-based MXenes [11,108]. Besides these, Tao et al designed a new i-MAX phase in 2017 with stoichiometry (Mo2/3Sc1/3)2AlC, with in-plane chemical ordering of the Sc and Mo atoms [62]. Selective etching made it possible to remove both the Al and Sc atoms, resulting in 2D Mo4/3C MXene sheets. The initially characterized capacitance reached 1153 F cm−3 and 339 F g−1, exceeding those of Mo2C by 65% and 28%, respectively.
Khazaei et al predicted that certain i-MXene compositions may display semiconducting behavior, due to the lack of centrosymmetry, and can exhibit piezoelectric properties [109]. i-MXenes can be advantageous for properties that are profoundly dependent on surface geometry. We also encourage readers to consult the literature for the mechanical properties of MXenes [43,71,98,110,111].

2D MXene based electrodes for rechargeable batteries High performance demands on electrochemical energy storage systems have become increasingly important. Unquestionably, the efficiency of rechargeable battery electrodes depends largely on the successful design and implementation of electrode materials. The requirements for potential energy storage materials consist of: (a) good reversible redox reactions; (b) easy access for electrolyte ions; (c) a sufficient number of adsorption sites; and (d) high electrical conductivity. Over the last few years, a large number of 2DMs have demonstrated considerable benefits in electrochemical energy storage, owing to both their large surface area and their relatively fast ion transport pathways. With only single- or few-atomic-layer thickness, multiple surface active sites, and excellent mechanical characteristics, 2DMs meet the challenges of electrochemical energy storage technologies, especially rechargeable batteries. As mentioned in the previous section, roughly 20 different types of MXene have been successfully synthesized, and many important breakthroughs have been achieved in the energy storage area owing to their outstanding and distinctive structure, high conductivity, high ionic diffusion, and other advantages compared to other 2DMs. Based on their 2D features as well as superior electrical conductivity, MXene materials represent an alternative for high-performance electrochemical rechargeable batteries, including metal-ion and Li-sulfur batteries. In this context, it is of the utmost necessity to survey and summarize the most recent progress on MXenes as promising battery electrodes.

2D MXene for alkali metal-ion batteries In recent years, a large amount of computational and experimental research has been carried out to design new materials for negative electrodes in Li-ion and non-Li-ion batteries with high storage capacity, high safety performance, and low volume expansion (MXene materials are summarized in figure 2). Among the family of 2DMs that have attracted much interest, 2D Mn+1XnTx MXenes (with n = 1-3; M: transition metal; X: carbon and/or nitrogen; T: hydroxyl, oxygen or fluorine) have been reported to be promising upcoming alternatives to graphite as anodes (see figure 2), owing to their unique properties including large capacity, a large surface area with ample space for the intercalation of alkali metal ions, and the surface activity derived from the transition-metal surface terminated by OH-, O- or F-groups [18,19,23,112]. In this section, we highlight some recent breakthroughs in metal-ion batteries, focusing on the electrochemical efficiency of MXenes as negative electrodes. Interesting progress has been achieved experimentally. On the basis of gravimetric capacity, it was established that bare 2D Mn+1Xn MXenes with n = 1 (M2X) are the most promising and can store a larger number of metal ions per gram than bare M3X2 and M4X3 layers.
Naguib's group studied the possibility of using Ti2C and Ti3C2 layers as anodes for Li-ion batteries and concluded that Ti2C exhibits a gravimetric capacity about 50% higher than that of Ti3C2 [73,113]. It was further determined in the case of Ti2C that the lithiation/delithiation peaks are located at 1.6-2.0 V relative to Li+/Li with a gravimetric capacity of 225 mAh g−1 [113], and that the gravimetric capacity of Ti2CTx is about 1.5 times greater than that of Ti3C2Tx [73]. Subsequently, niobium and vanadium carbides, namely Nb2C and V2C, were tested as electrode materials for Li-ion batteries, showing the ability to handle high charge/discharge rates with reversible capacities of about 170-260 mAh g−1 at 1 C for the Nb2C layer, and 100-125 mAh g−1 at 10 C for the V2C layer [9]. Tang et al reported the feasibility of using the Ti3C2 monolayer and its F- and OH-functionalized surfaces as promising active materials for Li-ion batteries. Using DFT, they found that the Ti3C2 sheet has outstanding electrochemical characteristics when used as a Li-ion battery anode, such as a significant adsorption energy of about −0.504 eV per Li atom, a low diffusion barrier of 0.07 eV, a low insertion potential of 0.62 V vs. Li/Li+, and a corresponding theoretical specific capacity of 320 mAh g−1, compared to Ti3C2F2 (130 mAh g−1) and Ti3C2(OH)2 (67 mAh g−1). Xie et al systematically explored, using theoretical calculations combined with experiments, the interaction between Li ions and differently functionalized 2D transition-metal carbides including Sc2C, Ti2C, Ti3C2, V2C, Cr2C, and Nb2C [18,114]. They found that the Li-ion storage capacity depends mainly on the type of surface functional groups, and that the O-terminated MXenes exhibit the highest specific capacities (see table 5). Additionally, the computed Li diffusion barriers confirm high Li mobilities [18,114], with the possibility that Li atoms form an additional layer due to the high electrical conductivity of MXenes. MXenes have also demonstrated encouraging performance in terms of specific capacity and stability for non-lithium-ion batteries; table 4 summarizes the capacity and stability of Ti3C2Tx in non-lithium-ion battery applications [19,115-119]. Xie et al investigated a family of 2D transition metal carbides as promising anodes for non-Li-ion batteries, including Na, K, Mg, Ca, etc, by using both DFT and experiments [120]. They found that O-functionalized and bare MXenes present a high theoretical specific capacity and prove to be promising anode materials for non-Li-ion batteries. Furthermore, Vivek et al recently modeled S-functionalized nitride MXenes, namely Ti2NS2 and V2NS2, and their possible use as active materials in Li- and Na-ion batteries by first-principles modeling [86]. Because these materials exhibit high conductivity, Li ions can form reversible multi-layer intercalation, which manifests a high theoretical specific capacity (see table 5). Zinc hybrid-ion batteries are also a suitable substitute for expensive Li-ion batteries. In these batteries, Zn metal works as the negative electrode and exhibits outstanding features among the many electrochemical battery chemistries mentioned, owing to its high theoretical specific capacity of 820 mAh g−1 and its low redox potential of about −0.76 V vs. the standard hydrogen electrode.
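The theoretical gravimetric capacities quoted in this section follow from Faraday's law, C = nF/(3.6M) in mAh g−1 for n transferred electrons per formula unit of molar mass M. A minimal numerical check is sketched below; the per-formula-unit Li counts are assumptions inferred so as to reproduce the quoted values, not figures stated in the review:

```python
# Quick numerical check of theoretical gravimetric capacities quoted
# in this section, using Faraday's law: C [mAh/g] = n * F / (3.6 * M).
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons, molar_mass_g_per_mol):
    """Theoretical gravimetric capacity for n electrons per formula unit."""
    return n_electrons * F / (3.6 * molar_mass_g_per_mol)

# Molar masses from standard atomic weights
M_Ti3C2     = 3 * 47.867 + 2 * 12.011          # ~167.6 g/mol
M_Ti3C2F2   = M_Ti3C2 + 2 * 18.998             # ~205.6 g/mol
M_Ti3C2_OH2 = M_Ti3C2 + 2 * (15.999 + 1.008)   # ~201.6 g/mol
M_Zn        = 65.38

print(round(capacity_mAh_per_g(2.0, M_Ti3C2)))      # ~320 mAh/g, Ti3C2 (assumed 2 Li per f.u.)
print(round(capacity_mAh_per_g(1.0, M_Ti3C2F2)))    # ~130 mAh/g, Ti3C2F2 (assumed 1 Li per f.u.)
print(round(capacity_mAh_per_g(0.5, M_Ti3C2_OH2)))  # ~67 mAh/g, Ti3C2(OH)2 (assumed 0.5 Li per f.u.)
print(round(capacity_mAh_per_g(2.0, M_Zn)))         # ~820 mAh/g, Zn (two-electron Zn2+/Zn couple)
```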
Significant breakthroughs have been achieved for this type of rechargeable battery on the cathode side, with materials such as manganese and vanadium oxides. Although high capacities have been achieved with these cathode materials, they unfortunately show working potentials of less than 1.0 V. To overcome these challenges, Xinliang et al [121] recently fabricated a zinc hybrid-ion battery cathode through a phase-transition process, which significantly enhances battery capacity, resulting in adequate capacity and cycling stability. They demonstrated the high potential of V2CTx for zinc hybrid-ion batteries and provided an efficient approach to attaining enhanced battery efficiency through the introduction of a phase transition appropriate to the electrode materials (see figure 3): Li+/Zn2+ insertion enlarges the layer spacing of V2CTx, which facilitates the further insertion and extraction of ions. To summarize, from this brief collection of the latest experimental results, it is clear that MXenes and functionalized MXenes offer promising performance when used as anodes in Li- and non-Li-ion batteries. Theoretical considerations based on DFT are expected to provide guidance for the selection as well as the optimization of potential MXene-based anodes.

2D MXene for lithium-sulfur (Li-S) batteries Despite the above-mentioned advantages and flexibility of Li-S batteries, their practical implementation in day-to-day life remains challenging due to several crucial issues related to the sulfur cathode, e.g. the shuttling effect of the intermediate soluble Li2Sn (with 3 ≤ n ≤ 8), the large volume change of sulfur, and the electrically insulating nature of S and the short-chain polysulfides (Li2S2 and Li2S), as illustrated in figure 4(b). These critical issues are discussed briefly in the following:

1. Shuttle effect: This corresponds to the back-and-forth diffusion of Li2Sn polysulfides between the Li-metal anode and the S-containing cathode. In other words, it is produced through the dissolution of Li2Sn in the liquid electrolyte (see figure 4(a)). During the discharging and charging process, once the solid sulfur in the S-containing cathode is reduced to long-chain Li2Sn (n = 8, 6, 4) polysulfides, these dissolve into the liquid electrolyte and react with the Li-metal anode. In the process, they are electrochemically transformed into the short-chain polysulfides Li2S and Li2S2, resulting in the loss of active material. Subsequently, Li2S and Li2S2 re-diffuse to the S-containing cathode and form long-chain polysulfides during the charging process, followed by re-diffusion of the long-chain polysulfides back towards the anode side, provoking an irreversible loss of sulfur, insufficient Coulombic efficiency, self-discharge processes, and poor cycling stability [133].

2. Large volume expansion of S: The considerably higher crystalline density of sulfur (approximately 2.07 g cm−3) compared to that of lithium polysulfides leads to a significant volume expansion of about 80% during the lithiation process. The successive volume expansion/contraction throughout the discharging/charging cycles causes fragmentation of the active materials, leading to considerable structural instability of the electrode, quick capacity decay, and safety issues [134].

3. Insulating nature of S, Li2S and Li2S2: While Li-S batteries are in operation, Li2S and Li2S2 are susceptible to coating the sulfur and hindering its further utilization.
Therefore, a significant quantity of conductive carbon additive is required to ensure both electron transport and the electrochemical reaction, which results in poor gravimetric capacity [135].

Significant progress has been made in designing modern and innovative cathode materials to overcome all the aforementioned limitations, in particular the shuttle effect of soluble LiPSs [136]. A typical advance in early investigations was the implementation of porous conductive carbon as the cathode of Li-S batteries [137-139]. Nevertheless, porous carbon based cathode materials are not sufficient to hinder the migration and diffusion of soluble LiPSs, which leads to low long-term cycling stability over the charge/discharge process. This has recently been revealed by studying a wide category of materials based on graphene oxides [140,141], metal oxides such as TiO2 and MnO2 [142,143], and metal-organic frameworks (MOFs) [144,145]. The crucial factor for overcoming LiPS migration and achieving a long cell life is a strong, lasting chemical interaction between the host material and the dissolved LiPSs. However, several of these materials do not possess the characteristics required for an effective cathode, namely good mechanical stability and excellent electronic conductivity. In this context, MXenes can potentially fulfill a crucial role owing to their capability of achieving high electronic conductivity as well as significant surface area, which can enhance electron transport between the electrodes and improve the chemical interaction of the sulfur cathode with LiPSs. This has led to widespread interest, with a growing number of publications in recent years, as shown in figure 4(c). DFT-based first-principles calculations have yielded a relevant criterion for the theoretical strength of the LiPS/MXene interaction based on binding energies, defined from the difference between the total energies of the isolated LiPS cluster and MXene host and that of the combined system [147,148]. Through an energetic analysis, they confirmed that Ti2CF2 successfully suppresses the shuttling of LiPSs, while the neutralization of soluble long-chain polysulfides to insoluble elemental sulfur on Ti2CO2 eliminates the shuttling. Liang et al reported Ti2CTx as a sulfur host material for the cathode, which enhances the performance of Li-S batteries [122,123]. It has been noticed, on the basis of the lone electron pair nature of S atoms [149], that the polysulfides are able to act as soft bases, which means that a MXene host without terminal groups may interact strongly with the polysulfides via Ti-S coordination (see figure 4(d)) [122]. Furthermore, recent findings by Nazar's group provide insight into the formation of Ti-S bonds, with a large quantity of thiosulfate-polythionate species forming on the surfaces of the host materials upon contact with polysulfides [143]. Figure 4(e) illustrates the interaction between OH-decorated MXene and Li2Sn species, which can be described as a two-step process [123]: (I) the Li2Sn species are initially chemisorbed on the MXene surface, undergo redox reactions with the OH terminations, and form thiosulfate groups; (II) the titanium atoms readily accept additional Li2Sn polysulfides from the electrolyte, forming Ti-S bonds through Lewis acid-base interactions.
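As a point of reference, the binding-energy criterion used in such anchoring studies is commonly defined as follows (a sketch of the usual convention, in which a larger positive E_b indicates stronger anchoring; the exact sign convention in [147,148] may differ):

```latex
% Binding energy of a Li2Sn cluster adsorbed on a MXene host
\begin{equation*}
E_{b} = E_{\mathrm{MXene}} + E_{\mathrm{Li_2S_n}} - E_{\mathrm{Li_2S_n/MXene}}
\end{equation*}
```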
Min Fang et al described the effect of many-body dispersion (MBD) on the binding energies of polysulfides on the Ti2CF2 host material, and further discussed the anchoring mechanism during Li2Sn polysulfide reduction [124]. By comparing the Li2Sn polysulfides, they found that Li2S4 tends to adsorb vertically on Ti2CF2, with both Li atoms binding to the surface due to the strong Li-F interaction (see figure 4(f)). The MBD method predicts binding energies over 20% lower than the vdW-surf method in the case of long-chain polysulfides, with the many-body contribution ranging between 16.8% and 33.3% during the delithiation of Li2Sn polysulfides. On the experimental side, Tang et al reported that Ti3C2Tx with nanoscale S uniformly decorated on its surface shows enhanced performance relative to bare Ti3C2Tx (see table 2) [151,152]. A recent computational study by Wang et al investigated the feasibility of S functionalization of MXenes to create a sulfur-rich cathode. This showed that vanadium carbide with S functionalization (V2CS2) presents an enhanced binding energy (see table 1 and figure 4(g)), which suppresses the shuttling of soluble polysulfides by preventing the decomposition of the Li2Sn polysulfides [125]. Unlike standard conductive hosts for S cathodes, MXenes have the potential to effectively trap Li2Sn polysulfides during the discharge/charge of Li-S batteries. Functionalized MXenes can additionally enhance the binding of Li2Sn polysulfides, which suppresses the shuttle effect of soluble polysulfides in Li-S batteries.
Types of heterostructures
A standard method to improve the electrochemical performance of 2D layered materials, including MXenes, is hybridization with complementary materials into multifunctional heterostructures or composites, including 0D-2D, 1D-2D, 2D-2D, and lateral 2D structures constructed from low-dimensional constituent nanostructures. In this section, we review the emerging applications of 2D heterostructures that enhance performance in the field of electrochemical storage. Numerous nanostructures of different dimensionalities, such as quantum dots (QDs)/nanoparticles (0D), nanotubes/nanorods (1D), and 2D nanosheets, have been hybridized with 2D MXene nanosheets to form heterostructures, as presented in figure 5. These hybridizations can substantially improve the overall electronic conductivity, especially for heterostructures hybridized with a conductive matrix such as carbon nanotubes (CNTs), various nanoparticles, or conductive polymers [77]. The volume expansion during the insertion process can also be effectively accommodated by designed porous and hollow hybrid structures, which show better performance for electrochemical energy storage, including lithium-ion batteries (LIBs) and other batteries [54,77,[163][164][165][166][167]]. Two main categories of heterostructures, corresponding to a wide range of applications, are vertically stacked heterostructures (see figure 5(c)) and horizontal in-plane (lateral) heterostructures (see figure 5(d)). Vertically stacked heterostructures are those in which two or more 2DMs are in interfacial contact, either through strong (i.e. covalent) or weak (i.e. vdW) interactions. The sequence of stacking is significant because it may change the physical and chemical properties, which can be modulated by different stacking orders.
In horizontal or lateral heterostructures, by contrast, at least two different 2DMs are joined together at their edges, which results in strong bonding between the edge atoms. 2D hybrid heterostructures have demonstrated their competence by outperforming devices made from single 2DMs. This improvement is attributed to synergistic effects caused by the close interaction between different materials, which may result in considerable changes in physical and chemical properties that ultimately allow useful features to be modulated or activated for various applications.
[Figure 5, panels (e)-(g): the rational design of the MoS2/m-C nanosheet superstructure for constructing ideal MoS2/C atomic interfaces to enhance lithium-ion storage; the capacity retention of the MoS2/m-C nanosheet superstructure, MoS2/graphene composites, exfoliated graphene, and annealed MoS2 nanosheets at current densities from 200 to 6400 mA g−1; and the 3D charge-density difference plot for Li adsorption at the MoS2/G interface, in which green and dark red indicate electron accumulation and depletion. Reproduced from [168]. Copyright 2015, Wiley-VCH.]
2D heterostructures are generally synthesized by using a single 2D nanomaterial as a substrate and then adding secondary layers via either a 'top-down' or a 'bottom-up' approach. Solid-based techniques [169][170][171] have shown special importance for energy storage and conversion. Although large, high-quality nanosheets cannot be made via chemical vapor deposition (CVD) [172,173], the technique can synthesize relatively small nanosheets for 2D heterostructures for energy storage and conversion. Energy-related applications are sensitive to surface properties [174], porosity [175] and electrolyte-exposed active edge sites [19,54,[176][177][178][179][180]]. 2D hybrid heterostructures can help to overcome the limitations of single 2DMs, as evident from the various supercapacitor and battery performance improvements achieved recently. On a Ragone plot (i.e. energy density vs power density), batteries offer high energy density but low power density, whereas supercapacitors show the opposite. Jiang et al [168] synthesized a novel 2D hybrid nanosheet superstructure consisting of alternating single-layer MoS2 and single-layer carbon nanosheets in atomic interface contact, as shown in figure 5(e). It was reported that the optimized MoS2/m-C hybrid superstructure solves most of the key challenges for MoS2-based anode materials for LIBs, such as the poor electrical conductivity of MoS2 along the perpendicular direction, accommodating the volume expansion upon lithiation, overcoming the aggregation and restacking of the layered MoS2 material, providing the largest interface contact for Li-ion storage, and mitigating polysulfide shuttling [168]. A MoS2/m-C nanosheet superstructure has been demonstrated as an anode material for Li-ion storage, exhibiting a high reversible specific capacity of 1183 mAh g−1 at a current density of 200 mA g−1, as shown in figure 5(f). From the DFT investigations, the MoS2/m-C hybrid superstructure interface provided the most energetically favored pathway for high Li-ion storage, demonstrating an explicit synergetic effect between MoS2 and the single-layer carbon nanosheet, as shown in figure 5(g). Besides this, other MoS2-based hybrid superstructures have displayed good battery performance [181][182][183][184].
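As a rough illustration of the Ragone-plot axes mentioned above, the following back-of-envelope sketch converts electrode-level figures into specific energy and power; all numbers are hypothetical:

```python
# Back-of-envelope Ragone-plot arithmetic (illustrative values only).
# Specific energy: 1 mAh/g * 1 V = 1 Wh/kg (electrode-material basis).
# Specific power:  1 A/g   * 1 V = 1000 W/kg.

def specific_energy_Wh_per_kg(capacity_mAh_per_g: float, voltage_V: float) -> float:
    return capacity_mAh_per_g * voltage_V

def specific_power_W_per_kg(current_A_per_g: float, voltage_V: float) -> float:
    return current_A_per_g * voltage_V * 1000.0

# Battery-like electrode: high capacity, modest sustainable current.
print(specific_energy_Wh_per_kg(300.0, 3.0), specific_power_W_per_kg(0.5, 3.0))
# Supercapacitor-like electrode: low stored charge, very high current.
print(specific_energy_Wh_per_kg(30.0, 2.0), specific_power_W_per_kg(20.0, 2.0))
```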
In the next sections, we discuss some recent advances in 2D hybrid heterostructures for batteries to illustrate their promising potential in this field.
Lithium-ion batteries
Generally, conventional LIBs use graphite as the anode. The limitations of graphite, such as its low specific capacity of 372 mAh g−1 and poor rate capability, motivate researchers to focus on new smart 2DMs such as graphene and other mono-elemental materials, TMDs, MXenes, etc. Single-layer graphene turned out to have only a moderate capacity, the lack of a suitable voltage plateau, and low Coulombic efficiency. Single-layer MXenes also show relatively low specific capacity compared to their hybrid heterostructures [19,21,23]. As discussed in earlier sections, most MXenes are potential candidates that have displayed high electrical conductivity, fast molecular and ion transport, low operating voltages, and high storage capacities [19,185]. Cyclic voltammetry measurements of a single-layer Ti3C2 nanosheet show charge and discharge capacities of 264.5 and 123.6 mAh g−1 in the first cycle at 1 C, with a Coulombic efficiency of 47% [186]. In contrast, r-GO/Ti3C2 heterostructures show a discharge capacity of 930 mAh g−1 and a charge capacity of 500 mAh g−1, with a Coulombic efficiency of more than 95% [187]. This significant improvement of r-GO/Ti3C2 over the monolayer has also been reported by other groups [164,[188][189][190][191][192]]. Recently, r-GO/Ti2CTx films showed significantly enhanced electrochemical performance, with an improved reversible capacity of ≈700 mAh g−1 at 0.1 A g−1, high Coulombic efficiency, excellent cycling stability and good rate performance [187]. Moreover, MoS2@Ti3C2 nanocomposites give a reversible discharge capacity of 131.6 mAh g−1 at a current density of 1000 mA g−1 over 200 cycles with excellent cycling stability, which is significantly higher than that of pure Ti3C2 (58 mAh g−1) and MoS2 (3.6 mAh g−1) [193]. Additionally, the Ti3C2/CNTs heterostructure shows very high discharge and charge capacities of 642.5 and 403.5 mAh g−1, respectively; it displays not only a high reversible capacity but also long-term cycling stability and excellent rate capability [194]. Similarly, many researchers have tried to improve the electrochemical performance by forming free-standing, flexible MXene/CNT composite electrodes, whose specific capacity and rate performance are enhanced via improved ion accessibility. Nb2CTx can be effectively synthesized in the presence of isopropylamine, as reported by Mashtalir and co-workers [17]. The free-standing, flexible Nb2CTx/CNT composite paper electrode gives excellent cyclability, a Li-storage capacity of more than 400 mAh g−1 at 0.5 C after 100 cycles, and a Coulombic efficiency of approximately 100% as an anode material. In a full battery, with an Nb2CTx/CNT film used as the anode and a LiFePO4 electrode as the cathode, a charge and discharge capacity of 24 mAh g−1 was reported for the Nb2CTx/CNT anode [195]. Apart from that, porous Ti3C2Tx/CNT films reported by Ren et al [190] resulted in a significantly enhanced reversible Li-ion storage capacity of ≈1250 mAh g−1 at 0.1 C, good rate performance of 330 mAh g−1 at 10 C, and excellent cycling stability.
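For orientation on such capacity figures, the 372 mAh g−1 limit quoted for graphite follows from the standard theoretical-capacity formula C = nF/(3.6 M); a minimal sketch (standard electrochemistry, not specific to any cited work):

```python
# Theoretical gravimetric capacity, C = n*F/(3.6*M) in mAh/g, where n is the
# number of electrons transferred per formula unit and M the molar mass (g/mol).

F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mAh_per_g(n_electrons: float, molar_mass: float) -> float:
    """Theoretical specific capacity in mAh/g (1 mAh = 3.6 C)."""
    return n_electrons * F / (3.6 * molar_mass)

# Graphite stores one Li per C6 unit (LiC6): n = 1, M = 6 * 12.011 g/mol.
print(f"graphite: {theoretical_capacity_mAh_per_g(1, 6 * 12.011):.0f} mAh/g")
# -> ~372 mAh/g, the limit quoted above.
```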
The free-standing and flexible Ti3C2Tx/CNT electrode for a Mg2+/Li+ battery delivered a capacity of ≈100 mAh g−1 at 0.1 C and ≈50 mAh g−1 at 10 C. The capacity was maintained at 80 mAh g−1 for >500 cycles at 1 C, with a Coulombic efficiency very close to 100% [188]. Free-standing Mo2CTx/8 wt% CNT films perform as Li-ion electrode materials with stable reversible capacities of 250 and 76 mAh g−1 at 5 and 10 A g−1, respectively, over 1000 cycles [38]. Recently, MXene/oxide composites have also been used as anode materials for rechargeable batteries. TiO2 is a good candidate anode material for storage devices due to its low cost, environmental friendliness, and availability [196]. Researchers mainly follow two approaches: selective partial oxidation of the MXene, and the introduction of external metal ions to interpose oxides into the MXene [197][198][199][200][201]. Ahmed et al [197] reported that TiO2/Ti2C hybrid materials delivered discharge capacities of 389, 337 and 297 mAh g−1 at current densities of 100, 500 and 1000 mA g−1 after 50 cycles, respectively, with an excellent rate capability of 150 mAh g−1 at 5000 mA g−1. Nb4C3Tx containing external Nb2O5 nanoparticles (Nb4C3Tx@Nb2O5) was synthesized via CO2 oxidation; it gives a capacity of 208 mAh g−1 at 0.25 C, with 94% retention of the specific capacity and a Coulombic efficiency of 100% after 400 cycles [198]. As reported by Zhao et al [202], the hybrid structure Ti3C2Tx/NiCo2O4 used as an electrode achieved high reversible capacities of 1330, 650 and 350 mAh g−1 at 0.1, 5 and 10 C, respectively, remaining stable over hundreds of cycles. Recently, Ti3C2 has been synthesized in the presence of NiCo-MOF (a 3D Ti3C2/NiCo-MOF composite structure), which significantly enhanced the electrochemical performance of the Li-ion battery [222]. The Ti3C2/NiCo-MOF composite structure was prepared by vacuum-assisted filtration, as shown in figure 7(a). During the synthesis of this composite, when NiCo-MOF was added into the Ti3C2 nanosheet solution via vacuum-assisted filtration, a porous structure was naturally constructed owing to interlayer hydrogen bonds between the MXene and the MOF nanostructure. The rate performance of the Ti3C2/NiCo-MOF composite and of the bare Ti3C2 electrode at different current densities is presented in figure 7(b). The bare Ti3C2 electrode displayed charging capacities of 141 and 67 mAh g−1 at current densities of 0.1 and 1 A g−1, respectively; its relatively low electrochemical performance reflects Li-ion diffusion limited by the compact stacking of the multilayer Ti3C2 structure [222]. On the other hand, the electrochemical performance of the Ti3C2/NiCo-MOF composite structure depends mainly on the loading of NiCo-MOF in the hybrid structure. The Ti3C2/NiCo-MOF-0.4 composite exhibits the highest capacity among all the considered NiCo-MOF loadings, with discharge capacities of 402 and 256 mAh g−1 at current densities of 0.1 and 1 A g−1, respectively. The capacity recovers its initial value when the current density is returned to 0.1 A g−1, which means that the Ti3C2/NiCo-MOF-0.4 composite electrode displays excellent rate performance for a Li-ion battery. Figure 7(c) shows a high discharge capacity of 504.5 mAh g−1 in the first cycle.
The Ti3C2/NiCo-MOF-0.4 composite electrode retained a relatively high capacity of 240 mAh g−1 with a Coulombic efficiency of 85.7% after 400 cycles, demonstrating excellent long-term cycling life even at a high rate. In another reported work by Meng et al [222], a Ti3C2Tx-based composite anode (figure 7(d)) in LIBs exhibits higher capacity and better rate performance than the pristine Ti3C2Tx nanostructure. Figure 7(e) shows the specific capacity at different current densities for various concentrations of Si nanoparticles. The Ti3C2Tx/Si composite material produced in situ during the rolling process with 10% silicon nanoparticles displayed the best overall balance of capacity, rate capability and cycling stability among the Ti3C2Tx scrolls. Hybrid structures are thus valuable for improving the reversible limit of MXene-based materials in the classical Li-storage system. This results from the development of two electrochemical responses (i.e. conversion/alloying for MXene-based composites and adsorption-desorption for the MXene network). The efficient and environmentally friendly route of intercalating inorganic compounds into MXenes, as in the composites presented in table 3, should be adopted to further enhance the reversible capacity of the resultant batteries. Most of the MXene composites display a significant improvement in gravimetric capacity, with maximum capacities higher than that of typical graphite anodes [224], some even reaching an excellent 2000 mAh g−1 [203,218,219]. These composites also showed superior rate behavior and cycle lifetime. Furthermore, they exhibited quite strongly pseudo-capacitive cyclic voltammetry and charge-discharge profiles, which means that significant portions of their capacities are delivered at higher voltages than for conventional Li-ion battery anodes [190,202,203,[217][218][219][220]].
[Figure 7: preparation and electrochemical performance of the Ti3C2/NiCo-MOF composite, reproduced from [223]; preparation of the Ti3C2Tx/Si composite, in which Ti3C2Tx scrolls serve as the buffer matrix to accommodate Si nanoparticles, rate profiles at different current densities, and cycling performance at a current density of 400 mA g−1, reproduced from [222]. Copyright 2020, Elsevier.]
From the above descriptions, it is clear that the rate capacity of MXenes is excellent and intrinsically connected to capacitive behavior [225]. Therefore, future work needs to optimize composite materials that exhibit fast redox peaks, where MXenes contribute by providing conductive networks.
Non-lithium-ion batteries
As described in an earlier section, rechargeable batteries based on non-lithium ions such as Na+, K+, Mg2+, Ca2+, and Al3+ have received much attention as emerging low-cost and high-energy-density technologies for large-scale renewable energy storage [19,22,23]. Hybrid structures, e.g. a MoS2-intercalated Ti3C2Tx composite produced using a hydrothermal route, were shown to deliver an enhanced specific capacity of 250.9 mAh g−1 over 100 cycles, and a rate performance with a capacity of 162.7 mAh g−1 at 1 A g−1 [245]. By comparison, an isolated Ti3C2 system gives a specific capacity of 100 mAh g−1 at a current density of 20 mA g−1 after 100 cycles [118]. Xie et al [189] reported that porous MXene Ti3C2/CNT films gave a high volumetric capacity of 345 mAh cm−3 at 100 mA g−1 after 500 cycles and a reversible capacity of 175 mAh g−1 at 20 mA g−1 after 100 cycles.
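Since the surrounding results mix gravimetric (mAh g−1) and volumetric (mAh cm−3) units, a small unit-bookkeeping sketch may help; the electrode density used below is a hypothetical value, not one reported in [189]:

```python
# Volumetric capacity (mAh/cm^3) = gravimetric capacity (mAh/g) * density (g/cm^3).
# Illustrative only: the film density here is an assumed placeholder.

def volumetric_capacity(grav_mAh_per_g: float, density_g_per_cm3: float) -> float:
    return grav_mAh_per_g * density_g_per_cm3

# E.g. a film delivering 175 mAh/g at an assumed density of 2.0 g/cm^3:
print(f"{volumetric_capacity(175.0, 2.0):.0f} mAh/cm^3")  # -> 350 mAh/cm^3
```

Dense MXene films can therefore post impressive volumetric figures even at modest gravimetric capacities, which is why both numbers are usually quoted together.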
For sodium storage, the Sb2O3/Ti3C2Tx hybrid structure delivered a rate capacity of 295 mAh g−1 at 2 A g−1 and an enhanced cycling capacity of up to 472 mAh g−1 at 100 mA g−1 after 100 cycles [230]. Some fruitful and enlightening attempts have also been made to check performance in practical full-battery configurations [244]. Ti3C2/CNTs used as an anode with Na0.44MnO2 as a cathode powers a 2.5 V light-emitting diode for ≈25 min, expending an electrical energy of 41 µWh [189]. In the full-cell configuration, the Ti3C2/CNT-SA electrode gives charge and discharge capacities of 270 and 286 mAh cm−3, respectively; the volumetric discharge capacity was retained at 242 mAh cm−3 at a current density of 50 mA g−1 after 60 cycles, with a Coulombic efficiency of 99% [246]. In theoretical DFT investigations by Dequan et al [234], the Ti3C2 monolayer displayed fast K-ion migration with a specific capacity of 191.8 mAh g−1. Zhao et al synthesized a hybrid structure of PDDC-derived N-rich porous carbon nanochips (NPCN) and Ti3C2, which exhibited good rate performance, as shown in figure 8. The PDDC-NPCN/Ti3C2 hybrid structure is prepared in a face-to-face manner through the electrostatic interaction between NPCN and multilayer Ti3C2 (see figure 8(a)). The hybrid displays a layered structure with a large surface area, effectively utilizing the two components and exposing more accessible active sites, consistent with the close contact between PDDC-NPCN and Ti3C2. The galvanostatic charge/discharge profiles of the PDDC-NPCN/Ti3C2 hybrid-structure anode were measured at a current density of 0.1 A g−1 for the first five cycles; an initial discharge capacity of 797.3 mAh g−1 and a charge capacity of 583.6 mAh g−1 were achieved, as presented in figure 8(b). Figure 8(c) confirms that the PDDC-NPCN/Ti3C2 hybrid-structure anode exhibits superior rate performance at high current densities. It also shows a high reversible capacity of 358 mAh g−1 at a current density of 0.1 A g−1 even after 300 cycles, while after 2000 cycles at a current density of 1.0 A g−1 it displayed a reversible capacity of 252 mAh g−1 with a decay rate of only 0.03 per cycle (figure 8(d)). In the XRD pattern, the (001) diffraction peak of Ti3C2 shifted to a lower angle, indicating that the interlayer spacing expanded from 19.2 to 24.6 Å upon K+ intercalation into the PDDC-NPCN/Ti3C2 composite anode. The hybrid structures provide a larger interlayer spacing, which creates a 3D interconnected conductive framework that accelerates the ion/electron transfer rate. The PDDC-NPCN/Ti3C2 composite also shows high chemical stability owing to its good tolerance of the volume change caused by the phase change during fast charge and discharge. Additionally, it was seen that the PDDC-NPCN/Ti3C2 composite material significantly reduces the K+ binding strength, which accelerates the reaction kinetics. The composite exhibits a large interlayer spacing, providing significant capacity, excellent cycling performance and exceptionally good rate capability for K-ion batteries. These theoretical and experimental results suggest a new strategy for preparing high-performance MXene-based composite anode materials for K-ion batteries.
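The reported interlayer expansion can be connected to the observed (001) peak shift through Bragg's law; a short sketch follows, assuming Cu Kα radiation (λ = 1.5406 Å), which is an assumption since the radiation source is not stated here:

```python
# Bragg's-law conversion behind the reported (001) peak shift:
# d = lambda / (2 sin(theta)). Assumes Cu K-alpha radiation (an assumption).
import math

WAVELENGTH = 1.5406  # Cu K-alpha, Angstrom

def d_spacing(two_theta_deg: float) -> float:
    """Interlayer spacing (A) of a (00l) reflection from its 2-theta (degrees)."""
    return WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def two_theta(d_angstrom: float) -> float:
    """Inverse: 2-theta (degrees) at which a given spacing diffracts."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d_angstrom)))

# The reported expansion from 19.2 to 24.6 A corresponds to a (001) peak
# shifting to lower angle:
print(f"{two_theta(19.2):.2f} deg -> {two_theta(24.6):.2f} deg")  # ~4.60 -> ~3.59
```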
To summarize, MXene heterostructures (MXene composites) can accommodate metal ions owing to their wide interlayer spacing, whereas graphite has a smaller spacing that disfavors the intercalation of heavier ions [234,235,237,238]. However, at full intercalation the diffusion rates become limited, and in that case a further increase in the interlayer spacing of MXenes and their composites is needed. The performance of metal-ion batteries is summarized in tables 4 and 5, in which the Sb2O3/Ti3C2Tx composite electrode of Guo and co-workers displays high performance [230]. Overall, MXene heterostructures (e.g. MXene combined with graphene or other layered materials) have lower diffusion barriers for alkali metals than isolated MXene or graphene [235,237].
Lithium-sulfur batteries
Li-S batteries have garnered much attention owing to their simple configuration, high energy density and capacity, and environmental friendliness, as described in a previous section. Hybrid structures and composites also enhance the properties of materials for Li-S battery applications. As reported by Peng et al [250], a TiC@G/S hybrid-structure cathode for Li-S batteries, fabricated without binder, separator, or current collector, shows a capacity of 670 mAh g−1 at a current density of 0.2 C after 100 cycles with a Coulombic efficiency of 95%. Generally, a typical cathode for a Li-S battery is made of carbon/sulfur. According to Zhao et al [251], multilayer carbon/sulfur flakes derived from Ti2SC displayed promising characteristics for Li-S batteries. A fabricated lamellar-structured, flexible Ti3C2 MXene (graphene, BN) lithium film anode has low overpotential and exhibited a high reversible capacity of 841 mAh g−1 after 100 cycles. A lithium-sulfur full cell with Ti3C2-Li as the anode and sulfur-carbon as the cathode exhibited a high energy density and excellent cycling performance [252]. In another case, composites of hydroxyl-terminated MXene nanosheets with CNTs were recently synthesized [123] and provided high polysulfide adsorption. This further enabled sulfur hosts with excellent long-term cycling performance, with reversible capacities of ≈450 mAh g−1 at 0.5 C after 1200 cycles and a capacity retention of ≈95% [123]. 3D Ti3C2Tx/rGO/sulfur composites, prepared using a liquid-phase impregnation technique and used as a cathode host material for Li-S batteries, provided a high initial capacity of 1144.2 mAh g−1 at 0.5 C with a high retained capacity of 878.4 mAh g−1 after 300 cycles [253]. Gao and co-workers [248] synthesized TiO2 QDs decorated on the surface of Ti3C2Tx MXene (see figure 9(a)(I)), which show quite promising electrochemical performance as a sulfur host for achieving fast and stable Li-S batteries. The TiO2 QDs/MXene hybrid structure exhibits superior performance in suppressing the polysulfide shuttling effect, which gives excellent long-term cyclability and rate capability, as presented in figure 9(a). Compared with a MXene/S cathode, which displayed a lower discharge voltage (2.03-2.31 V), TiO2 QDs/MXene delivered a slightly higher voltage (2.09-2.38 V), corresponding to the reduction of elemental sulfur to higher-order polysulfides (Li2Sn, 4 ≤ n ≤ 8). In contrast, the plateau at the lower voltage in these two cases is related to the reduction of higher-order to lower-order lithium polysulfides [254].
The TiO2 QDs/MXene hybrid structure exhibited higher capacities of 1158, 1037, 925, 812 and 663 mAh g−1 at C/5, C/2, 1 C, 2 C and 5 C, respectively, where 1 C = 1675 mA g−1, as presented in figure 9(a)(III). Figure 9(a)(IV) displays the long-term cyclability of the TiO2 QDs/MXene composite, which retains a high capacity of 680 mAh g−1 at 2 C after 500 cycles with a sulfur loading of 1.5 mg cm−2; this value is roughly twice that of MXene/S cathodes. The Coulombic efficiency of the TiO2 QDs/MXene composite cathodes reaches nearly 100% over the whole cycling process. More importantly, superior electrochemical performance could still be attained at a higher sulfur loading of 5.5 mg cm−2. Another reported work, on Ti3C2Tx/MnO2 composites, delivered a high capacity of 1140 mAh g−1 at 0.2 C over 500 cycles [158]. Table 2 summarizes the cathode materials for Li-S batteries. In another fascinating work, by Lv et al [249], highly conductive Mo2CTx/CNT composites were synthesized via a ball-milling procedure (figure 9(b)(I)). The Mo2CTx/CNT composite electrode exhibited superior electrochemical performance in terms of good rate capability and high capacity, with initial reversible capacities of 1314, 1068 and 959 mAh g−1 at sulfur loadings of 1.8, 3.5 and 5.6 mg cm−2, respectively (figure 9(b)(II)). Furthermore, the rate capacities of the Mo2CTx/CNT/S electrode and their comparison with Mo2CTx/S at current densities from 0.1 to 5 C are displayed in figure 9(b)(III). The charge-discharge profiles for the Mo2CTx/CNT/S and Mo2CTx/S composites are displayed in figure 9(b)(IV). The Mo2CTx/CNT composites exhibited a reversible capacity of 954 mAh g−1 at 1 C after 100 cycles with ≈100% Coulombic efficiency, demonstrating excellent electrochemical performance. MXenes and MXene composites can capture soluble LiPSs through a strong transition metal-sulfur interaction, suppressing the shuttling effect. Moreover, functionalized MXene surfaces display strong chemisorption of LiPSs and thus significantly reduce the loss of active material, providing high capacities after long-term cycling. Based on the composite materials summarized in the tables, heterostructures prove to be promising candidates for energy storage applications.
Summary and outlook
Three types of MXenes, namely 2D metal carbides, 2D metal nitrides, and 2D metal carbonitrides, with different surface functional groups such as oxygen (-O), hydroxyl (-OH) and/or fluorine (-F), have been fabricated in the last decade by selective etching and exfoliation of MAX phases. The high surface charge and hydrophilicity of MXenes lead to stable water-based colloidal suspensions that do not require surfactants for stabilization. This makes MXene synthesis and processing cheap and straightforward, which further encourages researchers to investigate their potential applications. The continually increasing interest in electrochemical energy storage devices requires further investigation and development of enhanced electrode materials that can increase the loading of ions and molecules with high capacity and faster kinetics. From this viewpoint, the new smart 2D electrode material Ti3C2 MXene was first reported and investigated in 2011. Furthermore, many other 2D MXenes exhibit attractive electronic, optical, photonic, photocatalytic, and thermoelectric properties.
Apart from this, MXenes have superior attributes for electrochemistry, such as high strength, a high melting point, high electrical and thermal conductivity, oxidation resistance, a hydrophilic nature, compositional variability, and a large surface area. Therefore, MXenes have emerged as an alternative class of advanced electrode materials. MXene layers have a large interlayer spacing, which is advantageous because cations of different sizes can be easily intercalated into the layers. The combination of superior electrical conductivity and mechanical strength with surfaces functionalized by oxide and/or hydroxide groups, including exposed redox-active transition metal atoms, makes MXenes alluring for use as battery electrodes. MXenes exhibit high capacity and stability in Li batteries. Besides lithium, several cations such as sodium (Na), potassium (K) and magnesium (Mg) can be intercalated into the MXene layers, making MXenes promising candidates for energy device applications. The intercalation of some larger cations has not been explored, because large interlayer cations can have a propensity to lower the conductivity and hence influence the charge-transfer and diffusion kinetics of the ions. The present review has summarized the structural and electronic properties of bulk MAX phases and then of their 2D counterparts, MXenes, along with the synthesis methods of the precursors and the exfoliation of nanoflakes. The applications of MXenes and MXene composites/hybrid structures have been discussed in detail for alkali-ion and Li-S batteries. These outstanding features set MXenes apart from other 2DM families and make them promising candidates for high-performance electrodes. In particular, MXenes and their composites have many advantages for Li-S batteries, such as the unique layered structure of MXenes with an appropriate surface area for sulfur/sulfides, which provides excellent mechanical strength to resist the stress induced by the large volume expansion of sulfur. Their metallic conductivity facilitates electron transport kinetics through the electrolyte/electrode interface, reduces electrode polarization, and supports a high-rate response even at high sulfur loading. MXenes have been used both as sulfur hosts and as modified separators, and the electrochemical performance of MXene-based Li-S batteries can be improved mainly through the materials and the design of the cell. This can be achieved by increasing the active interaction sites to strengthen the immobilization of Li2Sn, introducing spacers to suppress the restacking of the nanosheets, building 3D structures to improve the sulfur loading, and so on. Surface chemistry plays an essential role in achieving electrodes with a superior affinity for LiPS species. The functionalization of MXenes moderates the formation of overly strong metal-sulfur bonds with LiPS species and further enhances the Li-ion diffusivity on the surface, leading to excellent electrochemical performance. Furthermore, the advantages of MXenes and their composites with respect to restacking, stability, high electrical conductivity, and fast electron/ion transfer have been discussed, which open new research windows in the fields of energy conversion and storage. However, some MXene composites with conducting polymers are yet to be investigated. These possible MXene hybrid structures may contribute to potential applications in the next generations of battery electrodes.
Wide-open challenges remain in the controlled synthesis of MXenes and MXene composites for practical applications. New synthesis methods are needed for the controlled synthesis of functionalized MXenes, as current methods lead to random functionalization with more than one functional group [255]. The rapid oxidation of MXenes is a well-known problem that must be addressed before their utilization in any commercial products for energy production, conversion and storage applications.
Why are bacteria smaller in the epilimnion than in the hypolimnion? A hypothesis comparing temperate and tropical lakes
Bacterial size and morphology are controlled by several factors, including predation, viral lysis, UV radiation, and inorganic nutrients. We observed that the biovolume of bacteria from the hypolimnion of two oligotrophic lakes is larger than that of bacteria living in the layer from the surface to 20 m, roughly corresponding to the euphotic/epilimnetic zone. One lake is located in the temperate region at low altitude (Lake Maggiore, Northern Italy) and the other in the tropical region at high altitude (Lake Alchichica, Mexico). The two lakes differ in oxygen, phosphorus and nitrogen concentrations and in the temperature of the water column. If we consider the two lakes separately, we risk reducing the explanation of bacterial size variation in the water column to merely regional factors. By comparing the two lakes, we can gather a more general explanation for bacterial biovolume variation. The results showed that small bacteria dominate in the oxygenated, P-limited epilimnetic waters of both lakes, whereas larger cells are more typical of hypolimnetic waters, where phosphorus and nitrogen are not limiting. Indeed, temperature per se cannot be invoked as an important factor explaining the different bacterial sizes in the two zones. Without excluding the top-down control of bacterial size, our data suggest that the lower average size of bacterial cells in the epilimnion of oligotrophic lakes results from the smaller cells outcompeting larger ones under limiting nutrients.
INTRODUCTION
The heterotrophic bacterial community is the most important biological component in the transformation and mineralization of organic matter in aquatic systems, and its biomass constitutes a large fraction of the total plankton biomass (Simon et al. 1992). It is recognized that bacteria mediate key pathways in biogeochemical cycles (Sherr, and Sherr 2000) and are also the main food source for microorganisms at the base of the food web (Azam et al. 1983). Among the characteristics of aquatic bacterial assemblages, cell size was the first to attract the attention of investigators (Stevenson 1978). Direct epifluorescence microscopy of DAPI (4',6-diamidino-2-phenylindole) stained samples has been successfully used as the standard method for counting and sizing bacteria in recent decades (Porter, and Feig 1980). In the early 1990s, flow cytometry was introduced as an effective and rapid method for bacterial counting (Shapiro 1995) and rapidly became a key tool in aquatic microbial ecology. Nevertheless, microscope counting, although time-consuming, is less expensive and allows a direct view of the cells. The relationship among side scatter, DAPI fluorescence and cell volume measured by epifluorescence microscopy (Robertson, and Button 1989;Gasol, and del Giorgio 2000;Felip et al. 2007) allows bacterial cell size obtained by flow cytometry (LNA and HNA cells) to be compared with cell size provided by microscopy (Button et al. 1996). In general, and for the purposes of this paper, small cell volume came to be associated with low-nucleic-acid (LNA) bacteria, and large cell volume with high-nucleic-acid (HNA) bacteria (Gasol et al. 1999;Lebaron et al. 2001). Earlier, small bacteria were considered dormant (Gasol et al. 1995).
Nevertheless, while small bacteria have an ATP-per-cell content 10-fold lower than large bacteria, once this is normalized to cell volume the ATP/biovolume ratio is quite similar for small and large bacteria, demonstrating that both are active (Wang et al. 2009). Thus, the evidence from the analysis of cytometric parameters is contradictory and does not unequivocally account for the bimodal distribution of bacterial size in terms of either community activity or composition (Bouvier et al. 2007). Bacterial size, structure, and morphology in the vertical profile can depend on many factors, such as predation (Jürgens, and Matz 2002;Pernthaler 2005), viral lysis (Weinbauer, and Höfle 1998;Berdjeb et al. 2011), UV radiation (Corno et al. 2009), and organic and inorganic nutrients (Vrede et al. 2002). The top-down control mechanism indicates that protistan grazing mainly eliminates medium-sized bacterial cells, thus shifting the size structure of bacterial assemblages towards smaller and/or larger cells (Hahn, and Höfle 1999). Therefore, grazing pressure may be indicated as an important cause of bacterial size variation over the year, strictly related to the presence of protozoa. Less evident is the bottom-up control mechanism acting on bacterial size structure. In order to analyze this mechanism, in this paper we focus mainly on the effects of physicochemical parameters on bacterial size, without considering top-down effects. Lakes have mixing and stratification periods throughout the year, according to their thermal pattern. During thermal stratification, gradients of dissolved oxygen and nutrient concentrations are established, which influence the vertical distribution of planktonic organisms. In particular, bacterial cell size in inland waters can fluctuate with depth and time, depending on the characteristics and dynamics of the system. The presence of bacteria with larger mean size in the hypolimnion (20-350 m) and with smaller size in the epilimnion (0-20 m) was observed in the deep subalpine oligotrophic Lake Maggiore (Northern Italy) (Bertoni et al. 2010). In the present paper we enlarged our data set, considering two years of data on bacterial cell size, temperature, oxygen, nitrogen and phosphorus in two oligotrophic lakes, one temperate (Lake Maggiore) and the other tropical (Lake Alchichica). The aim of this study was to determine whether physicochemical parameters could affect the bacterial size patterns in the epilimnion and hypolimnion of the two lakes. We expected new insight into cell size differences from the comparison of two water bodies that differ in many limnological features.
Study sites
Lake Maggiore is a large, deep, subalpine, glacial lake (212.2 km2, Zmax 372 m) (de Bernardi et al. 1996), located in a temperate area (45°57'N and 8°38'E) at a low altitude (198 m a.s.l.) within an exoreic basin with 13 main tributaries in Northern Italy. The lake is classified as holo-oligomictic, since complete overturn only takes place during periods of strong wind and low air temperature, but the hypolimnion is always oxic (Ambrosetti et al. 2003) (Fig. 1). The lake is oligotrophic, with a mean total chlorophyll-a concentration of 2 µg L−1 (Bertoni et al. 2010) and TP of about 10 µg L−1 (Bertoni et al. 2004). Total inorganic nitrogen is never limiting in the whole water column, with nitrate as the dominant form. Other distinctive characteristics are shown in Tab. 1. Lake Alchichica is a small, relatively deep, maar crater lake (2.3 km2, Zmax 62 m) (Filonov et al.
2006), located in a tropical region (19°24'N and 97°24'W) on a high-altitude plateau (2340 m a.s.l.) within the endorheic Oriental Basin of Central Mexico. It is classified as warm monomictic, as annual mixing takes place from late December to early March during the cold dry season. Lake Alchichica remains stratified throughout the rest of the year, during the warm rainy season (Alcocer et al. 2000). The lake is oligotrophic, with a mean chlorophyll-a concentration of 4 µg L−1 (Adame et al. 2008). Inorganic nitrogen can be limiting in the epilimnion, with ammonium as the dominant form (Macek et al. 2009) (Tab. 1).
Samplings and chemical analyses
In Lake Maggiore, samples were collected monthly at the deepest point (Ghiffa: 372 m) during 2006 and 2007. Water samples from the epilimnion were taken using a sampler which collects a 5 L sample in a single operation from the thermocline depth up to the surface (Bertoni, pat. 96/A 000121). Samples from the hypolimnion were all taken at the thermocline depth, at 50 m, and at 50 m intervals down to the bottom. The integrated sample was obtained by pooling volumes from each sample proportional to the thickness of the layer. Soluble Reactive Phosphorus (SRP) and Dissolved Inorganic Nitrogen (DIN) were determined according to A.P.H.A. (1992) after sample filtration through GF/C Whatman filters. Oxygen and temperature were measured in situ using a multiparameter submersible probe (Lake Maggiore: IDRONAUT OS316; Lake Alchichica: Hydrolab DS4/SVR4 Water Quality Monitoring System). In Lake Alchichica, samples were taken monthly throughout 2006 and 2007 using Niskin bottles. The sampling station was approximately in the centre of the water body, at the deepest part of the lake (62 m). Five depths were sampled at mixing and 10 depths during stratification, covering the surface (0.5 m), epilimnion, metalimnion and/or oxycline, hypolimnion, and near-bottom layers. Bacterial morphometric measures (L: length, W: width) were obtained via images taken with a digital color camera (Cool Snap-Pro camera, Media Cybernetics, or Canon S45) and analyzed using image analysis software (Lake Maggiore: Image-Pro Plus 5.1, Media Cybernetics; Lake Alchichica: ImageJ).
[Tab. 1. Comparative characteristics of the temperate Lake Maggiore (Northern Italy) and the tropical Lake Alchichica (Central Mexico).]
The volumes were estimated based on simple geometric shapes (bacteria modeled as cylinders with two hemispherical ends) (Posch et al. 1997), and the systems were calibrated with fluorescent latex beads (0.86 µm diameter). Between 600 and 1000 cells were measured for each sample. Statistical analyses were performed using the Statgraphics Plus Version 5 statistical package. Correlation was tested using the Pearson product-moment correlation, or the Spearman rank-order correlation in the case of non-normally distributed data. Differences between the epilimnion and the hypolimnion were tested using a t-test; the Mann-Whitney rank sum test was used when data were not normally distributed. Bacterial cell biovolume data were smoothed across sampling dates using a simple 5-term moving average.
RESULTS
The depth-time diagrams of isotherms and dissolved oxygen isopleths clearly underline the contrast between the tropical and temperate lakes (Fig. 1). While in Lake Maggiore dissolved oxygen (DO) is present year-round in the deep hypolimnion, in Lake Alchichica DO concentrations vary through the water column.
The epilimnion is saturated or oversaturated, the metalimnion develops an oxycline related to the establishment of the thermocline, and the hypolimnion tends towards anoxia. Soluble reactive phosphorus (SRP, Fig. 2) was significantly lower (Mann-Whitney rank sum test) in the epilimnion than in the hypolimnion in both Lake Maggiore (n=25, p<0.001) and Lake Alchichica (n=16, p<0.001). DIN was also significantly lower (Mann-Whitney rank sum test) in the epilimnion than in the hypolimnion in Lake Alchichica (n=16, p<0.001) and in Lake Maggiore (n=25, p<0.001), but in the latter this nutrient never limits production, as the concentrations are always higher than 600 µg L−1. Bacteria reach biovolumes lower than 0.2 µm3 in Lake Maggiore and lower than 0.1 µm3 in Lake Alchichica (Tab. 2). Comparing the same layers of the two lakes, cell biovolume in the epilimnion of Lake Maggiore was 1.6-fold larger than in Lake Alchichica, and 2.4-fold larger considering the hypolimnion (Tab. 2). In general, the largest differences in bacterial cell size between the epilimnion and hypolimnion appeared at higher water temperatures.
[Tab. 2. Morphometric characteristics of bacterial cells in the epilimnion and hypolimnion of Lake Maggiore and Lake Alchichica (length L, width W (µm cell−1) and volume (µm3 cell−1) as annual mean ± standard error).]
The presence of very small bacteria in Lake Alchichica (mean annual average: 0.04 µm3) has already been underlined (Hernández-Avilés et al. 2010), but a difference along the vertical profile was never noted. Analyzing the two years of smoothed data, we found that bacterial cell biovolumes in the hypolimnion were significantly larger than those in the epilimnion in both Lake Maggiore (t35=3.73, p<0.001) and Lake Alchichica (t19=2.04, p<0.05) (Fig. 3). The size-frequency distributions of bacterial volume in the epilimnion and hypolimnion clearly show the different sizes of the bacterial populations in both lakes (Fig. 4). The differences between these two zones were greater during the stratification periods in both lakes. In the anoxic hypolimnion of Lake Alchichica, cell biovolumes were 1.3 times larger than in the oxic epilimnion, while in the oxygenated Lake Maggiore this ratio was 1.5. Bacterial morphology differed markedly between the two lakes: in Lake Alchichica bacteria showed elongated and thin shapes, whilst in Lake Maggiore the cells had a coccoid form, according to the W/L ratio (Tab. 3). In Lake Maggiore, bacterial cell volumes were significantly correlated with DO, SRP and DIN. In Lake Alchichica, bacterial cell volumes were significantly correlated only with DO and SRP. In neither lake was there a significant correlation between bacterial cell volume and temperature (Tab. 4).
DISCUSSION
In the temperate Lake Maggiore and in the tropical Lake Alchichica, a parallel increase of cell size with depth was found during two seasonal cycles. One explanation of the larger bacteria found in the hypolimnion could be related to differences in community composition. In Lake Alchichica, the bacterial composition in the anoxic hypolimnion is associated with denitrification and sulphate reduction, which may explain the larger size observed in this layer (Hernández-Avilés et al. 2010). Nevertheless, no strong evidence has been published on a direct relation between bacterial community composition and cell size.
Studies of the phylogenetic composition of LNA and HNA cells have yielded conflicting results: some demonstrated that LNA bacteria are not substantially different from HNA bacteria in terms of composition (Bernard et al. 2000;Longnecker et al. 2005); conversely, others found that the phylogenetic composition of the two fractions differs (Zubkov et al. 2001). These results show that more studies are necessary before community composition can be indicated as a cause of the difference in size along the vertical profile (Bouvier et al. 2007). Cell size polymorphism has been found to characterize some species, but adaptation to starvation, UVR or grazing can alter natural morphological characteristics (Bernard et al. 2000;Corno et al. 2009). Therefore, if there is a relation between bacterial cell size and phylogenetic composition, it is not simple to demonstrate and can vary greatly between ecosystems (Bouvier et al. 2007). The absence of a correlation between temperature and bacterial size in Lake Maggiore and Lake Alchichica indicates that temperature cannot be assumed to be a deterministic factor influencing the larger average size of bacteria in the hypolimnion, as previously hypothesized (Bertoni et al. 2010). The significant inverse correlation between bacterial size and oxygen indicates that smaller bacteria prevail in the epilimnetic layer, which is always well oxygenated in both lakes. Thus the cell-size/oxygen relation could be an indirect one, likely related to the smaller bacteria outcompeting the larger autotrophs in the more oxygenated epilimnion. However, the interaction between bacteria and producers in the oxygenated epilimnion need not be competitive: a recent study found that the small cells are those which consume only the labile fraction of organic substrates produced by algae, while the larger ones can also use more refractory sources of nutrients (Zubkov et al. 2004). One common characteristic of Lake Maggiore and Lake Alchichica is their trophic state: according to their nutrient and chlorophyll-a concentrations, both lakes are oligotrophic (Tab. 1). In general, oligotrophic conditions induce temporary or permanent bacterial adaptation characterized by decreased cellular volume (Schut et al. 1997). Therefore, in Lakes Maggiore and Alchichica, picoplankton can constitute a large portion of the biomass and productivity of the system. The dominance of picoplankton in oligotrophic systems (Callieri, and Stockner 2002) can also be explained by their high affinity for orthophosphate (Moutin et al. 2002) and their high maximum cell-specific P-uptake rates.
[Tab. 4. Spearman rank correlation coefficients between bacterial volume (µm3 cell−1) and the four physicochemical parameters considered in Lake Maggiore and Lake Alchichica.]
In Lake Alchichica, during mixing and early stratification, the concentrations of dissolved inorganic nitrogen (DIN) and soluble reactive phosphorus (SRP) [...] ( et al. 2010). In both lakes, the SRP concentration in the hypolimnion is significantly higher than in the epilimnion, as is DIN. The significant correlations between cell volume and both SRP and DIN found in Lake Maggiore, and with SRP in Lake Alchichica, indicate that in both lakes limiting phosphorus can favor the smaller cells present in the epilimnion, with their higher surface/volume ratio, whereas the effect of nitrogen remains controversial. Conversely, in the hypolimnion, in the absence of algal competition and nutrient limitation, the necessity to be small is not so compelling.
Therefore, we might suggest that bacteria in the epilimnion are smaller than those in the deeper layer in order to outcompete larger bacteria or even algae. Indeed, in nutrient-limited food webs, the primary producers and the heterotrophic bacteria act as competitors for the limiting nutrient (Olsen et al. 2002), with the latter outcompeting the former (Vadstein 2000;Cotner, and Biddanda 2002). The W/L ratio of bacteria in Lake Maggiore is around 3 times higher than in Lake Alchichica (Tab. 3). In exponentially growing batch cultures, it has been found that the W/L ratio increased under C and P limitation, while at very low W/L ratios the cells were N-limited (Vrede et al. 2002). According to the W/L ratios found in the two lakes, we infer that bacterial growth in Lake Maggiore is limited by P and in Lake Alchichica by N. Indeed, primary production is also P-limited in temperate lakes, while in tropical lakes it is N-limited (Lewis 2002, 2010) or co-limited by N and P (Hernández-Avilés et al. 2001). In Lake Alchichica, an alternation of nitrogen, phosphorus or both nutrients as factors limiting phytoplankton biomass has been found experimentally (Ramírez-Olvera et al. 2009). Our hypothesis that the vertical distribution of bacterial size in Lake Maggiore and Lake Alchichica responds to the availability of nutrients in the different layers is in accordance with the results obtained on the vertical variation of bacterial nucleic acid contents in the warm monomictic Lake Biwa (Nishimura et al. 2005). In this lake, an increase in the percentage of HNA and a decrease in LNA bacteria with depth was found during the stratification period. Moreover, the authors found a strong positive correlation between HNA and the dissolved inorganic phosphorus concentration. They concluded that LNA bacteria outcompete HNA and become an important component of the microbial loop in P-limited environments. The presence of smaller bacteria in the superficial layers might also be related to the presence of labile compounds produced by algae. If the hypothesis formulated by Zubkov et al. (2004) is verified, the smaller LNA bacteria not only win the competition for P with algae, but also take advantage directly of their competitors by using the substances they excrete. In conclusion, without excluding the important role of top-down control in the size composition of the bacterial community, we found that bottom-up control can affect bacterial size on a seasonal scale in oligotrophic temperate and tropical lakes.
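As a numerical aside, the geometric model used for the biovolume estimates and the surface-to-volume argument invoked above can be sketched as follows; the example cell dimensions are hypothetical, chosen only to illustrate the contrast, not measured values from either lake:

```python
# Sketch of the geometric model (bacterium = cylinder with two hemispherical
# caps; Posch et al. 1997) and of the surface-to-volume argument.
# Example dimensions are hypothetical, not measured values.
import math

def biovolume(L: float, W: float) -> float:
    """Cell volume (um^3) for length L and width W (um), with L >= W."""
    return math.pi / 4.0 * W**2 * (L - W) + math.pi / 6.0 * W**3

def surface_area(L: float, W: float) -> float:
    """Cell surface area (um^2): cylinder side plus two hemispherical caps."""
    return math.pi * W * (L - W) + math.pi * W**2

for label, L, W in [("small (epilimnion-like)", 0.6, 0.3),
                    ("large (hypolimnion-like)", 1.0, 0.4)]:
    v, a = biovolume(L, W), surface_area(L, W)
    print(f"{label}: V = {v:.3f} um^3, S/V = {a / v:.1f} um^-1")
# The smaller cell has the higher S/V ratio, i.e. the uptake advantage
# invoked above for nutrient-limited epilimnetic waters.
```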
Generalized models of unification of dark matter and dark energy
A model of unification of dark matter and dark energy based on the modeling of the speed of sound as a function of the parameter of the equation of state is introduced. It is found that the model in which the speed of sound depends on a power of the parameter of the equation of state, $c_s^2=\alpha (-w)^{\gamma}$, contains the generalized Chaplygin gas models as a subclass. An effective scalar field description of the model is obtained in a parametric form, which in some cases can be translated into a closed-form solution for the scalar field potential. A constraint on the model parameters is obtained using observational data on the Hubble parameter at different redshifts.
Introduction
The expansion of the universe is one of the most fascinating phenomena that science has encountered so far. It has served as a rich source of information on the nature and the composition of the universe. One of the recently established astonishing features of cosmic expansion is that it is currently undergoing a phase of acceleration [1,2,3,4]. The source of this acceleration has not yet been unambiguously identified, although many proposals for its nature have been put forward (see [5,6] and references therein). The existence of dark energy, a mysterious component with negative pressure, is still the most serious candidate. The other dark component of the universe, dark matter, seems to leave its imprint on astrophysical and cosmological scales, ranging from galaxies to galactic clusters and the large-scale structure of the universe. The idea that both dark matter and dark energy are actually manifestations of a single dark component is both natural and appealing. It appeared early in the literature, and its most acclaimed representative is probably the Chaplygin gas [7] as a model of unification of dark matter and dark energy [8]. The class of unifying DM-DE models is often referred to as quartessence [9,10]. This phenomenologically introduced model can be motivated from string theory [8,11,12]. Its most commonly studied extension, the generalized Chaplygin gas model, was first introduced in [13]. The agreement of the Chaplygin gas and its extensions with observations has been extensively tested, including analyses with the supernova Ia data [14,15], CMB [16], observable Hubble parameter data [17,18] and large-scale structure observations [19,20,21,22,23], including the nonlinear evolution in structure formation [24] and gravastar formation [25]. Different data sets can be combined to produce tighter parameter constraints, as in [26,27,28,29]. Strong constraints on the generalized Chaplygin gas have been obtained that question its viability as a cosmological model distinguishable from the ΛCDM model. In order to better accommodate observational constraints, various unified models based on the Chaplygin gas have been proposed, such as the modified Chaplygin gas model [30,31], recently reviewed and constrained in [32,33,34], or the hybrid Chaplygin gas leading to transient acceleration [35]. The fact that a perfect fluid model can be fully described by specifying its speed of sound has been used in [36]. The idea of DM-DE unification with non-canonical scalar fields has been recently studied in [37,38]. An interesting model called Dusty Dark Energy, recently introduced in [39], achieves DM-DE unification in the formalism of the λϕ-fluid, resulting in zero speed of sound and one scalar degree of freedom.
Other approaches to models of DM-DE unification that avoid the speed of sound problem are purely kinetic k-essence models [40,41,42] and tachyon models [43,44]. The structure of the paper is the following. After the introduction presented in this section, in the second section a general class of barotropic fluid models defined by the function c_s^2 is discussed. In the third section the model with a constant speed of sound is studied, and in the fourth section the principal model of the paper, defined by c_s^2(w) = α(−w)^γ, is introduced. The fifth section is focused on an effective representation of the model in terms of a minimally coupled scalar field. In the sixth section the comparison of the model predictions against the observational data on the Hubble parameter at different redshifts is made and the constraints on the model parameters are presented. The seventh section closes the paper with the discussion and conclusions. The Appendix outlines an approach in which the solution with a piecewise constant speed of sound is used as an approximation of the dynamics of a fluid with a general dependence of c_s^2 on w.
The model
The equation of state of a barotropic cosmic fluid can in general be written as an implicitly defined relation between the fluid pressure p and its energy density ρ,

F(ρ, p) = 0 . (1)

The parameter of the equation of state w = p/ρ can be used to substitute the pressure, p = wρ, so that the relation (1) becomes G(ρ, w) = 0, with G(ρ, w) = F(ρ, p). This relation implies that ρ and p can be considered as functions of w, i.e. ρ = ρ(w) and p = p(w) = wρ(w). Strictly speaking, the inversion of the expression G(ρ, w) = 0 may result in several solutions for ρ(w) (and p(w)); in particular, for some value of w there could be several values ρ(w). The considerations presented below apply to each of these individual solutions. The speed of sound waves in the barotropic cosmic fluid is defined as

c_s^2 = dp/dρ . (2)

From (1) it follows that

(∂F/∂ρ) dρ + (∂F/∂p) dp = 0 , (3)

which leads to

c_s^2 = −(∂F/∂ρ)/(∂F/∂p) . (4)

Inserting the relation p = wρ into (3) and using (4), one readily obtains

dρ/ρ = dw/(c_s^2 − w) . (5)

Combining this expression with the continuity equation for the fluid,

dρ + 3(ρ + p) da/a = 0 , (6)

one finally obtains

dw/dz = 3(1 + w)(c_s^2 − w)/(1 + z) . (7)

As p and ρ are functions of w, the expression for the speed of sound can be written as

c_s^2 = (dp/dw)/(dρ/dw) , (8)

i.e. c_s^2 = c_s^2(w). Therefore, the expression (7) is a dynamical law for the EOS parameter w. The only unknown part in (7) is the functional form of c_s^2(w). By its very definition there are some constraints on it, such as that it should be non-negative and smaller than the speed of light squared, c^2. Modeling the cosmic fluid unifying dark matter and dark energy by modeling c_s^2 as a function of w may, therefore, be more suitable, since the requirements on c_s^2 can be immediately built in. Furthermore, the properties of the solutions of (7) depend on the zeros of the function c_s^2(w) − w. In particular, the dynamics of w(z) is confined to intervals determined by the zeros of c_s^2(w) − w and by w = −1. For some models with a variable speed of sound see [45,46,47,48]. In the following two sections we discuss specific choices for c_s^2 and their relation to models previously studied in the literature.
Constant speed of sound
The simplest possibility is a constant speed of sound. This possibility was recently studied in [36], see also [49,50,51,52]. A direct inspection of (7) shows that for the case of a constant speed of sound, w is confined to one of the intervals (−∞, −1), (−1, c_s^2) and (c_s^2, ∞).
For $c_s^2 = \mathrm{const}$, the parameter of EOS evolves with the scale factor as

$$\frac{1+w}{c_s^2 - w} = \frac{1+w_0}{c_s^2 - w_0}\, a^{-3\left(1+c_s^2\right)} , \qquad (9)$$

with $w_0 = w(z=0)$. From this relation it immediately follows

$$\rho(w) = \rho_0\,\frac{c_s^2 - w_0}{c_s^2 - w} \qquad (10)$$

and

$$p(w) = w\,\rho_0\,\frac{c_s^2 - w_0}{c_s^2 - w} . \qquad (11)$$

Finally we obtain

$$\rho(z) = \frac{\rho_0}{1 + c_s^2}\left[(1+w_0)\,(1+z)^{3\left(1+c_s^2\right)} + c_s^2 - w_0\right] . \qquad (12)$$

For $-1 < w_0 < c_s^2$, the energy density remains positive throughout the evolution of the universe. With the cosmic expansion, the pressure evolves from positive to negative values. In particular, the pressure changes sign at $1+z = \left[(c_s^2 - w_0)/(c_s^2\,(1+w_0))\right]^{1/(3(1+c_s^2))}$. For $w_0 < -1$ the pressure is negative throughout the evolution of the universe, $p < 0$. The energy density evolves from negative to positive values with the expansion, and it changes sign at $1+z = \left[(w_0 - c_s^2)/(1+w_0)\right]^{1/(3(1+c_s^2))}$. For $w_0 > c_s^2$, the pressure is always positive, whereas the energy density evolves from positive to negative values with the expansion, with a change of sign at the same value of $1+z$.

The solution for $c_s^2 = \mathrm{const}$ can be used as a building block for approximating general functional forms of $c_s^2(w)$. The discussion of a piecewise constant approximation of a general redshift-dependent function $c_s^2$ is given in the Appendix.

Power law dependence: $c_s^2 = \alpha(-w)^{\gamma}$

The principal model of the paper is defined by the parametrization

$$c_s^2(w) = \alpha\,(-w)^{\gamma} . \qquad (13)$$

For this more general parametrization, the equation (7) in general needs to be solved numerically to obtain $w = w(z)$. An example of such a numerical solution is presented in Fig. 1. The redshift dependence of the speed of sound is presented in Fig. 2. We are primarily interested in solutions in which $w$ is confined to the interval $(-1, 0)$; the parametrization (13) is properly defined for $w < 0$. For $\gamma > 1$ and $\alpha > 0$ the dynamics of $w$ is confined to one of the intervals $(-\infty, -1)$, $(-1, 0)$, $(0, \infty)$. For $\gamma > 1$ and $\alpha < 0$, Eq. (7) can be written as

$$\frac{dw}{(1+w)\,(-w)\left[1 - \left(w/w_*\right)^{\gamma-1}\right]} = -3\,\frac{da}{a} , \qquad (14)$$

where $w_* = -(-1/\alpha)^{1/(\gamma-1)}$ is the additional zero of the denominator in (14). For $\alpha < -1$, $w_*$ falls in the interval $(-1, 0)$, which is of primary interest in this paper. Depending on the relation of $w_0$ and $w_*$, there are two distinct cases. For $w_0 > w_*$, with the expansion $w(z)$ evolves from $w = 0$ at early times towards $w_*$, which it reaches asymptotically in the infinite future. For $w_0 < w_*$ the evolution starts from $w = -1$ and asymptotically approaches $w_*$. In the latter case the model cannot serve as a model of unification of dark matter and dark energy, but it could serve as a model of dark energy only.

For $\gamma = 1$, one readily obtains

$$\frac{1+w}{-w} = \frac{1+w_0}{-w_0}\,(1+z)^{3(1+\alpha)} , \qquad (15)$$

i.e.

$$w(z) = \frac{w_0}{(1+w_0)\,(1+z)^{3(1+\alpha)} - w_0} . \qquad (16)$$

For $\gamma = 1$, the EOS becomes

$$p = -A\,\rho^{-\alpha} , \qquad (17)$$

with $A$ a positive constant. The result (17) reveals that for $\gamma = 1$, the parametrization (13) is equivalent to the generalized Chaplygin gas, whereas for $\gamma = 1$, $\alpha = 1$ the EOS of the Chaplygin gas is reproduced. Therefore, (13) represents a broad new class of unification models which contains the Chaplygin gas as its special case and the generalized Chaplygin gas as its subclass. The study of the model (13) could potentially lead to a deeper understanding of the strong constraints appearing in the comparison of the Chaplygin gas and the generalized Chaplygin gas with the observational data.

For $\gamma = 2$ and $\alpha \neq -1$, Eq. (7) can be solved analytically and presented in a closed form:

$$\left(\frac{1+w}{1+w_0}\right)^{\frac{1}{1+\alpha}} \frac{w_0}{w}\left(\frac{1-\alpha w}{1-\alpha w_0}\right)^{\frac{\alpha}{1+\alpha}} = (1+z)^{3} . \qquad (18)$$

Furthermore, for the special case $\alpha = 1$, explicit expressions for the evolution can be obtained. The function $w(z)$ then yields

$$w(z) = -\left[1 + \frac{1-w_0^2}{w_0^2}\,(1+z)^{6}\right]^{-1/2} , \qquad (19)$$

whereas the energy density and pressure read

$$\rho(z) = \rho_0\,\frac{w_0}{w(z)}\,\frac{1-w(z)}{1-w_0} \qquad (20)$$

and

$$p(z) = w(z)\,\rho(z) , \qquad (21)$$

respectively. Finally, for $\alpha = -1/2$ Eq. (18) can also be presented as an explicit expression for $w(z)$:

$$w(z) = -1 + \left[\frac{C\,(1+z)^{3}}{C\,(1+z)^{3} + 2}\right]^{1/2} , \qquad C = \frac{2\,(1+w_0)^2}{1-(1+w_0)^2} . \qquad (22)$$

The energy density and pressure are given by the expressions

$$\rho(z) = \rho_0\,\frac{w_0}{w(z)}\,\frac{2+w(z)}{2+w_0} \qquad (23)$$

and

$$p(z) = w(z)\,\rho(z) . \qquad (24)$$

Once the expressions for $\rho(z)$, $p(z)$ and $w(z)$ are available, the expression for $H(z)$ can be readily obtained. A straightforward integration would then yield $a(t)$, the scale factor as a function of the cosmic time, which represents the full dynamical information.
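As a consistency check of the closed-form results quoted above, the following sketch (illustrative Python; the expression for $w(z)$ is the form of (19) as reconstructed here) compares the explicit $\gamma = 2$, $\alpha = 1$ solution with a direct numerical integration of (7):

import numpy as np
from scipy.integrate import solve_ivp

w0 = -0.8
k = (1.0 - w0**2) / w0**2

def w_analytic(z):
    # Explicit solution (19) for gamma = 2, alpha = 1 (as reconstructed above)
    return -1.0 / np.sqrt(1.0 + k * (1.0 + z) ** 6)

def dw_dz(z, w):
    cs2 = (-w[0]) ** 2                     # c_s^2 = alpha (-w)^gamma with alpha = 1, gamma = 2
    return [3.0 * (1.0 + w[0]) * (cs2 - w[0]) / (1.0 + z)]

sol = solve_ivp(dw_dz, (0.0, 5.0), [w0], dense_output=True,
                rtol=1e-10, atol=1e-12)
z = np.linspace(0.0, 5.0, 51)
print(np.max(np.abs(sol.sol(z)[0] - w_analytic(z))))   # agreement at integrator accuracy

The two curves agree to the accuracy of the integrator, which supports the closed form (18) and its explicit specializations.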
As the expressions for the dynamical quantities in terms of redshift are sufficient for the description of the transition between the DM regime and the DE regime, and for the comparison with the observational data, we do not present numerical solutions for $a(t)$.

Scalar field representation

If the cosmic fluid with the speed of sound (13) is the dominant component in the universe, in a spatially flat FLRW universe the Hubble parameter is

$$H^2 = \frac{8\pi G}{3}\,\rho . \qquad (25)$$

In this section we focus on finding an effective description of the cosmic dynamics with the cosmic fluid defined by (13) in terms of a minimally coupled scalar field $\phi$ with a potential $V(\phi)$. For an effective representation of the cosmic fluid in terms of the scalar field, the expressions for $\rho$ and $p$ are

$$\rho = \frac{1}{2}\,\dot{\phi}^2 + V(\phi) \qquad (26)$$

and

$$p = \frac{1}{2}\,\dot{\phi}^2 - V(\phi) . \qquad (27)$$

The time derivative of the scalar field can be expressed from (26) and (27) as $\dot{\phi}^2 = \rho + p = (1+w)\,\rho(w)$. On the other hand,

$$\dot{\phi} = \frac{d\phi}{dw}\,\dot{w} = -3\,\frac{d\phi}{dw}\,(1+w)\left(c_s^2(w) - w\right) H , \qquad (28)$$

where (7) has been used. Combining these two expressions for $\dot{\phi}$ and using (25), the following equation is obtained:

$$\sqrt{\frac{8\pi G}{3}}\,\frac{d\phi}{dw} = \pm\,\frac{1}{3\,\sqrt{1+w}\,\left(c_s^2(w) - w\right)} . \qquad (29)$$

Integration of this equation leads to $\phi = \phi(w)$. In a similar way, combining (26) and (27), the scalar field potential is

$$V = \frac{\rho - p}{2} = \frac{1}{2}\,(1-w)\,\rho(w) . \qquad (30)$$

Therefore, assuming that (29) is analytically integrable, the scalar field representation is available in the parametric form $\phi = \phi(w)$ and $V(\phi) = V(w)$. Even in the case when it is not possible to analytically integrate (29), Eqs. (29) and (30) provide a direct procedure (including numerical integration for each value of $w$) for the parametrically defined pair of quantities $\phi(w)$, $V(w)$. In the remainder of this section we consider an analytical scalar field reconstruction for several specific examples and use the abbreviation $\varphi = \sqrt{\frac{8\pi G}{3}}\,\phi$.

$c_s^2 = \alpha$

For the case of a constant sound speed with $\alpha > -1$, the equations (10) and (30) yield

$$V(w) = \frac{\rho_0}{2}\,\frac{(1-w)\,(\alpha - w_0)}{\alpha - w} . \qquad (31)$$

On the other hand, Eq. (29) is readily integrated to

$$\varphi = -\frac{2}{3\,\sqrt{1+\alpha}}\,\mathrm{artanh}\,\sqrt{\frac{1+w}{1+\alpha}} . \qquad (32)$$

The parameter of EOS is then

$$w = -1 + (1+\alpha)\,T^2 , \qquad (33)$$

where

$$T = \tanh\left(\frac{3\,\sqrt{1+\alpha}}{2}\,\varphi\right) . \qquad (34)$$

Combining (31), (33) and (34), the scalar field potential is

$$V(\varphi) = \frac{\rho_0\,(\alpha - w_0)}{2\,(1+\alpha)}\left[(1-\alpha)\,\cosh^2\left(\frac{3\,\sqrt{1+\alpha}}{2}\,\varphi\right) + 1 + \alpha\right] . \qquad (35)$$

$c_s^2 = -\alpha w$

In this subsection we consider the sound speed linearly dependent on the EOS parameter. As already shown in section 4, such a dependence is characteristic of the generalized Chaplygin gas. Here we reconstruct the scalar field potential for the generalized Chaplygin gas using the method of the parametric definition of $\varphi$ and $V(\varphi)$ in terms of $w$. The integration of (29) gives

$$\varphi = -\frac{2}{3\,(1+\alpha)}\,\mathrm{artanh}\,\sqrt{1+w} , \qquad (36)$$

which results in

$$w = -\frac{1}{\cosh^2 x} , \qquad (37)$$

where

$$x = \frac{3\,(1+\alpha)}{2}\,\varphi . \qquad (38)$$

The scalar field potential is

$$V(w) = \frac{\rho_0}{2}\,(1-w)\left(\frac{w_0}{w}\right)^{\frac{1}{1+\alpha}} . \qquad (39)$$

The combination of (37), (38) and (39) results in the potential for the generalized Chaplygin model,

$$V(\varphi) = \frac{\rho_0}{2}\,(-w_0)^{\frac{1}{1+\alpha}}\left[\left(\cosh x\right)^{\frac{2}{1+\alpha}} + \left(\cosh x\right)^{\frac{2}{1+\alpha}-2}\right] . \qquad (40)$$

Specifying $\alpha = 1$ we obtain the potential for the Chaplygin gas,

$$V(\varphi) = \frac{\rho_0\,\sqrt{-w_0}}{2}\left(\cosh x + \frac{1}{\cosh x}\right) . \qquad (41)$$

Numerical analysis and the comparison with the observational data

In this section we perform a numerical analysis in order to find constraints on the model parameters from the comparison with the observational data. We use the measurements of the Hubble parameter at different redshifts obtained from passively evolving galaxies [53] and baryon acoustic oscillations (BAO). The principal idea of the former measurement technique is that for a pair of passively evolving galaxies at close redshifts, the differential of their ages can be determined; then from the expression $H(z) = -\frac{1}{1+z}\,\frac{dz}{dt}$ the value of $H(z)$ can be calculated. This method has recently been successfully used to constrain the parameters of the generalized Chaplygin gas [18]. To constrain the model parameters $\alpha$ and $\gamma$, we use the $H(z)$ data reported in [54]. We restrict our analysis to the spatially flat Robertson-Walker metric, motivated by the inflationary expansion and the data on CMB anisotropies [55].
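The parametric construction of this section translates directly into a numerical recipe whenever (29) is not analytically integrable. The following sketch (illustrative Python, in units $\rho_0 = 1$ and in terms of the dimensionless field $\varphi$, taking one sign branch of (29)) performs the quadratures behind (29) and (30) for the power-law model; the parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import quad

alpha, gamma, w0 = 0.5, 3.0, -0.8          # illustrative assumptions

def cs2(w):
    return alpha * (-w) ** gamma

# d(varphi)/dw from (29) (plus branch) and d(ln rho)/dw from (5)
dphi   = lambda w: 1.0 / (3.0 * np.sqrt(1.0 + w) * (cs2(w) - w))
dlnrho = lambda w: 1.0 / (cs2(w) - w)

ws  = np.linspace(w0, -0.05, 40)           # stay inside the interval (-1, 0)
phi = np.array([quad(dphi, w0, w)[0] for w in ws])
rho = np.exp([quad(dlnrho, w0, w)[0] for w in ws])
V   = 0.5 * (1.0 - ws) * rho               # the potential (30), V = (1-w) rho / 2

# The parametric pair (phi(w), V(w)) traces out the effective potential
for p, v in list(zip(phi, V))[::13]:
    print(f"varphi = {p: .4f}   V/rho_0 = {v: .4f}")

The output is the parametrically defined pair $\varphi(w)$, $V(w)$; eliminating $w$ numerically between the two columns gives $V(\varphi)$.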
Furthermore, we fix the share of baryons in the present energy density to $\Omega_b^0 = 0.0458$ [55], in accordance with primordial nucleosynthesis requirements. The expression for the Hubble parameter then becomes

$$H^2 = \frac{8\pi G}{3}\left(\rho_b + \rho\right) ,$$

where $\rho_b$ denotes the energy density of baryons, evolving with redshift as $\rho_b(z) = \rho_{b,0}\,(1+z)^3$, and $\rho$ denotes the energy density of the unified DM-DE component, evolving with redshift as $\rho(z) = \rho_0\, f(z)$, where the continuity equation (6) gives

$$f(z) = \exp\left[3\int_0^z \frac{1+w(z')}{1+z'}\,dz'\right] .$$

The expression for the Hubble parameter then becomes

$$H(z) = H_0\,\sqrt{\Omega_b^0\,(1+z)^3 + \left(1-\Omega_b^0\right) f(z)} ,$$

where $H_0 = 100\,h\;\mathrm{km\,s^{-1}\,Mpc^{-1}}$ is the present value of the Hubble parameter. The functions $H(z)/H_0$ for different values of $(\alpha, \gamma)$, along with the observational data, are presented in Fig. 4. The $\chi^2$ function is calculated according to the expression

$$\chi^2(\alpha, \gamma, h) = \sum_i \frac{\left[H(z_i; \alpha, \gamma, h) - H_{\mathrm{obs}}(z_i)\right]^2}{\sigma_i^2} , \qquad (47)$$

with the observational data for $H(z)$ taken from [54]. Finally, from (47) the probability is calculated as

$$P = A\,e^{-\chi^2/2} ,$$

where the symbol $A$ denotes the normalization. Although the theoretically preferred region for $\alpha$ is $(0, 1)$, we run our model for a broader range of parameters, $-5 < \alpha < 5$ and $1 < \gamma < 10$. While we marginalize over the parameter $h$, we select $w_0 = -\Omega_{\Lambda}^0/(1-\Omega_b^0)$ (where $\Omega_{\Lambda}^0$ and $\Omega_b^0$ refer to the fractions of the cosmological constant and the baryonic matter in the ΛCDM model [55]) to obtain constraints on the parameters $(\alpha, \gamma)$ which produce the present total EOS consistent with the value of the ΛCDM model. The (68.3%, 95.4%, 99.7%) contours of the marginalized probability density are given in Fig. 5. They are calculated as the curves for which $\Delta\chi^2 = (2.30, 6.17, 11.8)$, where $\Delta\chi^2$ denotes the difference between $\chi^2$ at a given point and its minimal value [56,57]. From the figure it is clear that the allowed interval of $\alpha$ grows with the increase of $\gamma$. The contour plot for the (68.3%, 95.4%, 99.7%) probability intervals when no marginalization over $h$ is performed and the value of the Hubble parameter is taken from [55] to be $h = 0.702$ is presented in Fig. 6. From Figs. 5 and 6 it is evident that the probability contours for the $(\alpha, \gamma)$ parameters with and without marginalization are quite similar.
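For orientation, the structure of the $\chi^2$ analysis described above can be sketched as follows. The (z, H, sigma) triplets below are placeholders and not the data of [54], and the dust-like f(z) merely stands in for the numerically computed history of the unified component; only the shape of the computation, the $\chi^2$ sum (47) and the marginalization over $h$, is meant to be illustrated.

import numpy as np

# PLACEHOLDER data triplets (z_i, H_i, sigma_i); NOT the values of [54]
data = [(0.1, 69.0, 12.0), (0.4, 95.0, 17.0), (0.9, 117.0, 23.0)]

Ob0 = 0.0458                               # baryon share used in the text

def H_model(z, h, f_of_z):
    # Flat-space Hubble rate: H = H0 sqrt(Ob0 (1+z)^3 + (1-Ob0) f(z))
    return 100.0 * h * np.sqrt(Ob0 * (1.0 + z) ** 3 + (1.0 - Ob0) * f_of_z(z))

def chi2(h, f_of_z):
    # The chi^2 sum of (47) for a fixed dark-component history f(z)
    return sum(((H_model(z, h, f_of_z) - H) / s) ** 2 for z, H, s in data)

# Placeholder history; the real f(z) follows from solving (7) for given (alpha, gamma)
f_dust = lambda z: (1.0 + z) ** 3

# Marginalization over h: integrate exp(-chi^2/2) over h (trapezoidal rule)
hs = np.linspace(0.60, 0.80, 41)
p  = np.array([np.exp(-0.5 * chi2(h, f_dust)) for h in hs])
print(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(hs)))

Repeating the marginalized evaluation on a grid of $(\alpha, \gamma)$ values, with $f(z)$ recomputed at each grid point, reproduces the structure behind the contour plots of Figs. 5 and 6.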
It could therefore serve as a wider framework for the analysis of how much these models need to be extended to satisfy the constraints from the observational data. From the modeling perspective, the crucial element of our model is the relation between the speed of sound, $c_s^2$, and the parameter of the EOS, $w$. This relation connects the quantity governing the growth of inhomogeneities with the quantity determining the global evolution of the universe. This feature might allow an easier and more direct transformation of the phenomenological knowledge acquired from the data into workable models of the dark sector. In particular, an adaptive model assuming piecewise constant values of the speed of sound in consecutive redshift intervals is presented in the Appendix. A particular challenge for future research is finding a microscopic explanation of the dependence $c_s^2(w)$. Here the corresponding microscopic models for the Chaplygin gas might serve as a good starting point. In particular, the method of representing the evolution in terms of minimally coupled scalar fields, presented in section 5, could provide useful information on such microscopic models.

Appendix

In this Appendix the solution with a constant speed of sound is used as a building block for approximating the dynamics of a fluid with a general dependence of $c_s^2$ on $w$. The redshift range is divided into consecutive intervals $(z_i, z_{i+1})$, and within each interval the speed of sound is approximated by one of the constant sound speed values $c_{s,i}^2$. An example of this simple but powerful parametrization was given in [60] for the case of the $w(z)$ function. The equations (9), (10) and (11) then hold within each interval, with $w_i$ and $\rho_i$ playing the role of initial conditions set at the interval boundary $z_i$. The relations between the parameters $w_i$ and $\rho_i$ in neighboring intervals are given by the continuity relations

$$\frac{1+w_{i+1}}{c_{s,i}^2 - w_{i+1}} = \frac{1+w_i}{c_{s,i}^2 - w_i}\left(\frac{1+z_{i+1}}{1+z_i}\right)^{3\left(1+c_{s,i}^2\right)}$$

and

$$\rho_{i+1} = \rho_i\,\frac{c_{s,i}^2 - w_i}{c_{s,i}^2 - w_{i+1}} .$$

The resulting expression (50) for the expansion history can be used, e.g., for the comparison of the model with a piecewise constant speed of sound against the SN Ia data.
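A minimal sketch of the piecewise construction follows, assuming the interval solution in the form reconstructed above (equations (9) and (10) restarted at each boundary); the interval edges and the values $c_{s,i}^2$ are illustrative assumptions.

import numpy as np

edges    = [0.0, 0.5, 1.0, 2.0]            # interval boundaries z_i (assumed)
cs2_vals = [0.00, 0.02, 0.05]              # constant c_s^2 in each interval (assumed)

def step(z, zi, wi, rhoi, c2):
    # Constant-c_s^2 evolution inside one interval, started at (zi, wi, rhoi);
    # this is (9) and (10) with the reference point moved from z = 0 to z = zi.
    B = (1.0 + wi) / (c2 - wi) * ((1.0 + z) / (1.0 + zi)) ** (3.0 * (1.0 + c2))
    w = (c2 * B - 1.0) / (1.0 + B)
    rho = rhoi * (c2 - wi) * (1.0 + B) / (1.0 + c2)
    return w, rho

w, rho = -0.75, 1.0                        # present-day values, rho in units of rho_0
for zi, zf, c2 in zip(edges[:-1], edges[1:], cs2_vals):
    w, rho = step(zf, zi, w, rho, c2)      # propagate to the next boundary
    print(f"z = {zf:4.1f}   w = {w: .4f}   rho/rho_0 = {rho: .4f}")

Each boundary value then serves as the initial condition for the next interval, which is precisely the matching expressed by the continuity relations above.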